Thursday, 2016-04-07

*** spotz_zzz is now known as spotz00:00
*** thorst has quit IRC00:02
openstackgerritNeill Cox proposed openstack/openstack-ansible-ironic: Add tests for the ironic REST API  https://review.openstack.org/29865400:24
*** thorst has joined #openstack-ansible00:25
*** johnmilton has joined #openstack-ansible00:25
*** schwicht has joined #openstack-ansible00:29
*** asettle has quit IRC00:32
*** schwicht_ has joined #openstack-ansible00:34
*** schwicht has quit IRC00:35
*** busterswt has quit IRC00:44
*** thorst has quit IRC00:45
*** thorst has joined #openstack-ansible00:46
*** keedya has quit IRC00:46
*** saneax is now known as saneax_AFK00:50
*** busterswt has joined #openstack-ansible00:52
*** thorst has quit IRC00:54
*** iceyao has joined #openstack-ansible01:03
*** rramos has quit IRC01:09
*** iceyao_ has joined #openstack-ansible01:09
*** iceyao has quit IRC01:13
*** iceyao_ has quit IRC01:13
*** iceyao has joined #openstack-ansible01:13
*** iceyao has quit IRC01:18
*** iceyao has joined #openstack-ansible01:18
*** asettle has joined #openstack-ansible01:22
*** keedya has joined #openstack-ansible01:24
*** asettle has quit IRC01:28
*** sdake has quit IRC01:28
*** sdake has joined #openstack-ansible01:32
*** busterswt has quit IRC01:35
*** weezS has joined #openstack-ansible01:39
*** coolj has quit IRC01:43
*** coolj has joined #openstack-ansible01:43
*** thorst has joined #openstack-ansible01:51
*** busterswt has joined #openstack-ansible01:54
*** thorst has quit IRC01:59
*** weezS has quit IRC02:11
*** sdake_ has joined #openstack-ansible02:20
*** sdake has quit IRC02:22
*** asettle has joined #openstack-ansible02:29
*** deadnull has quit IRC02:35
*** retreved has quit IRC02:46
*** spotz is now known as spotz_zzz02:54
*** thorst has joined #openstack-ansible02:56
*** openstackstatus has quit IRC03:01
*** thorst has quit IRC03:04
*** sdake_ has quit IRC03:06
*** sdake has joined #openstack-ansible03:07
*** iceyao has quit IRC03:09
*** iceyao has joined #openstack-ansible03:09
*** sdake has quit IRC03:17
*** sdake has joined #openstack-ansible03:18
*** v1k0d3n has joined #openstack-ansible03:19
*** joker_ has quit IRC03:20
*** joker_ has joined #openstack-ansible03:20
*** prometheanfire has quit IRC03:22
*** prometheanfire has joined #openstack-ansible03:23
*** iceyao has quit IRC03:23
*** iceyao has joined #openstack-ansible03:23
*** jmccrory has quit IRC03:26
*** jmccrory has joined #openstack-ansible03:26
*** sdake has quit IRC03:28
*** iceyao has quit IRC03:31
*** iceyao has joined #openstack-ansible03:32
*** keedya has quit IRC03:47
*** schwicht_ has quit IRC03:48
*** v1k0d3n has quit IRC03:51
*** sekrit has quit IRC03:51
*** johnmilton has quit IRC03:54
*** thorst has joined #openstack-ansible04:01
*** rramos has joined #openstack-ansible04:02
*** busterswt has quit IRC04:02
*** sekrit has joined #openstack-ansible04:05
*** thorst has quit IRC04:09
openstackgerritMerged openstack/openstack-ansible-os_tempest: Fix broken installation task  https://review.openstack.org/30153304:11
*** asettle has quit IRC04:17
*** asettle has joined #openstack-ansible04:18
*** rramos has quit IRC04:24
*** javeriak has joined #openstack-ansible04:25
*** persia has quit IRC04:31
*** persia has joined #openstack-ansible04:33
*** rramos has joined #openstack-ansible04:37
*** jayc has joined #openstack-ansible04:44
*** shausy has joined #openstack-ansible04:55
*** admin0 has joined #openstack-ansible04:56
*** nhadie has joined #openstack-ansible04:58
*** rramos has quit IRC05:01
*** asettle has quit IRC05:02
*** asettle has joined #openstack-ansible05:03
*** nhadzter has joined #openstack-ansible05:03
*** nhadie has quit IRC05:06
*** thorst has joined #openstack-ansible05:06
*** admin0 has quit IRC05:08
*** nhadzter has quit IRC05:09
*** javeriak has quit IRC05:12
*** thorst has quit IRC05:13
*** nhadzter has joined #openstack-ansible05:27
*** shausy has quit IRC05:35
*** shausy has joined #openstack-ansible05:36
*** iceyao has quit IRC05:38
*** iceyao has joined #openstack-ansible05:38
*** iceyao has quit IRC05:39
*** iceyao has joined #openstack-ansible05:39
*** joker_ has quit IRC05:39
*** javeriak has joined #openstack-ansible05:39
*** iceyao has quit IRC05:40
*** iceyao_ has joined #openstack-ansible05:40
*** admin0 has joined #openstack-ansible05:43
*** zhangjn has quit IRC05:45
*** zhangjn has joined #openstack-ansible05:46
openstackgerritSashi Dahal proposed openstack/openstack-ansible: ceph configuration for glance and nova  https://review.openstack.org/30192205:46
*** admin0 has quit IRC05:51
*** saneax_AFK is now known as saneax05:57
*** nhadzter has quit IRC06:09
*** mikelk has joined #openstack-ansible06:10
*** thorst has joined #openstack-ansible06:11
*** thorst has quit IRC06:18
*** asettle has quit IRC06:20
*** pcaruana has joined #openstack-ansible06:26
*** asettle has joined #openstack-ansible06:33
*** asettle has quit IRC06:37
*** unlaudable has joined #openstack-ansible06:44
*** unlaudable has quit IRC06:44
openstackgerritMatt Thompson proposed openstack/openstack-ansible-os_nova: [WIP] Use tempest for testing  https://review.openstack.org/30156606:46
*** javeriak has quit IRC06:56
*** neilus has joined #openstack-ansible06:57
*** neilus has quit IRC07:01
*** nhadzter has joined #openstack-ansible07:04
*** admin0 has joined #openstack-ansible07:10
*** admin0 has quit IRC07:11
*** javeriak has joined #openstack-ansible07:14
*** thorst has joined #openstack-ansible07:16
*** admin0 has joined #openstack-ansible07:19
*** neilus has joined #openstack-ansible07:19
*** thorst has quit IRC07:24
*** neilus has quit IRC07:25
*** jayc has quit IRC07:36
*** Oku_OS-away is now known as Oku_OS07:38
*** asettle has joined #openstack-ansible07:42
*** jamielennox is now known as jamielennox|away07:45
*** asettle has quit IRC07:45
*** wangqwsh has joined #openstack-ansible07:56
admin0morning all07:57
*** neilus has joined #openstack-ansible08:13
*** admin0 has quit IRC08:17
*** admin0 has joined #openstack-ansible08:19
*** thorst has joined #openstack-ansible08:21
wangqwshmorning08:26
*** openstackstatus has joined #openstack-ansible08:28
*** ChanServ sets mode: +v openstackstatus08:28
*** thorst has quit IRC08:30
-openstackstatus- NOTICE: jobs depending on npm are now working again08:31
*** iceyao_ has quit IRC08:49
*** iceyao has joined #openstack-ansible08:50
*** iceyao has quit IRC08:51
*** iceyao has joined #openstack-ansible08:51
*** iceyao has quit IRC08:53
*** iceyao has joined #openstack-ansible08:53
*** iceyao has quit IRC08:58
*** iceyao_ has joined #openstack-ansible08:59
*** javeriak has quit IRC09:05
*** javeriak has joined #openstack-ansible09:22
*** shausy has quit IRC09:24
*** nhadzter has quit IRC09:24
odyssey4meo/09:25
*** iceyao has joined #openstack-ansible09:26
odyssey4mecoolj yes, you can theme horizon using the dict horizon_custom_uploads where you supply the source/dest files: https://github.com/openstack/openstack-ansible-os_horizon/blob/master/tasks/horizon_post_install.yml#L51-L5709:27
*** iceyao_ has quit IRC09:27
*** thorst has joined #openstack-ansible09:27
odyssey4mewe should add some sort of sample to the defaults, and add some documentation in http://docs.openstack.org/developer/openstack-ansible/install-guide/configure-horizon.html for that <--- mhayden :)09:28
*** deadnull has joined #openstack-ansible09:29
*** ChanServ changes topic to "Mitaka Release Critical Bugs: https://goo.gl/ipD9BL || Launchpad: https://launchpad.net/openstack-ansible || Weekly Meetings: https://wiki.openstack.org/wiki/Meetings/openstack-ansible || Review Dashboard: https://goo.gl/tTmdgs"09:30
deadnullmy initial setup failed, and my galera cluster won't come up, i tried running openstack-ansible galera-install.yml --tags galera-bootstrap but setup-infra is still failing, is my best/only option to destroy containers and start over?09:30
*** ChanServ changes topic to "Launchpad: https://launchpad.net/openstack-ansible || Weekly Meetings: https://wiki.openstack.org/wiki/Meetings/openstack-ansible || Review Dashboard: https://goo.gl/tTmdgs"09:30
odyssey4medeadnull why's it failing?09:31
deadnullmysql is not running on one of the containers09:31
odyssey4meyeah, that shouldn't be an issue - it needs one running node to be able to work09:32
odyssey4mehave you tried working through http://docs.openstack.org/developer/openstack-ansible/install-guide/ops-galera-recovery.html ?09:32
deadnullodyssey4me its failing on this task '[galera_server | Check for cluster state failure]'09:32
deadnulli tried the first step, i will run through that now, thanks09:34
*** thorst has quit IRC09:34
odyssey4medeadnull assuming this is a fresh install, try executing openstack-ansible -e 'galera_ignore_cluster_state=true' galera-install.yml09:34
deadnullyes, perfectly clean09:35
deadnullthe original failure on this host was due to lxc running out of space09:35
odyssey4methat effectively ignores the cluster state and resets the cluster, which is not advised in production09:35
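
Before resetting, it can be worth checking which node (if any) is still synced and which one holds the newest data. A rough sketch, run from the playbooks directory on the deployment host; the galera_container group name and paths are the OSA defaults:

    cd /opt/openstack-ansible/playbooks
    # show cluster status/size as each galera container sees it
    ansible galera_container -m shell -a "mysql -e \"SHOW STATUS LIKE 'wsrep_cluster_%';\""
    # the seqno in grastate.dat indicates which node has the most recent data
    ansible galera_container -m shell -a "cat /var/lib/mysql/grastate.dat"
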
deadnulli have run the setup-openstack playbooks, but i dont have horizon, which caused me to backtrack and uncover the ctrl01 ran out of space and galera was not running09:36
odyssey4mehmm, that's odd - if horizon wasn't setup properly then it would seem that perhaps the right group name wasn't used in openstack_user_config.yml09:36
deadnulli have horizon containers, just nothing is running on br-mgmt :44309:37
deadnullso then i re-ran setup-openstack09:37
odyssey4medo you have the group 'os-infra_hosts' in openstack_user_config?09:38
deadnullyeah the CLI is working perfectly09:38
deadnullcinder is working, keystone is replying09:38
odyssey4meit's strange that horizon's tasks were skipped though09:38
deadnulli think its an odd situation with running out of space causing an issue, just odd the failures didnt show until the 2nd run09:39
odyssey4mevery09:39
deadnullim going to run the ignore cluster state, then kick off setup-infra, then setup-openstack09:39
deadnulli'll be around all day, hopefully will get this squared away this AM09:39
odyssey4meone of the things we'd like to get included in this cycle is a better pre-flight check to determine whether everything that needs to be in place, is in place, before we touch anything09:39
deadnullthank you for the help so far.. I am documenting all the steps i am running across. basically writing a prereq file for my team09:39
odyssey4meadmin0 with regards to what to consider 'stable', working with OpenStack is a little different from vendor-provided software, so the rules applied need to be a little different09:41
odyssey4medeadnull sure thing, happy to help :)09:41
deadnullyeah, that would be great (preflight check), i have been using ansible extensively in our environment outside of openstack, happy to help out.09:41
deadnullstill ramping up on OSA now, so it might be a little bit ;-)09:42
odyssey4meadmin0 OpenStack is developed and released in 6 month cycles, whereas vendors typically release major versions every 2-3 years depending on the type of software09:42
odyssey4medeadnull any contributions are welcome: bug reporting, bug triage, patches for docs or code - it's all good09:43
odyssey4mealso helping people in the channel is also very welcome09:43
odyssey4meor the ML09:43
openstackgerritgit-harry proposed openstack/openstack-ansible: DEFCORE: Updated tempest config and resources  https://review.openstack.org/30262609:43
deadnullthat play failed with a dpkg error on mariadb... should i blow away the container on my ctrl0109:44
odyssey4meadmin0 so the best advice for anyone adopting OpenStack is to start testing the new version immediately after release, to help report and resolve bugs to improve stabilisation with the intent of having a fully tested and stable implementation/upgrade path by the first milestone of the next cycle, then ideally to actually implement that into production by the second milestone09:45
odyssey4medeadnull yeah, sometimes it is easier to just blow away the containers - the data stays behind on the host so if you really don't care about the data at this point then clear out the /openstack/<container-name> directories on the host09:46
odyssey4methere's a playbook to destroy the containers, just use '--limit galera_all' to ensure that it only removes the ones you want to :)09:47
odyssey4methat playbook won't delete the folders on the host though09:47
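
Condensed, that sequence looks roughly like this (the /openstack/*galera* glob assumes the default container naming - double-check what it matches before deleting):

    cd /opt/openstack-ansible/playbooks
    # remove only the galera containers
    openstack-ansible lxc-containers-destroy.yml --limit galera_all
    # on each host that carried a galera container, wipe the leftover data
    rm -rf /openstack/*galera*
    # recreate the containers and re-run the install
    openstack-ansible lxc-containers-create.yml --limit galera_all
    openstack-ansible galera-install.yml
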
*** dsariel has joined #openstack-ansible09:48
*** jiteka has quit IRC09:51
*** javeriak has quit IRC09:53
hughsaundersodyssey4me: https://goo.gl/03qZPi09:55
*** iceyao has quit IRC09:55
deadnullcan i use the same -l galera_all when running setup_hosts speed that step up?09:55
*** iceyao_ has joined #openstack-ansible09:55
*** iceyao_ has quit IRC09:56
*** iceyao has joined #openstack-ansible09:56
deadnullopenstack-ansible lxc-containers-create.yml --limit galera_all09:59
deadnull seems like it should do it09:59
deadnullkicking off setup-infra now, :crosses_fingers:10:01
*** iceyao has quit IRC10:04
*** iceyao has joined #openstack-ansible10:04
*** iceyao has quit IRC10:06
*** iceyao has joined #openstack-ansible10:07
*** karimb has joined #openstack-ansible10:13
*** jiteka has joined #openstack-ansible10:22
deadnullodyssey4me seems like I cant win today... rabbitmq container failed to cluster on that ctrl01 host10:24
odyssey4medeadnull yep10:24
odyssey4medeadnull do you have time sync setup on all the hosts?10:25
*** iceyao has quit IRC10:25
*** iceyao has joined #openstack-ansible10:25
*** karimb has quit IRC10:26
deadnullodyssey4me yes, 5 stratum 1 GPS clocks on every node, seems to be perfect10:26
deadnullin log: reporting "already member of cluster"10:27
odyssey4mehmm, ok - odd that it's having issues - perhaps the disk space has been the issue all along10:27
odyssey4meis your root partition particularly small?10:28
deadnullroot for lxc or root for the host10:28
deadnull917G   16G  856G   2% /var/lib/lxc10:28
deadnull28G  7.3G   19G  28% /10:28
deadnullseems odd that ctrl01 is having issues and nothing else is, of course the sysadmin who setup this host didnt use lvm :-\10:29
*** McMurlock1 has joined #openstack-ansible10:29
odyssey4meah, that's fun10:29
openstackgerritMatt Thompson proposed openstack/openstack-ansible-os_nova: [WIP] Use tempest for testing  https://review.openstack.org/30156610:29
odyssey4memaybe the sysadmin should setup cobbler to automate the host setup and borrow some of the configs done in this gist: https://gist.github.com/cloudnull/f71e3078f9a0018017c310:30
*** pjm6 has joined #openstack-ansible10:30
pjm6morning all :)10:30
*** thorst has joined #openstack-ansible10:32
deadnulldo you think the 30GB / partition is causing the issues, or should i blow everything away not just galera and start over10:32
*** pjm6 has quit IRC10:35
*** pjm6 has joined #openstack-ansible10:36
*** karimb has joined #openstack-ansible10:38
odyssey4medeadnull well, perhaps - there are bind mounts from the containers to the hosts for logs and critical data (like the database contents)10:38
odyssey4medeadnull so you could perhaps seperate /openstack into its own partition too10:39
deadnullthat is very do-able10:39
*** thorst has quit IRC10:39
deadnulli have another 1TB drive sitting in there, this is far from long-term production10:39
deadnulltrying to avoid a complete OS re-install, but i might have to10:39
deadnulli'll start with /openstack relocation10:39
odyssey4meI don't think it'll need very much space, but you will want room to grow10:39
deadnullyeah the docs do state at least 100GB per controller host10:40
deadnull30GB < 100GB :010:40
deadnull:)10:40
odyssey4meyep :)10:40
odyssey4meif your lxc vg is on a RAID disk, then perhaps carve out a section of the lxc vg for the /openstack partition10:41
odyssey4methe containers themselves will mostly only use 5G each (there are one or two exceptions), so not all that much is needed for the lxc vg - you have almost 100GB there10:42
odyssey4memaybe take 30GB of that for the /openstack partition and see how that goes10:42
deadnullyeah i'll give it a shot, once I stop the containers, and move the data, start with setup-hosts again I presume, then move to setup-infra > setup-openstack10:43
odyssey4meyep10:44
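
One way to do that carve-out, as a sketch: it assumes the backing volume group is the default one named 'lxc', that the containers are stopped, and ext4 as the filesystem - adjust names and sizes to taste:

    # carve a volume out of the lxc VG and format it
    lvcreate -L 30G -n openstack lxc
    mkfs.ext4 /dev/lxc/openstack
    # move the existing data across and mount the new volume in its place
    mv /openstack /openstack.old
    mkdir /openstack
    mount /dev/lxc/openstack /openstack
    rsync -avH /openstack.old/ /openstack/
    echo '/dev/lxc/openstack /openstack ext4 defaults 0 0' >> /etc/fstab
    # /openstack.old can be removed once everything checks out
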
*** pjm6 has quit IRC10:46
*** karimb has quit IRC10:53
*** karimb has joined #openstack-ansible10:54
*** pjm6 has joined #openstack-ansible10:57
*** karimb has quit IRC11:01
deadnullodyssey4me thanks again, headed into the office, been a long day already... I will be back shortly (hopefully with good news)11:02
deadnullcheers11:03
*** deadnull has quit IRC11:03
*** McMurlock1 has quit IRC11:03
*** johnmilton has joined #openstack-ansible11:08
Bofu2Umorn11:10
*** thorst has joined #openstack-ansible11:11
winggundamthif I'm using Ceph, the Cinder Volumes service can be deployed on the 3 infra hosts, right?11:11
*** thorst_ has joined #openstack-ansible11:11
*** pjm6 has quit IRC11:11
*** thorst has quit IRC11:15
openstackgerritMerged openstack/openstack-ansible-os_tempest: Remove volume-feature-enabled/snapshot override  https://review.openstack.org/30240011:17
*** iceyao has quit IRC11:18
*** iceyao has joined #openstack-ansible11:18
*** iceyao has quit IRC11:19
*** iceyao has joined #openstack-ansible11:19
*** deadnull has joined #openstack-ansible11:19
deadnullaaaand I'm back11:20
Bofu2Udeadnull: I've been awaiting your return anxiously.11:20
deadnullBofu2U :-)11:20
Bofu2U;)11:20
*** admin0 has quit IRC11:26
*** john_bar has quit IRC11:29
*** admin0 has joined #openstack-ansible11:31
*** pjm6 has joined #openstack-ansible11:36
*** pjm6 has quit IRC11:40
openstackgerritMatt Thompson proposed openstack/openstack-ansible: Add tempest_log_dir variable  https://review.openstack.org/30269211:50
*** Oku_OS is now known as Oku_OS-away11:52
*** keedya has joined #openstack-ansible11:55
*** weshay has joined #openstack-ansible11:56
*** javeriak has joined #openstack-ansible11:58
deadnullok, so setup-hosts went clean after the /openstack migration to a larger volume11:58
deadnullnow kicking off setup-infrastructure11:58
openstackgerritMatt Thompson proposed openstack/openstack-ansible-os_tempest: Updates for tempest in functional tests  https://review.openstack.org/30269711:58
openstackgerritMatt Thompson proposed openstack/openstack-ansible-os_nova: [WIP] Use tempest for testing  https://review.openstack.org/30156612:00
mhaydenmornin'12:06
mhaydenodyssey4me: i need to get a look at that doc blueprint that slid my way12:06
*** jhesketh has quit IRC12:07
*** jhesketh has joined #openstack-ansible12:07
*** admin0 has quit IRC12:16
deadnullodyssey4me no dice, still failed on the same step with rabbitmq after relocating to a larger drive12:17
*** javeriak has quit IRC12:17
deadnulli guess my next step is to blow away all the containers, and then start from setup-hosts on a clean install?12:17
odyssey4medeadnull yeah, that's odd - perhaps a full teardown would be a good idea12:18
odyssey4methe initial work to get the hosts and networking done right is usually the most painful12:18
deadnullteardown using lxc-containers-destroy.yml or full taredown as in bare metal12:18
odyssey4metry just destroying the containers, and clearing out the directories that contain the logs/data in /openstack on each host12:19
deadnullsounds good, I will start there... im working on an ansible playbook to bring them up from metal, basically similar to the cobbler config you sent earlier12:20
*** woodard has joined #openstack-ansible12:23
deadnullok logs clean and /openstack is clean, now going to give this a shot starting with setup-hosts.yml12:24
*** woodard has quit IRC12:24
*** woodard has joined #openstack-ansible12:25
*** admin0 has joined #openstack-ansible12:26
*** Oku_OS-away is now known as Oku_OS12:38
*** joker_ has joined #openstack-ansible12:42
*** admin0 has quit IRC12:43
*** sigmavirus24_awa is now known as sigmavirus2412:47
*** iceyao has quit IRC12:48
*** v1k0d3n has joined #openstack-ansible12:53
*** automagically has joined #openstack-ansible12:55
*** keedya has quit IRC12:59
*** keedya has joined #openstack-ansible13:00
openstackgerritMerged openstack/openstack-ansible: DEFCORE: Updated tempest config and resources  https://review.openstack.org/30262613:04
*** jayc has joined #openstack-ansible13:07
*** eric_lopez has quit IRC13:10
automagicallymorning13:10
deadnullautomagically morning!13:10
*** eric_lopez has joined #openstack-ansible13:11
*** jayc has quit IRC13:13
*** jayc has joined #openstack-ansible13:13
*** javeriak has joined #openstack-ansible13:14
openstackgerritMerged openstack/openstack-ansible-os_tempest: Allocation pool for public subnet  https://review.openstack.org/30048113:18
*** jayc is now known as _55_13:19
*** psilvad has joined #openstack-ansible13:20
cloudnullmorning13:22
cloudnullsysadmin: https://github.com/cloudnull/osa-multi-node-aio -- thats the updated version of the gist posted by odyssey4me earlier today.13:23
javeriakmorning cloudnull, glad to have caught you13:25
cloudnullohai :)13:25
javeriakhaving some problems with my horizon13:25
cloudnullwhaats going on ?13:25
javeriakit wont load after the login page13:25
javeriakive got tons of things deployed on it, >500 VMs etc, so might it be scale/load issue?13:26
cloudnullmitaka or liberty ?13:26
javeriakliberty13:27
javeriak12.0.0 to be precise13:27
cloudnulldoes it work if you go to the URL https://$YOURIPHERE/identity/13:27
javeriak12.0.6** i mean13:27
cloudnullthe default landing page takes you to project which makes a miserable sql query and that may be taking a very long time to load13:28
*** psilvad_ has joined #openstack-ansible13:29
cloudnullwhen hundreds of VMs exist under a single tenant that page load time will take forever.13:29
cloudnullidk if thats the issue13:29
cloudnullbut its something i know happens13:29
javeriakhmm im trying the identity page, but thats taking forever too13:29
cloudnullthe default landing page is typically https://$YOUIPHERE/project/13:30
cloudnullis it in a loading state?13:30
javeriakaha the identity page worked!13:30
cloudnullcool13:30
cloudnullyou should be able to navigate around.13:30
cloudnulljust dont go to the overview page "/project"13:31
javeriakthanks as usual :)13:31
javeriakyea its mega slow, but seems to be working13:31
cloudnulli wonder if there's something we can do to make it faster.13:32
openstackgerritMerged openstack/openstack-ansible-plugins: Add allocation pool support to neutron module  https://review.openstack.org/30042113:32
cloudnullneillc <- has worked on horizon in the past, maybe there's some black magic we can consume13:32
javeriakhmm yea that would help, this is on that osic cluster btw, and ive got of things deployed on it now, so the network page wont load either now13:32
*** b3rnard0_away is now known as b3rnard013:33
javeriaktons** of things deployed on it13:33
cloudnullnice!13:33
cloudnullis the cli performance good with that many things deployed ?13:34
openstackgerritMerged openstack/openstack-ansible-openstack_hosts: Add a check for hosts particular kernel modules  https://review.openstack.org/29179713:34
cloudnulljaveriak: also is access to the OSIC cluster something that you might be able to share with a few folks to see if we can poke at things for the purpose of optimizing?13:35
javeriakcloudnull yea ali is fine, thats what we have been using mostly to test with. yea we have it for another week, when would you guys want to look?13:37
javeriakcli**13:38
*** KLevenstein has joined #openstack-ansible13:38
cloudnullsure that would be awesome13:38
cloudnullthere may be tweaks we can make to make things generally faster for horizon users.13:38
cloudnullits completely possible that horizon just sucks but maybe we can make it suck less.13:39
deadnullodyssey4me so good news so far, hosts and infra went through clean after blowing away the containers.13:39
deadnullit was def. just having a hard time realizing it was already joined into rabbitmq cluster, so it was timing out joining13:40
cloudnullbah. rabbit...13:40
deadnullmy thoughts exactly...13:42
deadnullzmq has its own issues also13:42
*** pjm6 has joined #openstack-ansible13:42
cloudnullI've not given ZMQ a go in a while13:44
cloudnullin the icehouse and juno timeframe a few of us tried it .13:44
cloudnullsadly the oslo drivers were garbage and we went back to rabbit13:44
cloudnullIDK if theyve gotten better.13:44
*** mgoddard_ has joined #openstack-ansible13:44
*** briancubed has joined #openstack-ansible13:46
*** mgoddard has quit IRC13:48
*** Oku_OS is now known as Oku_OS-away13:48
cloudnulldeadnull: have you messed with ZMQ recently?13:50
deadnullcloudnull I have used ZMQ but not with oslo, going to take a peek after i get this deployment online13:51
cloudnullcool13:51
openstackgerritMerged openstack/openstack-ansible-os_heat: Fix heat trustee configuration  https://review.openstack.org/30243313:51
cloudnullid love to know how it goes.13:51
deadnullabsolutely13:51
cloudnullif we can decentralize messaging that would be awesome13:51
deadnulli'll get some docs together, agreed.. that would be great13:51
deadnullmight be mitaka timeframe13:52
cloudnullif i can help let me know.13:52
javeriakalso im having some problems with rabbitmq keys on liberty, i checked out the latest liberty tag (this is another setup btw); and playbooks fail with "Memcache key not found"; apparently the rabbitmq.key file disappears from one of the infra containers13:52
deadnullabsolutely, will do13:52
*** jthorne has joined #openstack-ansible13:53
briancubedHey, folks. I need some triage advice. Doing a 5-node virtual deployment using some new integration work from Nuage Networks. We have this working with AIO, so now we're trying multinode.13:53
cloudnulljaveriak:  thats likely an issue with rabbitmq not being able to sync ssl keys to all of the nodes in the cluster.13:53
briancubedI put a log snippet here: http://pastebin.com/GGFWyD6713:53
cloudnullif you rerun the whole play does it fail: openstack-ansible rabbitmq-install.yml13:53
*** iceyao has joined #openstack-ansible13:54
javeriakcloudnull, yea it fails at the same 'Distribute self signed ssl' task everytime13:54
cloudnullbriancubed: seems from your deploy node can not ssh to the containers on the other node.13:55
cloudnullare they one the same network ?13:55
odyssey4mejaveriak I think we fixed that up for Mitaka - we use slurp instead of cat to pick up the keys and it's much more reliable13:55
odyssey4mewe also found that we were putting the keys down badly13:55
cloudnullodyssey4me: maybe we need to backport that fix to liberty ?13:55
briancubedcloudnull: yes. a number of similar operations, waiting for ssh, succeed13:55
briancubedyes, they are on the same network, that is13:56
cloudnulldo they all fail on the same host?13:56
javeriakodyssey4me there was a similar problem before as well with the nova and keystone keys i remember, i used to fix it by combining the distribute and the other playbook because the key wasn't registering across13:56
*** ametts has joined #openstack-ansible13:56
briancubedthe snippet I pasted was from a re-run of the setup_hosts playbook13:56
odyssey4mejaveriak it was https://github.com/openstack/openstack-ansible-rabbitmq_server/commit/bc411c677a4aef36817d990f8849d8d8d8befea3 and https://github.com/openstack/openstack-ansible-rabbitmq_server/commit/c9773b9d9c85dbec0422839829a9dedbd07991d0 that fixed it up13:56
odyssey4mecloudnull we've had very few reports of the problem from liberty deployers13:57
odyssey4mejaveriak you may find that you just need to clear the fact cache13:57
*** busterswt has joined #openstack-ansible13:57
odyssey4methat often helps13:57
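
On Liberty the fact cache is just a directory of JSON files on the deployment host (/etc/openstack_deploy/ansible_facts by default), so clearing it and re-running the failing play is roughly:

    rm -rf /etc/openstack_deploy/ansible_facts/*
    cd /opt/openstack-ansible/playbooks
    openstack-ansible rabbitmq-install.yml
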
javeriakodyssey4me okay i'll try clearing the fact cache first and then use those fixes13:57
javeriakthanks guys, let me see if that helps13:57
odyssey4mejaveriak if those fixes work, then please propose a combined patch into liberty if that code's not already there13:58
openstackgerritMatt Thompson proposed openstack/openstack-ansible-security: [WIP] Doc updates  https://review.openstack.org/30277613:58
javeriakodyssey4me sure will do13:58
briancubedi searched online and saw a post where odyssey4me suggested increasing the ssh timeout using ssh_delay in user_variables.yml. I set it to several values, as high as 20 (assuming minutes), but it doesn't seem to have any effect.13:58
cloudnulllooks like liberty has this one already https://github.com/openstack/openstack-ansible-rabbitmq_server/commit/bc411c677a4aef36817d990f8849d8d8d8befea313:59
*** admin0 has joined #openstack-ansible14:00
cloudnulland it could use https://github.com/openstack/openstack-ansible-rabbitmq_server/commit/c9773b9d9c85dbec0422839829a9dedbd07991d0 to resolve the memcached issue, assuming its worth backporting at this point.14:00
cloudnullbriancubed: is the ip address conflicting with something else on your network ?14:00
*** Oku_OS-away is now known as Oku_OS14:00
cloudnullthe failed ones that is.14:01
cloudnullif you login to the host where the container is is running, can you attach to it, and from within it can you ping other devices on your network ?14:01
briancubedcloudnull: I don't think so. These nodes are VMs. I have them connected via a private network. Just to check, I tried pinging the problematic addresses when the playbook wasn't running. No response.14:02
cloudnullare they vms w/ KVM ?14:03
briancubedKVM, yes14:03
cloudnullor something like a public cloud provider ?14:03
cloudnulli know KVM works i test using the following repo https://github.com/cloudnull/osa-multi-node-aio14:03
briancubedKVM running on a bare-metal server under Ubuntu14:04
*** mgoddard has joined #openstack-ansible14:04
*** mgoddard_ has quit IRC14:04
javeriakodyssey4me the code in the rabbitmq_ssl_key_distribute.yml file is quite different between liberty and mitaka; https://github.com/openstack/openstack-ansible/blob/liberty/playbooks/roles/rabbitmq_server/tasks/rabbitmq_ssl_key_distribute.yml & https://github.com/openstack/openstack-ansible-rabbitmq_server/blob/c9773b9d9c85dbec0422839829a9dedbd07991d0/tasks/rabbitmq_ssl_key_distribute.yml14:04
briancubedIm sure it works. I'm trying to figure out what I've done incorrectly.14:04
cloudnullbriancubed: hum...14:04
cloudnullbonded nics in the vms?14:04
cloudnullmaybe a STP issue?14:05
*** cloudtrainme has joined #openstack-ansible14:05
cloudnullcheck the kernel log14:05
cloudnullif sg offloading is on it may be causing a stack trace w/ KVM14:06
*** automagically has quit IRC14:06
briancubedi'm not using bonded nics. I setup bridges to eth0, eth114:06
cloudnullthis is the host interface config I use14:06
cloudnullhttps://github.com/cloudnull/osa-multi-node-aio/blob/master/templates/kvm-bonded-bridges.cfg14:06
briancubedkernel log on the deployment host? It doubles as the 1st infra host.14:06
cloudnullyes14:07
briancubedokay. i'll have a look. Time for a meeting... Be back later...14:07
*** sdake has joined #openstack-ansible14:07
cloudnullthis is the sg offload option that needs to be there14:07
briancubedthank you!14:07
cloudnullhttps://github.com/cloudnull/osa-multi-node-aio/blob/master/templates/kvm-bonded-bridges.cfg#L7514:07
cloudnullanytime best of luck briancubed14:07
cloudnullhave a good meeting :)14:07
javeriakcloudnull the code in the rabbitmq_ssl_key_distribute.yml file is quite different between liberty and mitaka ... my paste above ^14:08
javeriaknot sure how to apply the mitaka fixes herein liberty14:08
cloudnulljaveriak:  looking14:08
*** Brew has joined #openstack-ansible14:12
*** sdake has quit IRC14:12
*** automagically has joined #openstack-ansible14:14
admin0odyssey4me: what does link referenced in any content mean ?14:16
odyssey4meyou provide a link, but no content references the link14:17
admin0 isn’t it the same as the other 3 links above that just give more info/examples on the subject ?14:17
*** ChrisBenson has joined #openstack-ansible14:17
cloudnulljaveriak: http://cdn.pasteraw.com/lc2zwqot88trbcqns07gmlvm6jkb07g that should do it14:17
odyssey4methe other links are referenced in content14:17
admin0oh .. so i should say below the example code block . hey if you need a working example, go to this link and mention the link again ?14:18
admin0 sorry .. new to all this :D14:18
javeriakthanks cloudnull14:19
*** rromans_ is now known as rromans14:19
cloudnullgive that a go and let me know.14:19
javeriakyes trying now14:19
cloudnullif it works for you, would you mind putting together a backport PR for that ?14:19
odyssey4meadmin0 see how line 10 has the content, and line 16 provides the link for the content14:20
odyssey4meadmin0 also look at the gate test result for docs to see how the final document is rendered14:20
*** thorst_ has quit IRC14:21
admin0i have no clue what a gate is :D14:21
*** sdake has joined #openstack-ansible14:21
admin0how do I see it ?14:22
odyssey4meadmin0 look at https://review.openstack.org/301922 - see the jenkins check result, it has a link - that link for the docs job shows the resulting rendered document14:22
cloudnulladmin0: "the gate" is the common term used for the jenkins test environment14:22
odyssey4meadmin0 if you execute 'tox -e docs' on your computer (after installing tox with pip) then you'll get a rendered set of documents on your workstation that you can browse14:22
*** thorst_ has joined #openstack-ansible14:22
cloudnullhttp://docs.openstack.org/infra/manual/developers.html#automated-testing14:23
admin0now I know how to check the gate \o/14:23
palendaeWorth noting: I don't know that tox -e docs works for all the repos; openstack-ansible definitely14:23
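
For example, from a clone of openstack/openstack-ansible (the output path is the usual sphinx convention and may vary per repo):

    pip install tox
    tox -e docs
    # the rendered HTML typically lands under doc/build/html/ - open index.html in a browser
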
*** KLevenstein has quit IRC14:24
admin0 i want to first contribute to the openstack-ansible before expanding :)14:24
admin0but i know how to check gate now :)14:24
cloudnullautomagically: ping14:24
automagicallyYes?14:24
cloudnullis this something we should be pushing to other roles https://review.openstack.org/#/c/291908/ ?14:24
cloudnullseems sensible to me14:24
*** Mudpuppy has joined #openstack-ansible14:25
*** Mudpuppy has quit IRC14:25
cloudnullbut w/ the POC tag in front I wanted to raise the question14:25
automagicallycloudnull: I think so. A little up front validation by each role that its preconditions are satisfied should help discover issues more quickly14:25
cloudnull+114:25
*** Mudpuppy has joined #openstack-ansible14:25
cloudnullI like it, odyssey4me?14:26
automagicallyI think I got enough positive feedback on it and no negative feedback at this point, so I think we are ready to move forward14:26
*** thorst__ has joined #openstack-ansible14:26
admin0apart from documentation, where can I help on the testing side ?   on new deployments or upgrades ?14:27
automagicallyI need to amend that patchset before further reviews though, I think I just spotted an issue with it14:27
cloudnullI'd almost think that the individual role secrets should be checked as well as the playbook secrets, something like rabbitmq / mariadb user creates, etc..14:27
*** thorst_ has quit IRC14:27
cloudnullok14:27
cloudnullwas just working through my review q so i figured I'd ping you.14:28
*** spotz_zzz is now known as spotz14:28
openstackgerritTravis Truman (automagically) proposed openstack/openstack-ansible-os_ceilometer: Fail fast when required secrets are not present  https://review.openstack.org/29190814:29
*** javeriak has quit IRC14:30
*** automagically has left #openstack-ansible14:30
*** automagically has joined #openstack-ansible14:31
cloudnullautomagically:  i have a nit sorry. : )14:34
*** javeriak has joined #openstack-ansible14:34
*** pjm6 has quit IRC14:35
*** thorst__ has quit IRC14:37
*** spotz is now known as spotz_zzz14:38
javeriakcloudnull your patch worked, I'll push a PR into liberty in a bit14:40
cloudnull+114:40
javeriakalso i just got some more feedback from the team working with the osic cluster, and openstack calls are going through very slowly14:41
*** spotz_zzz is now known as spotz14:41
*** thorst_ has joined #openstack-ansible14:41
automagicallycloudnull: Agreed with your nit, new patchset coming14:41
admin0odyssey4me:14:42
admin0oops14:42
cloudnullautomagically: i shouldve raised that before but i wasnt thinking, and then i was, and then i saw a squirrel, and now here we are. :)14:43
*** psilvad_ has quit IRC14:44
*** pjm6 has joined #openstack-ansible14:44
openstackgerritTravis Truman (automagically) proposed openstack/openstack-ansible-os_ceilometer: Fail fast when required secrets are not present  https://review.openstack.org/29190814:45
openstackgerritMerged openstack/openstack-ansible: Specify allocation pool for public subnet  https://review.openstack.org/30048914:45
*** psilvad_ has joined #openstack-ansible14:47
*** ametts has quit IRC14:48
*** psilvad_ has quit IRC14:48
*** ametts has joined #openstack-ansible14:49
*** javeriak_ has joined #openstack-ansible14:49
*** saneax is now known as saneax_AFK14:50
*** javeriak has quit IRC14:50
admin0dunno what happened but now I get: http://pastebin.com/BnV6Uaz1  .. anyone can help me with this git thing :D ?14:52
busterswt@palendae https://bugs.launchpad.net/openstack-ansible/+bug/156344814:53
openstackLaunchpad bug 1563448 in openstack-ansible "Change to linuxbridge agent defaults affects upgrades" [Medium,Triaged] - Assigned to admin0 (shashi-eu)14:53
palendaebusterswt: Ahhh, ok, I think I misunderstood your previous comment14:54
openstackgerritJaveria Khan proposed openstack/openstack-ansible: Fixes for SSL key / cert distribution for rabbitmq  https://review.openstack.org/30282414:54
busterswtpalendae sorry, i probably didn't phrase it well.14:54
*** thorst__ has joined #openstack-ansible14:54
admin0busterswt: i took that up, but l2pop by default is not recommended .. so it is to be documented and left to the person upgrading14:55
*** psilvad_ has joined #openstack-ansible14:55
*** sigmavirus24 is now known as sigmavirus24_awa14:55
*** sigmavirus24_awa is now known as sigmavirus2414:55
odyssey4mecloudnull automagically re https://review.openstack.org/291908 yes, I think it's a good pattern - my only other suggestion was to have an 'extras' folder in the role with a 'user_secrets_rolename.yml' file in it. Then the README can simply include that file, and when the role is included in the integrated build it can grab the file (potentially). Something like https://github.com/openstack/openstack-ansible-os_designate/blob/master/extras/user_secrets_designate.yml14:55
busterswtadmin0 thanks. Are we saying it's not recommended based on the comments in https://review.openstack.org/#/c/252100/? Or has there been other discussion outside of that?14:55
busterswtadmin0 I'm not disagreeing, but multicast probably introduces other operational issues.14:56
*** _55_ has quit IRC14:56
admin0that, and I also had a chat on the neutron channel where it was suggested not to enable that14:56
* busterswt wishes that would have been the message four releases ago.14:57
*** ametts has quit IRC14:57
cloudnullbusterswt:  that was the recommendation from the neutron team due to additional complexity.14:57
*** javeriak has joined #openstack-ansible14:57
busterswtcloudnull for sure. we see it every day :)14:57
cloudnullhahaha14:57
*** cristicalin has joined #openstack-ansible14:58
*** thorst_ has quit IRC14:58
automagicallyInteresting idea odyssey4me14:58
*** ametts has joined #openstack-ansible14:58
cloudnullbut if users are upgrading from earlier releases, where it wasn't an option to have it disabled, and/or deployers think it should always be on, it's easy enough to enable.14:58
automagicallyWill dig into that a bit and see what I can do14:58
*** cristicalin has quit IRC14:58
*** cristicalin has joined #openstack-ansible14:59
*** javeriak_ has quit IRC14:59
odyssey4mebusterswt so when we upgrade the agent, the agent restarts - so I take it that the restart then doesn't do what's needed?14:59
busterswtcloudnull yeah i'm not opposed to turning it off by default, but i think there needs to be more documentation/discussion around the pros/cons of either approach, and how to revert14:59
*** mgoddard_ has joined #openstack-ansible15:00
odyssey4mebusterswt cloudnull a big part of the feedback from the neutron team was that l2pop is very buggy and should be avoided15:00
*** thorst__ is now known as thorst________15:00
odyssey4meit seems that somehow we managed to scrape by with it on in previous releases15:00
busterswtodyssey4me that's correct. The vxlan interfaces are not 'refreshed' as part of an agent restart, so there's technically no downtime during a restart of the agent. But what happens as a result is that ARP proxy stays ON, and the bridge flooding entries aren't updated. So with l2pop off, the ARP table isn't populated, ARP traffic gets dropped due to proxy, and no updates to the multicast group15:01
busterswtodyssey4me l2pop has been on by default since v9 and yes, it's not been without issues. some within Neutron control, others outside of it.15:01
odyssey4mebusterswt perhaps that relates to https://bugs.launchpad.net/openstack-ansible/+bug/1523031 ?15:01
openstackLaunchpad bug 1523031 in openstack-ansible trunk "Neighbor table entry for router missing with Linux bridge + L3HA + L2 population" [High,Confirmed]15:01
openstackgerritSashi Dahal proposed openstack/openstack-ansible: ceph configuration for glance and nova  https://review.openstack.org/30192215:02
deadnullso the setup-openstack ran clean after a re-do, everything look s good except neutron, returning a 503, going to dive into it now, but glance is happy and I have an image uploaded!15:02
deadnullcc odyssey4me cloudnull ^^15:02
busterswtodyssey4me yeah that's one. Also allowed-address-pairs issues addressed by https://bugs.launchpad.net/neutron/+bug/155180515:02
openstackLaunchpad bug 1551805 in openstack-manuals " add arp_responder flag to linuxbridge agent" [Undecided,New] - Assigned to Anseela M M (anseela-m00)15:02
*** mgoddard has quit IRC15:03
odyssey4mebusterswt that's a docimpact bug, I expect you mean https://bugs.launchpad.net/neutron/+bug/1445089 ?15:04
openstackLaunchpad bug 1445089 in neutron "allowed-address-pairs broken with l2pop/arp responder and LinuxBridge/VXLAN" [Undecided,Fix released] - Assigned to Mark McClain (markmcclain)15:04
*** pjm6_ has joined #openstack-ansible15:05
*** briancubed has quit IRC15:05
odyssey4mehmm, it looks like that's resolve for Mitaka and Liberty15:05
cloudnullRE:L2POP -- not to be that guy, but it would’ve been good for us to bring up concerns like this 4 months ago.  though I think a doc fix is in order for liberty15:07
cloudnullodyssey4me:  that issue is resolved in mitaka15:07
odyssey4meyeah, perhaps a release note is in order15:07
*** thorst________ is now known as thorst_15:08
cloudnulland this https://review.openstack.org/#/c/288050/ review fixes the arp responder problem in liberty15:08
cloudnullwhich was merged15:08
*** pjm6 has quit IRC15:08
cloudnullso when we rev forward that'll be fixed for us too.15:08
busterswtodyssey4me yes, that's the bug.15:08
odyssey4mewe'll need someone to put together a doc entry and a release note pointing to it - and the doc entry should go through the reasoning, etc and recommend using the default, but explain how to enable l2pop if that's preferred15:08
odyssey4meI don't have the expertise to figure that one out - not sure if busterswt is happy to do it?15:09
admin0odyssey4me: is it just mentioning in the docs .. hey if you have l2pop enabled, set up this variable to true, else it will break ?15:09
odyssey4meadmin0 right now there is no entry, which is why we're discussing that there should be15:10
cloudnulli think something added in  reno would be good15:11
admin0whats a reno :D ?15:11
cloudnullrelease notes15:11
admin0:D15:11
deadnullcloudnull figured out the vlan adapter needs more than 1 VLAN to unpack ;-)15:11
cloudnullah, yes it needs a range, even if that range is 1 -- 1:115:12
deadnullgotcha15:12
odyssey4meadmin0 http://docs.openstack.org/releasenotes/openstack-ansible/mitaka.html is generated from https://github.com/openstack/openstack-ansible/tree/stable/mitaka/releasenotes/notes15:12
deadnullwhat is the best process to change the config and re-deploy neutron, just setup-openstack.yml --limit neutron?15:12
odyssey4medeadnull it'd be neutron_all15:13
deadnullodyssey4me gotcha, makes sense, thanks!15:13
odyssey4mebut you can also just run os-neutron-install.yml15:13
deadnullof course I can :)15:14
deadnullmakes sense, learning the lay of the playbooks still15:14
*** javeriak has quit IRC15:15
*** pjm6 has joined #openstack-ansible15:15
*** wangqwsh has quit IRC15:17
*** javeriak has joined #openstack-ansible15:18
busterswtodyssey4me happy to update the docs at some point15:18
palendaecloudnull: I agree with you, but no one was working on upgrades 4 months ago15:18
*** pjm6_ has quit IRC15:19
cloudnulldeadnull: i use openstack-ansible os-neutron-install.yml15:19
cloudnullif its just a reconfiguration then you can use a tag15:19
cloudnull--tags neutron-config15:19
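
Put together, the reconfiguration loop looks roughly like this (assuming the usual /opt/openstack-ansible checkout), after editing openstack_user_config.yml or user_variables.yml:

    cd /opt/openstack-ansible/playbooks
    # config-only change: re-template the neutron configs and let the handlers restart services
    openstack-ansible os-neutron-install.yml --tags neutron-config
    # or run the whole role if more than configuration changed
    openstack-ansible os-neutron-install.yml
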
odyssey4mebusterswt that would be great - it'll take a bit of discussion with the neutron guys to get the details of why not to use l2pop and in what circumstances it'd be ok to use it so that the documentation can inform appropriately15:20
busterswtcloudnull odyssey4me that arp responder bug addresses the allowed-address-pairs issue we had, but has the side benefit of addressing slow arp programming by the L2 agents15:20
odyssey4meto get the right information will need someone who understands how it all works, otherwise I'd happily help out, but I'm dumb with this stuff :/15:21
busterswtbut it's unknown as to how the arp traffic across the overlay affects network performance, so something we need to look at15:21
cloudnullpalendae: true but we were all here when that conversation was happening and no concerns were raised then. Not saying that it's bad to rehash it but we could better participation from downstream as things are developed which will help us catch these types of things15:21
busterswtno worries15:21
cloudnull**we could use better15:21
odyssey4mecloudnull my hope is that we can work on Mitaka->Newton upgrades between Newton-3 and Newton-RC1 in this cycle to try and catch it all up so that we don't end up in this sort of situation15:22
cloudnull++15:22
odyssey4meideally we should be providing feedback to the service dev teams before a release, not after15:22
cloudnullalso we have reno and are forcing folks to use it so that will help too15:23
cloudnullin early liberty it was not implemented or forced15:23
odyssey4meyup, in this cycle we've upped the requirement for docs and release notes where warranted15:23
*** weezS has joined #openstack-ansible15:24
odyssey4meeven the retrospective reno compiles by mhayden have been useful to some - many thanks to mhayden for putting those together!15:24
mhaydenwheee :)15:24
cloudnull++15:24
deadnullcloudnull odyssey4me looking much better, i have networking service online and happy... going to grab a quick bite, but the instance didnt launch. no valid host was found...15:25
cloudnulldid you configure a flat network by chance?15:26
cloudnullin the openstack_user_config file?15:26
deadnullno flat network15:27
deadnulli dont have any native vlans on br-vlan15:27
cloudnullsomething like so in the config https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.aio#L42-L50 ?15:27
*** cloudtrainme has quit IRC15:28
deadnullcloudnull flat is not required, correct? or did I read that wrong15:28
cloudnullit is not.15:28
deadnullok15:28
cloudnullim asking because a lot of folks have tripped over that15:28
cloudnullbut sounds like you're good15:28
deadnullgotcha15:29
*** cristicalin has quit IRC15:29
deadnullwould i need to re-run any playbooks on nova15:29
cloudnullno.15:29
cloudnullnot for a network reconfiguration15:29
cloudnullusing the same driver15:29
cloudnullcheck the scheduler/nova services15:29
cloudnullnova service-list15:30
cloudnullalso make sure all of the neutron agents are happy15:30
cloudnullneutron agent-list15:30
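
Both checks are normally run from the utility container with the admin credentials sourced - something like the following, where the container name is whatever lxc-ls shows on your infra host and /root/openrc is the usual credentials file OSA drops there:

    # on an infra host, find and attach to the utility container
    lxc-ls | grep utility
    lxc-attach -n <utility-container-name>
    # inside the container
    source /root/openrc
    nova service-list
    neutron agent-list
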
deadnullok hopping in my utility container15:30
cloudnullgo eat :)15:30
cloudnullopenstack will be broken when you get back :)15:30
cloudnullreferencing "going to grab a quick bite" -- :D15:30
odyssey4memaybe he needs the openstack-fixerater :p15:31
cloudnullthe fixerator can help15:31
cloudnullhttps://gist.github.com/cloudnull/936146115:31
*** cloudtrainme has joined #openstack-ansible15:31
cloudnullalso if you need to know how it works use "asd"15:31
cloudnull:p15:31
cloudnull-- https://pypi.python.org/pypi/AdvancedSearchDiscovery15:32
* cloudnull yup im a helper15:32
deadnulllol @ fixerator15:32
deadnulllike asd, been using silversearcher on my log container15:33
deadnullnova and neutron are happy15:33
deadnullbbs15:33
*** deadnull is now known as rovalent_afk15:33
cloudnulldeadnull: asd isnt that useful. put it in a venv and try it out.15:33
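
For anyone who wants to poke at it, a throwaway virtualenv is enough; the asd command comes from that package, but its options are worth double-checking with --help:

    virtualenv /tmp/asd-venv
    /tmp/asd-venv/bin/pip install AdvancedSearchDiscovery
    /tmp/asd-venv/bin/asd --help
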
mhaydenodyssey4me / cloudnull: i'm totally okay with a new branch on the security role for https://review.openstack.org/#/c/302385/15:37
*** galstrom_zzz is now known as galstrom15:37
odyssey4memhayden so the SHA would still have to be managed anyway with a branch - I'll have to check with the release team whether we can create a liberty branch at this stage15:38
cloudnullodyssey4me: +1 for a branch15:40
cloudnullw/ it we can backport as needed15:40
cloudnulland manage it outside of the OSA repo15:40
cloudnullIMO it will stay more stable that way15:40
mhaydenodyssey4me: thanks for asking about that15:41
*** mikelk has quit IRC15:43
*** afranc has joined #openstack-ansible15:44
cloudnulljaveriak: ping15:44
cloudnullregarding performance https://review.openstack.org/#/c/30128815:44
cloudnullthat patch in liberty will make a fairly big impact15:45
*** javeriak has quit IRC15:45
cloudnullwe are running that now in the OSIC public cloud w/ ~375 compute nodes and haven't seen performance issues like you were describing. not that that change alone will fix all the things, but it's something worth considering15:46
mhaydenwe should rewrite ansible in erlang15:47
cloudnullum no. . .15:47
cloudnull:)15:47
evrardjpfor security, branch + tag imo15:47
cloudnullerlang is the devil15:47
evrardjpcloudnull erlang is 1970s15:48
evrardjpnot the devil15:48
mhaydenevrardjp: branch *and* tag?15:48
cloudnull1970s == devil15:48
*** gregfaust has joined #openstack-ansible15:48
*** rovalent_afk is now known as _deadnull15:48
cloudnullcores / reviewers -- these changes could use some love -- https://review.openstack.org/#/q/project:%255Eopenstack/openstack-ansible.*+branch:liberty+status:open,n,z15:48
evrardjpmhayden yup, branching for an easy differentiation, tags for an easy versioning of update inside the branch15:49
evrardjpbut that's only my personal opinion15:49
*** admin0 has quit IRC15:49
mhaydenare we tagging with the other IRR's? maybe i missed that15:49
evrardjpwe do15:50
evrardjpdepending on which ones15:51
evrardjpbut keepalived does15:51
evrardjphowever keepalived doesn't need the branches, so it's quite easier15:51
evrardjpthe advantage of tagging is an easy update using ansible-galaxy15:51
evrardjpI'm not sure it's really needed for everything cloned directly on openstack git repos instead of galaxy15:52
*** pcaruana has quit IRC15:53
*** jayc has joined #openstack-ansible15:54
*** shanec has joined #openstack-ansible15:57
*** eil397 has joined #openstack-ansible15:59
*** jmccrory_ has joined #openstack-ansible15:59
*** shanec_ has joined #openstack-ansible16:00
odyssey4memeeting in #openstack-meeting-4 cloudnull, mattt, andymccr, d34dh0r53, hughsaunders, b3rnard0, palendae, Sam-I-Am, odyssey4me, serverascode, rromans, erikmwilson, mancdaz, _shaps_, BjoernT, claco, echiu, dstanek, jwagner, ayoung, prometheanfire, evrardjp, arbrandes, mhayden, scarlisle, luckyinva, ntt, javeriak, automagically, spotz, vdo, jmccrory, alextricity25, jasondotstar, KLevenstein, admin0, michaelgugino, ametts, v1k0d3n, severion, bgmccollum16:01
*** ametts has quit IRC16:01
*** javeriak has joined #openstack-ansible16:02
*** shanec has quit IRC16:02
javeriakcloudnull sorry was away, cool so this should help with the openstack commands in general? since everything seems to be going thru slowly. guess i only need to run the rabbitmq playbooks after applying this?16:03
*** automagically has quit IRC16:03
*** automagically has joined #openstack-ansible16:04
*** mgoddard_ has quit IRC16:06
*** jayc_ has joined #openstack-ansible16:07
*** jayc has quit IRC16:07
*** mgoddard has joined #openstack-ansible16:08
*** _deadnull is now known as rovalent_afk16:12
stevelleo/ evrardjp, wb16:14
evrardjpstevelle thanks ;)16:15
rovalent_afklunch acquired , full stomach clear head16:15
pjm6anyone who has installed and run tempest, had glance not working when uploading files?16:16
pjm6delete / upload16:16
*** ametts has joined #openstack-ansible16:17
cloudnulljaveriak:  yes16:17
cloudnullitll restart the app and apply the changes16:17
cloudnullpjm6: no. not that i can recall.16:18
pjm6hey cloudnull16:18
pjm6thanks16:18
pjm6for some reason it stops working the delete / update16:18
pjm6but i can create vms with the image16:18
pjm6if I try delete I receive: UnknownScheme: Unknown scheme 'file' found in URI16:18
javeriakcloudnull would it help horizon too? it still seems sluggish to me...16:18
pjm6in upload: Error in store configuration. Adding images to store is disabled.16:19
cloudnulljaveriak: itll help services respond faster via messaging16:19
cloudnullso maybe ?16:19
javeriakalright, will let the testing continue and see what we get16:19
javeriakthanks for the link16:19
rovalent_afkcloudnull odyssey4me so odd, i re-ran os-nova-install.yml, and instance is running (from ISO off of glance) no block yet16:21
*** jayc_ is now known as jayc16:25
*** Bjoern_ has joined #openstack-ansible16:28
*** daneyon_ has quit IRC16:28
*** daneyon has joined #openstack-ansible16:29
*** Bjoern_ is now known as BjoernT16:30
cloudnullrovalent_afk: good to go then? :)16:36
*** KLevenstein has joined #openstack-ansible16:37
*** Oku_OS is now known as Oku_OS-away16:38
*** Oku_OS-away is now known as Oku_OS16:39
*** iceyao has quit IRC16:39
*** eil397 has quit IRC16:42
*** eil397 has joined #openstack-ansible16:42
*** javeriak_ has joined #openstack-ansible16:44
*** pjm6 has quit IRC16:45
*** javeriak has quit IRC16:45
*** galstrom is now known as galstrom_zzz16:46
javeriak_cloudnull its still slow, im getting ~35sec for a neutron net-list; while the same thing on a local setup of mine takes 0.9s16:47
javeriak_the cli is still useable though, just slow; horizon still comes and goes16:47
cloudnullbummer16:48
cloudnullwell it was worth a try16:48
*** Oku_OS is now known as Oku_OS-away16:48
javeriak_cloudnull yea its ok; do let me know if you guys think of anything else to help performance for such large installs16:51
evrardjphaproxy ?16:51
*** rovalent_afk is now known as deadnull16:51
evrardjpissues with reverse dns lookups, I don't know... I'm stabbing in the dark without logs :p16:52
evrardjpmaybe you should analyse where the performance gets worse thanks to your load balancer16:53
cloudnullevrardjp:  that may be something to improve db performance16:53
javeriak_im using simple haproxy installed on the deploy node16:54
evrardjpI don't know the issue, I just jumped in on a topic I don't know, just trying to help16:54
*** admin0 has joined #openstack-ansible16:54
javeriak_hmm i wonder if a proper dedicated LB would help here16:54
evrardjpmy point was: as all api calls go through the reverse proxy, you could theoretically deduce metrics there16:55
cloudnullhap should be plenty performant, though i have seen reverse DNS slowing things down in the past16:55
evrardjphaproxy could have its issues sometimes, depending on the config, but I've never seen it slow16:56
evrardjpI only saw configuration issues16:56
*** cloudtrainme has quit IRC16:57
evrardjpif you think it's the fault of haproxy, try disabling the extra backends in haproxy16:57
evrardjptemporarily16:57
evrardjpif it's faster, then it is haproxy, else you have to look elsewhere16:58
evrardjpbut you could also do that by checking at haproxy metrics16:58
cloudnulljaveriak_: you can try setting -- mysql> SET GLOBAL skip_name_resolve=116:58
cloudnulland see if that helps16:58
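
If that turns out to help, it can be made persistent in the MariaDB config - a sketch, assuming the stock /etc/mysql/my.cnf path inside the galera containers:

    # try it live against the current primary
    mysql -e "SET GLOBAL skip_name_resolve=1;"
    # to persist, add the option below to the [mysqld] section of
    # /etc/mysql/my.cnf in each galera container and restart mysql
    # one node at a time:
    #   skip-name-resolve
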
*** weezS has quit IRC16:59
*** admin0 has quit IRC17:00
*** admin0 has joined #openstack-ansible17:00
javeriak_cloudnull alright17:03
*** dsariel is now known as dsariel|afk17:04
openstackgerritMerged openstack/openstack-ansible: Ensure the OpenStack gate has access to the logs  https://review.openstack.org/30128217:04
admin0i think finally satisfied everyone via this latest update to the ceph docs :D17:05
*** ametts has quit IRC17:07
*** pcaruana has joined #openstack-ansible17:08
*** michaelgugino has joined #openstack-ansible17:08
*** pjm6 has joined #openstack-ansible17:08
*** dsariel|afk has quit IRC17:09
odyssey4mejmccrory automagically d34dh0r53 stevelle a review of https://review.openstack.org/300505 would be appreciated to support https://review.openstack.org/29652517:11
* automagically looking17:11
odyssey4meI'm out for the night - have a great day everyone.17:12
automagicallyLater odyssey4me17:12
cloudnullAlso -- Congratulations everyone -- http://www.openstack.org/software/mitaka -- it's been a lot of hard work and I'm proud to say I've worked with the lot of you on getting our Mitaka release out the door.17:13
cloudnullodyssey4me: cheers17:13
*** ametts has joined #openstack-ansible17:20
*** pjm6 has quit IRC17:21
*** woodard has quit IRC17:24
*** pjm6 has joined #openstack-ansible17:24
*** nhadzter has joined #openstack-ansible17:28
openstackgerritMerged openstack/openstack-ansible-plugins: Add logic to NOT build "proprietary" packages  https://review.openstack.org/30050517:29
openstackgerritMerged openstack/openstack-ansible: Ensure the OpenStack gate has access to the logs  https://review.openstack.org/30128317:38
admin0should i test new mitaka first, or start with liberty -> mitaka ?17:39
pjm6just now that I was starting to understand and have liberty working xD17:39
pjm6anyway congratulations guys for the good work :D17:39
*** weezS has joined #openstack-ansible17:48
cloudnulladmin0: i'd start with liberty17:49
admin0liberty -> mitaka17:49
cloudnullmitaka is out, but many of the projects will be stabilizing over the next few weeks.17:49
deadnullrabbit, you are going to be the death of me today17:49
cloudnullpjm6 too17:50
deadnullstill hitting the no valid host, not enough hosts available...17:50
jmccrorydeadnull: when trying to boot an instance?17:53
deadnulljmccrory , just trying to launch an instance, i added both a qcow2 (ubuntu) image and an iso (alpine) to glance17:53
deadnullblock is failing still but I'm less concerned about that17:53
*** phiche has joined #openstack-ansible17:54
jmccrorymake sure your compute services are active with `nova service-list`, and nova-scheduler and nova-compute logs should have more info on why hypervisors are either being filtered out or failing to schedule17:55
jmccroryalso check `neutron agent-list`17:56
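(spelled out as a quick sketch, assuming admin credentials are already sourced; the log paths are the usual defaults and may differ:)
    nova service-list        # nova-compute / nova-scheduler should be enabled and 'up'
    neutron agent-list       # agents should show alive ':-)'
    # then, in the nova_scheduler container and on the compute host:
    tail -n 100 /var/log/nova/nova-scheduler.log
    tail -n 100 /var/log/nova/nova-compute.log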
deadnulljmccrory neutron agent-list and nova service-list are great17:56
deadnulli'll check logs again17:56
admin0deadnull: first check the libvirt log in the compute node .. that usually tells you the exact reason17:56
admin0like neutron not responding, network issue, cannot mount filesystem etc17:57
michaelguginospeaking of neutron plugin stuff...  Hoping to get some more input on https://review.openstack.org/#/c/298765/ and the associated change https://review.openstack.org/#/c/300605/17:57
mongo2and as a last resort run a grep -ir with the machine UUID against the rsyslog dir.17:57
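(mongo2's last-resort grep, as a sketch -- the aggregated log path is an assumption, it depends on how rsyslog shipping is set up:)
    # on the rsyslog/log-aggregation container, search every shipped log for the instance UUID
    grep -ir "<instance-uuid>" /var/log/log-storage/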
admin0deadnull are you using spreading ?17:57
deadnulladmin0 spreading? :)17:57
admin0deadnull: https://www.openstackfaq.com/openstack-stacking-vs-spreading/17:58
admin0you might be hitting this :D17:58
deadnullnice doc!17:58
admin0i have a 20-line SQL query to predict this :D17:58
deadnulli have 1 nova compute node17:58
*** phiche has quit IRC17:58
admin0but its too complex here17:58
admin0just 1 node :D17:58
admin0ok17:58
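(for reference on the stacking-vs-spreading point, the usual knob is nova's RAM weigher; a hedged nova.conf sketch -- with a single compute node it changes nothing:)
    # nova.conf on the scheduler node/container
    [DEFAULT]
    # positive values spread instances across hosts with the most free RAM,
    # negative values stack them onto the fullest host first
    ram_weight_multiplier = 1.0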
deadnulllibvirtd.log has "virNetlinkEventCallback:340 : nl_recv returned with error: No buffer space available"17:59
deadnullI'm going to start a new instance17:59
admin0:D17:59
admin0always check the libvirt first17:59
admin0saves you time17:59
*** mgariepy has quit IRC17:59
deadnullgood tip17:59
deadnulli was using silversearcher for the UUID on syslog container17:59
deadnulland i thought that was great :)17:59
*** mgariepy has joined #openstack-ansible18:00
admin0 i have thousands of vms on hundreds of compute nodes .. and no horizon .. and public cloud .. so issues keep popping up  here and there18:00
admin0and i think years of debugging experience = check libvirt first18:00
admin0:D18:00
deadnull:)18:00
*** cloudtrainme has joined #openstack-ansible18:01
deadnullso the ISO instance launched from the image just fine.18:01
deadnulltrying the qcow2 img now18:02
admin0deadnull: what's the kernel on the compute node ?18:02
mongo2michaelgugino: I don't have the time to test the proposed changes there, but why not just use the plugin name? e.g. OVSNeutronPluginV2, this would allow people who need to extend it to support vendor plugins to do so without mapping.18:02
deadnull4.2.0-35-generic18:02
admin0ok18:02
deadnull3.13 was the min right18:02
*** phiche has joined #openstack-ansible18:03
mongo2deadnull: are you using ixgbe network cards?18:03
admin0deadnull: google will show that many people have the errors you see.  you need to check what you have and find the matching solution18:04
deadnullmongo2 yes18:05
deadnulltwinax running from all my nodes to a Cisco Cat 6k, right now for the POC single link to the upstream switch. Moving to bonds when we get to prod18:06
michaelguginomongo2: the plugin name is ml2.  OVS is a driver of the ml2 plugin.  I don't think people are using OVSNeutronPluginV2 any more, it's not referenced in any docs.18:06
michaelguginoDoes anyone have a list of the summit sessions for OSA?  I looked at the schedule, only saw the 101 session18:10
mongo2michaelgugino: Yes I only have my older hand-installed stack there... one that actually suffered from the pain of the change to ml2.  I'll try to deploy a stack, if I do I'll give feedback.  We need it as linux bridge is pretty slow.18:10
michaelguginomongo2: I'm looking for code review more so than functional testing.  Specifically whether the cores like the method of implementation.18:11
*** jayc has quit IRC18:12
mongo2deadnull: if you are on ubuntu try upgrading to linux-image-generic-lts-xenial and/or try adding "pre-up ethtool -G $IFACE rx 4078 tx 4078" to the interfaces.  I am not at the office but there was some crazy thing with multiqueue and the ixgbe driver IIRC.18:13
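(a sketch of where that pre-up line could live, assuming ifupdown and that eth2 is the ixgbe-backed 10G interface; the interface name and ring sizes are illustrative:)
    # /etc/network/interfaces.d/eth2.cfg
    auto eth2
    iface eth2 inet manual
        # grow the NIC rx/tx ring buffers before the interface comes up
        pre-up ethtool -G $IFACE rx 4078 tx 4078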
admin0    container_vars: for say X containers, where should it be aligned ?18:13
admin0same line as hosts declaration ?18:13
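(on admin0's alignment question -- a hedged sketch of the usual openstack_user_config.yml layout: container_vars nests under each host entry, at the same level as ip, rather than on the hosts line; names, IPs and the variable below are made up:)
    compute_hosts:
      compute1:
        ip: 172.29.236.16
        container_vars:
          # any vars that should apply to the containers/services on this host
          some_override: value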
mongo2michaelgugino: yes, meatspace limitation, I have to do functional to do a valid code review.18:13
cloudnullmichaelgugino: https://www.openstack.org/summit/austin-2016/summit-schedule/#day=2016-04-27&summit_types=2&tags=354218:16
michaelguginosweet, thanks18:17
*** javeriak_ has quit IRC18:17
michaelguginoso, we're primarily getting together on wednesday then?18:17
*** admin0 has quit IRC18:17
cloudnullit seems that way18:18
*** jmccrory_ has quit IRC18:18
cloudnulli think we're wed-fri18:19
*** v1k0d3n has quit IRC18:19
*** jayc has joined #openstack-ansible18:24
*** sdake_ has joined #openstack-ansible18:26
*** sdake has quit IRC18:26
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-os_neutron: Nuage neutron plugin ansible changes  https://review.openstack.org/29652518:26
*** automagically has quit IRC18:28
*** BjoernT has quit IRC18:35
*** phiche1 has joined #openstack-ansible18:35
*** phiche has quit IRC18:38
*** admin0 has joined #openstack-ansible18:40
*** automagically has joined #openstack-ansible18:46
*** rstarmer has joined #openstack-ansible18:53
*** woodard has joined #openstack-ansible18:59
*** admin0 has quit IRC19:06
*** sigmavirus24 is now known as sigmavirus24_awa19:17
*** deadnull is now known as _deadnull19:26
*** clickboom has joined #openstack-ansible19:31
*** clickboom is now known as aabradsh19:35
*** aabradsh has quit IRC19:35
*** cloudtrainme has quit IRC19:37
*** LanceHaig has joined #openstack-ansible19:40
*** karimb has joined #openstack-ansible19:42
*** clickboom has joined #openstack-ansible19:43
*** _deadnull is now known as deadnull19:47
*** karimb has quit IRC19:47
*** michaelgugino has quit IRC19:54
*** McMurlock1 has joined #openstack-ansible19:57
*** ametts has quit IRC20:01
*** cloudtrainme has joined #openstack-ansible20:08
*** galstrom_zzz is now known as galstrom20:13
*** ametts has joined #openstack-ansible20:17
*** pjm6 has quit IRC20:23
*** sdake_ is now known as sdake20:23
*** jthorne_ has joined #openstack-ansible20:31
*** jthorne has quit IRC20:31
*** johnmilton has quit IRC20:32
*** woodard has quit IRC20:32
*** michaelgugino has joined #openstack-ansible20:33
openstackgerritJimmy McCrory proposed openstack/openstack-ansible-galera_server: Remove dependency on python2_lxc git source  https://review.openstack.org/30306320:35
*** ametts has quit IRC20:41
*** admin0 has joined #openstack-ansible20:56
mrdaMorning OSA21:00
*** pcaruana has quit IRC21:01
michaelguginohowdy21:01
mrdao/21:02
*** weshay has quit IRC21:03
*** psilvad_ has quit IRC21:04
*** deadnull has quit IRC21:05
*** eil397 has quit IRC21:06
*** clickboo_ has joined #openstack-ansible21:06
*** automagically has quit IRC21:08
*** Nepoc has quit IRC21:09
*** clickboom has quit IRC21:10
*** Nepoc has joined #openstack-ansible21:10
*** jayc has quit IRC21:11
*** thorst_ has quit IRC21:12
*** rstarmer has quit IRC21:13
*** clickboo_ has quit IRC21:15
*** thorst_ has joined #openstack-ansible21:18
*** Mudpuppy has quit IRC21:18
*** KLevenstein_ has joined #openstack-ansible21:18
*** Mudpuppy has joined #openstack-ansible21:19
*** McMurlock1 has quit IRC21:19
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-ironic: Change the default thread pool size  https://review.openstack.org/30307921:19
cloudnullmorning mrda21:19
*** KLevenstein has quit IRC21:21
*** KLevenstein_ is now known as KLevenstein21:21
*** thorst__ has joined #openstack-ansible21:21
*** eil397 has joined #openstack-ansible21:21
*** thorst_ has quit IRC21:22
*** Mudpuppy has quit IRC21:23
*** woodard has joined #openstack-ansible21:24
*** jayc has joined #openstack-ansible21:25
*** thorst__ has quit IRC21:25
openstackgerritJulian Montez proposed openstack/openstack-ansible-os_swift: Cast region key's value as an integer  https://review.openstack.org/30308721:29
*** spotz is now known as spotz_zzz21:30
*** weezS has quit IRC21:35
*** b3rnard0 is now known as b3rnard0_away21:35
*** galstrom is now known as galstrom_zzz21:36
*** rstarmer has joined #openstack-ansible21:38
*** woodard has quit IRC21:38
*** cloudtrainme has quit IRC21:41
*** thorst_ has joined #openstack-ansible21:44
*** weshay has joined #openstack-ansible21:45
*** thorst_ has quit IRC21:48
openstackgerritJimmy McCrory proposed openstack/openstack-ansible-galera_server: Reorganize test playbooks  https://review.openstack.org/30309021:50
*** jthorne_ has quit IRC21:51
*** phiche has joined #openstack-ansible21:54
*** phiche1 has quit IRC21:55
stevelleI'm seeing haproxy stopped during run-playbooks, not sure why.21:57
*** admin0 has quit IRC21:59
cloudnullstevelle: hap is getting stopped ?22:04
cloudnullmaybe port 5000 issue?22:04
stevellecloudnull: stopped or dying22:04
stevelleservice haproxy status shows it stopped, a start fixes it and I can resume, but it happened a couple times for me22:04
stevelleno clues in log22:05
cloudnullwell thats not good22:05
*** phiche has quit IRC22:07
*** phiche has joined #openstack-ansible22:08
*** phiche has quit IRC22:14
*** busterswt has quit IRC22:15
*** nhadzter has quit IRC22:19
*** winggundamth has quit IRC22:21
*** nhadzter has joined #openstack-ansible22:22
cloudnullstevelle: i've seen that happen in the past with hap restarting and trying to bind port 5000 which is being used by "something else" for a moment22:24
cloudnullbut not where it wholesale dies22:24
* cloudnull looking some more22:24
stevelleIf we see it pop up more, I'd worry.22:25
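(if it does pop up again, a quick hedged check along the lines of cloudnull's port-5000 theory:)
    # on the haproxy host, while haproxy is down: is something else holding the keystone port?
    ss -ntlp | grep ':5000'
    # and look for bind errors from haproxy around the time it died
    grep -i haproxy /var/log/syslog | tail -n 50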
openstackgerritMerged openstack/openstack-ansible: Ensure cleanup-rabbitmq-vhost is re-runnable  https://review.openstack.org/30237722:25
*** winggundamth has joined #openstack-ansible22:30
*** michaelgugino has quit IRC22:30
*** gregfaust has quit IRC22:31
*** cloudtrainme has joined #openstack-ansible22:31
*** cloudtrainme has quit IRC22:32
*** cloudtrainme has joined #openstack-ansible22:32
openstackgerritJimmy McCrory proposed openstack/openstack-ansible-galera_server: Reorganize test playbooks  https://review.openstack.org/30309022:33
*** Nepoc has quit IRC22:34
*** Nepoc has joined #openstack-ansible22:35
*** cloudtrainme has quit IRC22:37
*** Brew has quit IRC22:40
openstackgerritJimmy McCrory proposed openstack/openstack-ansible-galera_server: Reorganize test playbooks  https://review.openstack.org/30309022:43
*** sdake has quit IRC22:54
*** sdake has joined #openstack-ansible22:58
*** cloader89 has joined #openstack-ansible23:00
openstackgerritMichael Carden proposed openstack/openstack-ansible-ironic: Add tests for the ironic CLI  https://review.openstack.org/30310423:05
*** rstarmer has quit IRC23:06
openstackgerritJimmy McCrory proposed openstack/openstack-ansible-galera_server: Reorganize test playbooks  https://review.openstack.org/30309023:07
neillccloudnull: horizon performance is a real problem, especially at scale.  it's a major focus of work for horizon devs.  answer is things like the new swift ui r1chardj0n3s has been working on and perhaps searchlight23:10
cloudnullthanks neillc javeria -- not here now -- was working with it and seeing issues w/ liberty23:12
*** sdake has quit IRC23:12
cloudnullany tips I'd be able to pass on regarding making it better?23:12
*** KLevenstein has quit IRC23:12
cloudnullthanks for getting back to me btw neillc23:12
neillcnp23:13
neillcsadly I don't have any great tips. richard is probably the person to ask, but not sure if he will have any either23:13
neillcmany of the problems are about the apis we consume, combined with old style full page loads on every refresh23:14
*** busterswt has joined #openstack-ansible23:14
stevelledoes horizon use the python-* clients or its own REST APIs, generally?23:20
openstackgerritJimmy McCrory proposed openstack/openstack-ansible-galera_server: Reorganize test playbooks  https://review.openstack.org/30309023:28
*** markvoelker has quit IRC23:29
*** jamielennox|away is now known as jamielennox23:36
*** eil397 has left #openstack-ansible23:46
*** busterswt has quit IRC23:52
*** markvoelker has joined #openstack-ansible23:59
