Wednesday, 2018-12-26

openstackgerritKevin Carter (cloudnull) proposed openstack/ansible-role-systemd_mount master: Ensure mounts are able to be network aware  https://review.openstack.org/62731802:02
*** xenefix has quit IRC02:04
*** hwoarang has quit IRC02:17
*** hwoarang has joined #openstack-ansible02:21
prometheanfirecloudnull: you're welcome :D02:37
prometheanfirealso, working on christmas? for shame02:38
*** udesale has joined #openstack-ansible04:05
*** thuydang has quit IRC04:24
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Use tempest_tempestconf_profile for handling named args  https://review.openstack.org/62318705:11
*** shyamb has joined #openstack-ansible05:15
*** shyam89 has joined #openstack-ansible05:51
*** shyamb has quit IRC05:53
*** galaxyblr has joined #openstack-ansible06:18
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Use tempest_tempestconf_profile for handling named args  https://review.openstack.org/62318706:53
*** galaxyblr has quit IRC07:24
openstackgerritmelissaml proposed openstack/openstack-ansible-os_neutron master: fix url in doc  https://review.openstack.org/62736007:40
openstackgerritmelissaml proposed openstack/openstack-ansible-os_searchlight master: fix url in doc  https://review.openstack.org/62736107:44
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Use tempest_tempestconf_profile for handling named args  https://review.openstack.org/62318708:09
openstackgerritmelissaml proposed openstack/openstack-ansible-pip_install master: fix the url in doc  https://review.openstack.org/62737108:14
openstackgerritmelissaml proposed openstack/ansible-hardening master: fix the url in developer-guide.rst  https://review.openstack.org/62737308:20
*** shyam89 has quit IRC08:43
*** shyamb has joined #openstack-ansible08:45
*** shyamb has quit IRC09:24
*** luksky has joined #openstack-ansible09:53
*** nyloc has quit IRC10:08
*** shyamb has joined #openstack-ansible10:12
*** udesale has quit IRC10:17
*** vnogin has joined #openstack-ansible10:21
*** udesale has joined #openstack-ansible10:21
*** nyloc has joined #openstack-ansible10:22
*** udesale has quit IRC10:23
*** udesale has joined #openstack-ansible10:23
*** vnogin has quit IRC11:12
*** noonedeadpunk has joined #openstack-ansible11:31
*** thuydang has joined #openstack-ansible12:22
*** thuydang has quit IRC12:27
*** thuydang has joined #openstack-ansible12:27
*** shyamb has quit IRC12:49
*** gkadam has joined #openstack-ansible13:02
*** tosky has joined #openstack-ansible13:20
*** luksky has quit IRC13:26
*** gkadam_ has joined #openstack-ansible14:24
*** gkadam has quit IRC14:26
*** gkadam_ is now known as gkadam-afk14:51
*** gkadam-afk is now known as gkadam15:23
*** hamzaachi has joined #openstack-ansible15:57
redkriegHas anyone seen glance have privsep errors when trying to use the cinder backend?  in the glance api log I see "privsep log: sudo: no tty present and no askpass program specified" in addition to the traceback: http://paste.openstack.org/show/738655/16:14
redkriegalso looks similar to this bug, but this one's for kolla-ansible: https://bugs.launchpad.net/kolla/+bug/168289016:14
openstackLaunchpad bug 1702842 in kolla-ansible ocata "duplicate for #1682890 glance do not support cinder backend" [High,In progress] - Assigned to Jeffrey Zhang (jeffrey4l)16:14
*** markvoelker has quit IRC16:16
*** luksky has joined #openstack-ansible16:34
redkriegI don't see any sudoers entry in my glance-api container, so I suspect that might be the root cause.  Trying to figure out what it should be right now, I copied the neutron sudoers changing everything to glance, this got me a little further: http://paste.openstack.org/show/738656/16:43
*** hwoarang has quit IRC16:45
redkriegSolved the above by activating the glance venv and doing "pip install oslo.rootwrap".  Now I'm tracking down iscsi files being missing.  Could not find the iSCSI Initiator File /etc/iscsi/initiatorname.iscsi: ProcessExecutionError: Unexpected error while running command.16:47
*** hwoarang has joined #openstack-ansible16:47
redkriegwondering if nobody has used cinder as a backend for glance in openstack-ansible before16:47
redkrieghad to install open-iscsi in the glance api container as well.  now it's trying to connect to a cinder backend in the wrong location16:59
redkrieg*sigh*16:59
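The fixes redkrieg worked through above could be sketched roughly as the shell steps below, run inside the glance-api container. This is an illustrative reconstruction, not a verified recipe: the venv path follows openstack-ansible conventions, and the sudoers content is a guess modelled on "copied the neutron sudoers changing everything to glance" from the log.

```shell
# 1. Give glance a sudoers entry so privsep can escalate without a tty
#    ("sudo: no tty present and no askpass program specified").
#    The rule below is a hypothetical, overly broad example; narrow the
#    command list for a real deployment.
cat > /etc/sudoers.d/glance_sudoers <<'EOF'
Defaults:glance !requiretty
glance ALL = (root) NOPASSWD: /openstack/venvs/glance-*/bin/*
EOF
chmod 0440 /etc/sudoers.d/glance_sudoers

# 2. rootwrap was missing from the glance venv
/openstack/venvs/glance-*/bin/pip install oslo.rootwrap

# 3. the iSCSI initiator tooling was missing from the container
apt-get install -y open-iscsi
```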
*** gkadam has quit IRC17:06
cloudnullprometheanfire hahaha, it's just a day of the week for me17:20
cloudnullredkrieg I tried using cinder as a backend store for glance a long while back (mitaka time frame?), it didn't work quite right. I've not tried since17:21
cloudnullI generally use swift, ceph or nfs.17:22
redkriegI see.  seemed like openstack-ansible's storage node configs were geared toward cinder.  do you use a storage node for swift?17:23
cloudnullyes.17:25
redkriegdoes swift require uid/gid sync like nfs does?17:26
redkriegI'm really just trying to get a functional setup going with a storage node and it has become quite the chore.  turns out glance's cinder backend didn't even specify the region until a fix was committed 6 months ago17:28
redkriegcloudnull: do you happen to have an example of your swift configs for openstack-ansible?17:30
cloudnullsure let me grab the one i have in my local lab17:32
*** udesale has quit IRC17:33
cloudnullhttps://gist.github.com/cloudnull/eab22ba500843343470a7fd1e3f55d1c#file-openstack_user_config-yml-L74-L8717:36
cloudnullthat's the config.17:36
cloudnullhttps://pasted.tech/pastes/d37806d92a4b5c30ab12fa6979da4457a14b9cba - for this setup, disk{1,2,3} are logical volumes mounted at `/srv/disk{1,2,3}`17:38
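As a rough illustration of the kind of `openstack_user_config.yml` fragment the gist above shows (the host name, IP, and drive names here are placeholders, not cloudnull's actual values; the drive names must match the `/srv/<name>` mount points on the storage host):

```yaml
swift_hosts:
  storage1:
    ip: 172.29.236.20
    container_vars:
      swift_vars:
        zone: 0
        drives:
          - name: disk1   # mounted at /srv/disk1
          - name: disk2   # mounted at /srv/disk2
          - name: disk3   # mounted at /srv/disk3
```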
redkriegwill swift complain if I only have one volume to start?17:38
redkriegalso thank you so much17:38
cloudnullnope.17:38
cloudnullunder the drives section, https://gist.github.com/cloudnull/eab22ba500843343470a7fd1e3f55d1c#file-openstack_user_config-yml-L78, just specify one disk17:38
cloudnullIMHO swift is the easiest way to back glance.17:39
cloudnullswift doesn't need massive arrays, it just needs to be big enough for your intended images.17:41
redkriegthat'd be perfect.  does it back volumes as well for snapshots, etc?17:41
cloudnullyes.17:41
cloudnullif you have swift in config, and deploy it, you can rerun the cinder playbooks to setup your volume-backup service17:42
redkriegI see the storage network device here, that can't be a bridge I assume17:48
redkrieghmm, default config uses the bridge, I'll give it a shot with that.  I don't use bonds17:50
cloudnullit can be a bridge. just needs to be something that has an address.17:50
cloudnullif you have something like br-storage or br-mgmt, that would be perfectly fine.17:50
redkriegthanks.  I'm reconfiguring disks on my storage box now, hopefully things "just work" after I run playbooks17:51
cloudnull:)17:52
cloudnullmake sure to mount the disks at `/srv/${WHAT_EVER_YOU_NAMED_IT}`17:53
cloudnulland format it XFS17:53
cloudnullwhen I formatted the disks for swift I used the following command, `mkfs.xfs -K -d agcount=64 -l size=128m -f ${DEVICE_PATH}`17:54
cloudnullthe playbooks will not format and mount the disks for you.17:55
redkriegany particular reason for those options?  I see the examples only use -f -i size=1024 -L ${DEVICE_PATH}17:55
redkrieghttps://docs.openstack.org/openstack-ansible-os_swift/latest/configure-swift-devices.html17:55
cloudnullagcount is set to 64, it's just been a good value for me. default is "scaled" based on the underlying system and I have a distrust of that.17:57
redkriegfair enough17:57
cloudnullagcount being allocation groups17:57
cloudnullsame is true for size, however on real disks that flag should not be needed.17:58
* cloudnull is using logical volumes 17:58
cloudnullthat said, setting a defined log size is desirable if you will be using disks of different sizes over the lifetime of the cloud.18:00
redkriegyeah, I'm doing an lv on hardware raid18:02
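The disk preparation discussed above could be sketched as follows. The LV path and drive name are placeholders, and the mount options are the commonly recommended ones for swift drives rather than anything stated in the log; as cloudnull notes, the playbooks will not format or mount the disks for you.

```shell
# Placeholder device: a logical volume, as in cloudnull's setup
DEVICE_PATH=/dev/vg0/disk1

# Format XFS with a fixed allocation-group count and log size
# (cloudnull's command); the OSA docs show a simpler variant,
# `mkfs.xfs -f -i size=1024 ...`
mkfs.xfs -K -d agcount=64 -l size=128m -f "${DEVICE_PATH}"

# Mount at /srv/<drive name as configured in openstack_user_config.yml>
mkdir -p /srv/disk1
mount -o noatime,nodiratime,logbufs=8 "${DEVICE_PATH}" /srv/disk1

# Persist across reboots
echo "${DEVICE_PATH} /srv/disk1 xfs noatime,nodiratime,logbufs=8 0 0" >> /etc/fstab
```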
*** tosky has quit IRC18:19
*** xenefix has joined #openstack-ansible18:20
redkriegare the swift-proxy_hosts the compute nodes?18:28
cloudnullfor my little setup they're on the infra hosts18:42
cloudnullhttps://gist.github.com/cloudnull/eab22ba500843343470a7fd1e3f55d1c#file-openstack_user_config-yml-L16018:42
redkriegoh you have swift_hosts on your compute?  I have a storage node, should that be configured as both?18:43
cloudnullno, you can have separate storage hosts.18:45
cloudnullit's on my compute host because I didn't have another machine18:45
redkriegawesome, thanks18:46
*** ESpiney has joined #openstack-ansible19:18
*** markvoelker has joined #openstack-ansible20:19
redkriegcloudnull: not sure if you're still around, but I hit an error installing swift.  the "Retrieve checksum for venv download" task is giving a 404 for the checksum file: http://paste.openstack.org/show/738659/20:47
redkriegconfirmed other services have files in /var/www/repo/venvs/18.1.0/ubuntu-18.04-x86_64 on the repo container, but not swift20:51
*** hamzaachi has quit IRC22:30
*** macza has joined #openstack-ansible22:44
*** macza has quit IRC22:46
redkriegdoes anyone know what triggers a service to be built and available on the repo container?  I'm missing swift on mine and I'm a bit stumped as to why22:56
*** luksky has quit IRC23:16
*** thuydang has quit IRC23:36
*** vnogin has joined #openstack-ansible23:37
*** vnogin has quit IRC23:41

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!