Thursday, 2020-12-17

00:01 *** tosky has quit IRC
00:38 *** rfolco has joined #openstack-ansible
01:02 *** jamesdenton has joined #openstack-ansible
01:20 *** rfolco has quit IRC
01:24 *** kukacz has quit IRC
01:24 *** gyee has quit IRC
01:24 *** evrardjp has quit IRC
01:24 *** CeeMac has quit IRC
01:24 *** mmercer has quit IRC
01:24 *** tinwood has quit IRC
01:24 *** nicolasbock has quit IRC
01:24 *** sri_ has quit IRC
01:24 *** alanmeadows has quit IRC
01:24 *** Open10K8S has quit IRC
01:24 *** sc has quit IRC
01:24 *** jroll has quit IRC
01:30 *** kukacz has joined #openstack-ansible
01:30 *** gyee has joined #openstack-ansible
01:30 *** evrardjp has joined #openstack-ansible
01:30 *** CeeMac has joined #openstack-ansible
01:30 *** mmercer has joined #openstack-ansible
01:30 *** tinwood has joined #openstack-ansible
01:30 *** nicolasbock has joined #openstack-ansible
01:30 *** sri_ has joined #openstack-ansible
01:30 *** alanmeadows has joined #openstack-ansible
01:30 *** Open10K8S has joined #openstack-ansible
01:30 *** sc has joined #openstack-ansible
01:30 *** jroll has joined #openstack-ansible
01:49 *** dave-mccowan has joined #openstack-ansible
02:03 *** waxfire has quit IRC
02:03 *** waxfire has joined #openstack-ansible
02:05 *** macz_ has quit IRC
02:05 *** macz_ has joined #openstack-ansible
02:08 *** tinwood has quit IRC
02:11 *** tinwood has joined #openstack-ansible
03:11 *** gyee has quit IRC
04:06 *** klamath_atx has joined #openstack-ansible
04:12 *** klamath_atx has quit IRC
04:13 *** klamath_atx has joined #openstack-ansible
04:28 *** macz_ has quit IRC
05:10 *** kukacz has quit IRC
05:33 *** evrardjp has quit IRC
05:33 *** evrardjp has joined #openstack-ansible
05:35 *** kukacz has joined #openstack-ansible
05:41 *** shyamb has joined #openstack-ansible
06:30 *** dasp has quit IRC
06:47 *** dasp has joined #openstack-ansible
06:52 *** shyamb has quit IRC
07:52 <openstackgerrit> Merged openstack/openstack-ansible-os_glance stable/ussuri: Trigger uwsgi restart  https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/767144
08:11 *** andrewbonney has joined #openstack-ansible
08:14 <openstackgerrit> Merged openstack/openstack-ansible-os_glance stable/train: Trigger uwsgi restart  https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/767145
08:18 <openstackgerrit> Andrew Bonney proposed openstack/openstack-ansible master: Disable repeatedly failing zun tempest test  https://review.opendev.org/c/openstack/openstack-ansible/+/767469
08:26 *** maharg101 has joined #openstack-ansible
08:31 *** maharg101 has quit IRC
08:42 *** jawad_axd has joined #openstack-ansible
08:43 <jrosser> debian memcached<>keystone trouble again https://zuul.opendev.org/t/openstack/build/0869d255089f41dba7c8cef7ff8cd26c/log/logs/host/keystone-wsgi-public.service.journal-00-29-15.log.txt#20394-20450
08:44 <openstackgerrit> Andrew Bonney proposed openstack/openstack-ansible-os_zun master: Update zun role to match current requirements  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/763141
08:45 <noonedeadpunk> I can't recall the previous time tbh...
08:48 *** jbadiapa has joined #openstack-ansible
08:49 <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/ussuri: Bump SHAs for stable/ussuri  https://review.opendev.org/c/openstack/openstack-ansible/+/766860
08:50 <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/train: Bump SHAs for stable/train  https://review.opendev.org/c/openstack/openstack-ansible/+/766859
08:51 <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/ussuri: Bump SHAs for stable/ussuri  https://review.opendev.org/c/openstack/openstack-ansible/+/766860
08:54 *** maharg101 has joined #openstack-ansible
09:12 *** tosky has joined #openstack-ansible
09:17 *** shyamb has joined #openstack-ansible
09:25 <admin0> morning
10:09 <pto> morning
10:11 *** shyamb has quit IRC
10:12 <pto> jrosser: Are you online?
10:16 <pto> It doesn't look like this part is being run: https://github.com/openstack/openstack-ansible/blob/ba8b6af1740aa98ec930178a206f1ac248b026fc/playbooks/os-keystone-install.yml#L154
10:17 <pto> The tasks_from in the role include seems to be discarded
10:18 *** shyamb has joined #openstack-ansible
10:20 *** jpward has quit IRC
10:36 <jrosser> pto: hello
10:36 <pto> jrosser: I am just testing the idp fix, and it's not working
10:37 <jrosser> pto: can you paste an example?
10:38 <pto> jrosser: Sure. I will run again and pipe to a file. The tasks_from part is ignored and the os_keystone role is run again without the IDP part
10:38 <jrosser> unfortunately we have no means to test this in CI so i was waiting for feedback on the patch
10:38 <pto> jrosser: That's why I am testing it :-)
10:39 <jrosser> oh, maybe i see what's wrong
10:40 <jrosser> could you take the '.yml' off the end of the tasks_from line?
10:40 <pto> Sure. Testing now
10:43 *** kukacz has quit IRC
10:44 *** shyamb has quit IRC
10:47 *** tosky has quit IRC
10:47 *** tosky has joined #openstack-ansible
10:47 <openstackgerrit> Marcus Klein proposed openstack/openstack-ansible-os_octavia master: Do not set amp_ssh_access_allowed configuration option any more.  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767511
10:48 <kleini> jrosser, noonedeadpunk: ^^^ this is a small improvement to fix a configuration warning in Octavia.
10:49 <pto> jrosser: https://pastebin.com/DnEc86HR
10:49 <pto> jrosser: Tasks are still skipped
10:50 <jrosser> pto: sorry, i have meetings for a while
10:51 <kleini> the other warning is about amp_image_id. looking at the role, I would just remove octavia_amp_image_id and its amp_image_id part in the config template, but I am not sure what the best approach would be in this regard
10:51 <pto> jrosser: No worries. I will open a bug and propose a fix
10:59 *** rpittau|afk is now known as rpittau
11:00 <noonedeadpunk> kleini: and it should probably be backported as well, back to train...
11:07 <kleini> noonedeadpunk: cherry pick in Gerrit shows conflicts. what is the better way to solve these conflicts? locally cherry-picking to train and ussuri and pushing new reviews, or accepting the conflicts and solving them by downloading the review?
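The local workflow being weighed here can be sketched as follows (the commit sha and branch name are placeholders; the important detail is that the Change-Id trailer from the original commit survives the cherry-pick, so Gerrit links the backport to the master change):

```shell
# Backport a merged master commit to stable/train by hand (sketch only)
git fetch origin
git checkout -b backport/train origin/stable/train
git cherry-pick -x <master-commit-sha>   # -x records the source sha; stops on conflict
# resolve any conflicts, then:
git add -A
git cherry-pick --continue               # keep the Change-Id trailer intact
git review stable/train                  # push the backport for review
```

Either approach works; resolving locally just makes the conflict resolution explicit in your own checkout before anything reaches Gerrit.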
11:07 <kleini> noonedeadpunk: what do you think about https://docs.openstack.org/octavia/ussuri/configuration/configref.html#controller_worker.amp_image_id
11:08 <openstackgerrit> PerToft proposed openstack/openstack-ansible master: The current version does not include the os_keystone role correctly, as it will run the role again, ignoring the tasks_from: main_keystone_federation_sp_idp_setup.yml part. This fix has been tested and now it correctly configures the SP/IDP config. Paste of a test: https://pastebin.com/DnEc86HR  https://review.opendev.org/c/openstack/openstack-ansible/+/767513
11:10 *** kukacz has joined #openstack-ansible
11:20 <openstackgerrit> Marcus Klein proposed openstack/openstack-ansible-os_octavia stable/ussuri: Do not set amp_ssh_access_allowed configuration option any more.  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767494
11:20 <openstackgerrit> Marcus Klein proposed openstack/openstack-ansible-os_octavia stable/train: Do not set amp_ssh_access_allowed configuration option any more.  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767495
11:29 <openstackgerrit> Marcus Klein proposed openstack/openstack-ansible-os_octavia stable/train: Do not set amp_ssh_access_allowed configuration option any more.  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767495
11:30 <openstackgerrit> Marcus Klein proposed openstack/openstack-ansible-os_octavia stable/ussuri: Do not set amp_ssh_access_allowed configuration option any more.  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767494
11:33 *** rohit02 has joined #openstack-ansible
11:47 <openstackgerrit> Marcus Klein proposed openstack/openstack-ansible-os_octavia master: Remove octavia_amp_image_id as it is deprecated.  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767516
11:58 <openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible master: Fix keystone IDP setup  https://review.opendev.org/c/openstack/openstack-ansible/+/767513
11:58 <jrosser> pto: thanks for the patch & testing, i tidied up the commit message formatting ^
12:02 <pto> Do you know how I can change the domain name? It does not look pretty with a uuid in horizon: https://pasteboard.co/JFkWqgo.png
12:16 *** rfolco has joined #openstack-ansible
12:25 <openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible-os_keystone master: Add no_log to LDAP domain config  https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/767525
12:39 <pto> No, the trust_idp_list name appears (WAYF)
12:59 *** macz_ has joined #openstack-ansible
13:04 *** macz_ has quit IRC
13:09 *** spatel has joined #openstack-ansible
13:10 <noonedeadpunk> kleini: I'd drop amp_image_id with the same patch tbh
13:10 <noonedeadpunk> but it needs a reno I guess
13:11 <noonedeadpunk> as in Victoria these options are not valid
13:15 *** spatel has quit IRC
13:47 *** spatel has joined #openstack-ansible
14:01 <kleini> noonedeadpunk: okay, so I add my second review to the first one, right? is that correct with the file for the release notes?
14:02 <noonedeadpunk> I think you can just amend the first commit and do git review
14:12 <openstackgerrit> Marcus Klein proposed openstack/openstack-ansible-os_octavia master: Omit amp_ssh_access_allowed and remove amp_image_id options.  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767511
14:18 <kleini> noonedeadpunk: I know how to work with Git and Gerrit. But I am too lazy to read all the guides regarding documentation and so on.
14:22 <kleini> oh, there is a tool named reno for release notes
14:23 <kleini> I love my Manjaro providing python-reno from AUR
14:28 <spatel> Ubuntu question: I have a bunch of these loop devices on latest ubuntu 20.04 - /dev/loop1             72M   72M     0 100% /snap/lxd/16099
14:28 <openstackgerrit> Marcus Klein proposed openstack/openstack-ansible-os_octavia master: Omit amp_ssh_access_allowed and remove amp_image_id options.  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767511
14:28 <spatel> what are these?
14:28 <spatel> do i need those for LXC?
14:29 <kleini> they are from the snap package manager
14:29 <spatel> do you guys keep them in production openstack?
14:29 <kleini> lxd obviously is only provided in Ubuntu through a snap package
14:30 <kleini> I hate those loopback devices from the snap package manager and uninstall snapd immediately if I see it somewhere
14:30 <spatel> but OSA also installs LXC, so do we need the snap LXC?
14:30 <noonedeadpunk> ah, yes, sorry for not saying that: it's just installing reno from pypi or wherever and running `reno new`
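The reno step noonedeadpunk describes is small; a sketch (the slug `remove-amp-image-id` is only an example name):

```shell
# In the role's git checkout: install reno and scaffold a release note
pip install reno
reno new remove-amp-image-id
# reno prints the path of the new file, e.g.
#   releasenotes/notes/remove-amp-image-id-<random>.yaml
# edit it (an `upgrade:` and/or `deprecations:` section) and commit it with the change
```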
14:30 <spatel> I'd love to get rid of everything that I don't care about and OSA doesn't care about..
14:30 <kleini> At least with 18.04 LXC is provided through normal packages. I don't know about 20.04.
14:31 <kleini> noonedeadpunk: already fixed
14:31 <spatel> I found OSA installed LXC + I already had LXC via snap
14:32 <spatel> I am going to remove snap, but let me confirm with more folks that it's safe and won't create issues in future deployments
14:32 <spatel> @jrosser
14:32 <noonedeadpunk> yep, thanks!
14:32 <kleini> jrosser, noonedeadpunk: if you basically agree with https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767511 I will backport that to my cherry-picks for T and U
14:33 <noonedeadpunk> let's leave T and U as is
14:33 <noonedeadpunk> we can't really backport removal of variables. well, we can, but we should not do that
14:33 <noonedeadpunk> unless it's really required
14:34 <kleini> sure, didn't think about that
14:34 <jrosser> spatel: are you sure it is lxc and not lxd?
14:35 <spatel> oh wait.. so they are different? i thought lxc was the client and lxd the daemon
14:35 <noonedeadpunk> has anybody tried out multiattach for cinder and ceph?
14:35 <jrosser> because we do nothing at all with snaps in OSA CI for focal, unless it's actually installing a snap sneakily when we install lxc
14:35 <jrosser> spatel: yes, they are totally different
14:35 <spatel> jrosser: check out - http://paste.openstack.org/show/801131/
14:36 <jrosser> right, but that's lxd
14:36 <spatel> you will see a bunch of snap stuff, is it safe to remove snap?
14:36 <spatel> yes, lxd (it has been there since i installed the fresh OS)
14:37 <jrosser> lxd is a daemon and API and all sorts of shiny new stuff as a layer on top of the same technology that makes lxc
14:37 <jrosser> you should be able to uninstall all of that
14:37 <spatel> perfect! that is what i was looking for.
14:38 <spatel> just wanted to make sure i don't remove something that makes ubuntu angry
14:38 <jrosser> fwiw i use LXD a lot outside of OSA
14:38 <jrosser> it is very nice
14:38 <jrosser> but the snap installer is unfortunate, imho
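If the snap-installed lxd really is unused (OSA installs lxc from distro packages), the cleanup spatel is after would look roughly like this; verify first that nothing else on the host is delivered via snap:

```shell
snap list                       # see what is installed via snap
sudo snap remove --purge lxd    # remove the lxd snap and its data
sudo apt purge -y snapd         # optional: removes snapd and ALL remaining snaps
df -h | grep /dev/loop          # the /snap/... loop mounts should now be gone
```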
14:39 <spatel> hmm, maybe good for a laptop, not sure about a production cloud
14:41 <kleini> noonedeadpunk: how is that supposed to work? two machines writing to the same physical block device. this has to break, no?
14:43 *** rohit02 has quit IRC
14:43 *** pto has quit IRC
14:44 <noonedeadpunk> well, you can mount it as ro eventually
14:44 <noonedeadpunk> for a proper shared filesystem manila should be used
14:44 <noonedeadpunk> but for simple cases (or when you place only config files there) it should be pretty ok
14:44 <noonedeadpunk> (I guess)
14:44 <kleini> i think that depends on the filesystem
14:45 <noonedeadpunk> well, yes...
14:54 <admin0> spatel, the whole of ubuntu is moving towards snap (which is pushing me towards debian) :)
14:55 <admin0> lxc and lxd are different .. (we are still using the old lxc) for some reason though
14:55 <admin0> noonedeadpunk, multiattach for ceph does not work
14:55 <admin0> i have tried cinder with 2 different ceph backends
14:55 <admin0> it works in cinder, you can create volumes on both cephs .. but it fails on nova
14:56 <noonedeadpunk> Well, I can't even create in cinder, and https://access.redhat.com/errata/RHBA-2020:2161 seems related to me...
14:56 <admin0> 404 page
14:56 <tbarron> kleini: noonedeadpunk with cinder multi-attach all responsibility for write arbitration is up to the user (vs shared filesystems), so
14:56 <admin0> i have the code somewhere on cinder + dual ceph
14:57 <admin0> osa + cinder + 2x ceph
14:57 <tbarron> kleini: noonedeadpunk you can put a node-local filesystem on there if you can ensure all consumers will be from that same node
14:57 <tbarron> kleini: noonedeadpunk you can put a clustered file system on there
14:58 <tbarron> kleini: noonedeadpunk or you can run an app that does its own write arbitration and works with raw block offsets instead of file system paths
14:58 <tbarron> kleini: noonedeadpunk but yeah, with conventional applications it's easy to get in trouble
14:59 <tbarron> kleini: noonedeadpunk even a "read only" local filesystem mount won't necessarily be actually read only to the block device
15:01 <noonedeadpunk> yeah, I totally understand that multiattach is not really a good idea. But I'm in a situation where I need to share just several config files that are almost never going to change, and have no time for a manila deployment :)
15:01 <tbarron> kleini: noonedeadpunk Clearly I'm not speaking atm to the nova/cinder implementation issues, just the architectural issues.
15:02 <tbarron> noonedeadpunk: kk
15:02 * tbarron aims to get manila deployment so easy that "no time for manila" becomes a distant memory, but knows we aren't there yet
15:03 <admin0> noonedeadpunk, here is my working code https://gist.github.com/a1git/6898a29d84008e0b01556e899249a87f
15:03 <admin0> i created a 2nd ceph cluster .. added it as HDD .. was able to create volumes .. could not mount it .. nova reads only 1 ceph.conf and has no support/clue of the 2nd one
15:04 <admin0> maybe my setup was wrong and you will have better results
15:05 <admin0> sorry, i forgot to remove the # at the end .. #rbd_user: hdd-cinder => rbd_user: hdd-cinder
15:05 <admin0> multi nfs, multi iscsi, all work fine
15:05 <noonedeadpunk> tbarron: well, it's not only about time for deployment; when you need this in some public region you face issues like whether you need billing for it or whether to expose it to customers, maintenance agreements and so on :P
15:06 <admin0> noonedeadpunk, use S3 for that
15:06 <admin0> swift
15:07 <noonedeadpunk> I think I'm doing something a bit different :) multiattach is the ability to mount the same rbd drive to several instances, while you're trying to achieve multicluster support here, which is not going to work
15:07 <admin0> or set up a few volumes and gluster on top :)
15:07 <tbarron> noonedeadpunk: agree. FWIW I am thinking that the capability of Manila with manila-csi to offer RWX storage to k8s on openstack may help drive public cloud readiness.
15:07 <noonedeadpunk> and you would need to separate nova into different AZs or aggregates
15:09 <noonedeadpunk> admin0: yep, I'm using s3 for multimedia, but the application does not support storing config in s3...
15:10 <noonedeadpunk> unless I missed a way to just mount a bucket on the host
15:10 <admin0> it exists :)
15:10 <admin0> s3fs
15:11 <noonedeadpunk> s3fs-fuse o_O
15:11 <noonedeadpunk> admin0: ok, thanks!
15:11 <noonedeadpunk> that is going to solve my issue
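A minimal s3fs-fuse sketch for the "mount a bucket on the host" idea above; the bucket name, endpoint and credentials are placeholders, and `use_path_request_style` is commonly needed for radosgw or other non-AWS endpoints:

```shell
# Credentials file in ACCESS_KEY:SECRET_KEY format
echo 'ACCESSKEY:SECRETKEY' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
mkdir -p /mnt/app-config
# Mount the bucket read-only, since the use case is shared config files
s3fs app-config /mnt/app-config -o ro \
    -o passwd_file=${HOME}/.passwd-s3fs \
    -o url=https://s3.example.com \
    -o use_path_request_style
```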
15:26 *** poopcat has quit IRC
15:33 <admin0> going through the trove documentation, i think its network requirement is the same as octavia's
15:33 <admin0> the trove images need access to dbaas containers
15:34 <admin0> trove api i meant
15:36 <spatel> jamesdenton: around?
15:37 <spatel> Do you care about dpdk support for centos8? we have a couple of commits hanging around.
15:37 <jrosser> admin0: I think trove instances want to talk directly to rabbitmq 8-O
15:37 <admin0> yeah .. i read that and was thinking how to wire the network
15:38 <spatel> jrosser: why does trove need rabbitmq?
15:38 *** macz_ has joined #openstack-ansible
15:38 <jrosser> because that's how it was designed
15:38 <admin0> https://docs.openstack.org/openstack-ansible-os_trove/latest/configure-trove.html
15:39 <jrosser> and I find it a bit scary really
15:39 <spatel> trove should invoke the nova/neutron api to get the machine ready
15:39 <admin0> "The trove guest VMs need connectivity back to the trove services via RPC (oslo.messaging) and the OpenStack services."
15:39 <jrosser> well yes, but that machine needs to be on the mgmt network and uses rabbit directly
15:40 <admin0> so i need to reserve a few more ips in mgmt and use them inversely for trove :)
15:40 <admin0> i want to do osa+trove next
15:40 <jrosser> there's magnum/trove/octavia/manila all using service vms and all with different approaches
15:40 <jrosser> big headache
15:42 <spatel> hmm interesting - https://pt.slideshare.net/mirantis/trove-d-baa-s-28013400/7
15:42 <jrosser> admin0: if you have a router/fw, put trove on another subnet, maybe with rules only to get to rabbit?
15:42 *** macz_ has quit IRC
15:42 <spatel> all RPC calls
15:42 <admin0> i always have a small vyos that holds .1 of the mgmt ranges, but it is firewalled
15:42 <admin0> was thinking of the same
15:44 <admin0> i don't want to use a flat network .. i am thinking to do it on a tagged vlan (like octavia) and then add .1 to the vyos and allow it to talk to the mgmt network
15:44 <admin0> though if i allow it to talk to the mgmt network, i do not see a need to add the vlan
15:44 <admin0> i can just add a normal vlan network, add .1 in the vyos and allow it to talk to br-mgmt
15:44 <admin0> i will give it a try ..
15:45 <admin0> has anyone done swift with radosgw as backend? care to pass me the configs?
15:46 <admin0> i need to have swift .. but reading the docs, ring generation etc looks complicated
15:48 <admin0> if i add this as a regular ext-network, won't others be able to select it also?
15:58 *** poopcat has joined #openstack-ansible
15:59 <noonedeadpunk> admin0: yeah, it does
15:59 <noonedeadpunk> I think the trove role is not really complete
15:59 <noonedeadpunk> I'm going to do a trove installation early next year (I should have already started) and it feels like the role will be heavily adjusted
16:00 <noonedeadpunk> as eventually trove vms need messaging as well, but giving access to the mgmt network is a bad solution...
16:01 <noonedeadpunk> so we might need another rabbit cluster specifically for trove that would serve on the trove network, or we need to add the trove network to the rabbit containers
16:09 *** macz_ has joined #openstack-ansible
16:12 <admin0> i am playing with it now .. let me see how far i can go
16:13 <admin0> i am adding a vlan network .. (not as external) and making the configs now
16:13 <admin0> if i can get an instance to boot up, tcpdump will show what it wants to do
16:17 <spatel> noonedeadpunk: how is the Debian support for Debian buster?
16:18 <spatel> Just debating between Ubuntu vs Debian :)
16:19 <spatel> If we don't have lots of users running debian with OSA then i don't want to be the first person :)
16:19 <admin0> what's wrong with being the first person :D ?
16:19 <admin0> there always needs to be a first person
16:20 <admin0> :)
16:20 <admin0> the docs say supported .. so i guess all CI passes
16:21 <spatel> noonedeadpunk: CI vs running-in-production experience (missing stuff, breaking libs/drivers etc)
16:21 <spatel> dpdk support/sriov support (especially vendor driver support etc)
16:25 <jrosser> spatel: look at what your ceph packages situation would be for Debian
16:27 <spatel> Hmm! not sure what to look for, but i may google for the latest Ceph support for Debian.
16:28 <spatel> I don't want a last-minute surprise where someone says hey, Debian has no support for foo.. haha
16:28 <spatel> I know ubuntu is the primary OS for all openstack development, so not worried about ubuntu
16:32 <spatel> Look at this openstack survey - https://ibb.co/yBMKsMd
16:38 <spatel> Ubuntu is the way to go :)
16:40 *** pto has joined #openstack-ansible
16:42 *** stee has left #openstack-ansible
16:44 *** SecOpsNinja has joined #openstack-ansible
16:45 *** kukacz has quit IRC
16:45 *** pto has quit IRC
17:04 <SecOpsNinja> hi, one quick question: how do you normally export/view the logs of multiple systemd jobs in all the lxc containers? trying to open journalctl -xef in each container (nova-api, placement) and the nova compute service to find the cause of a vm creation stuck forever in the build state hasn't been an easy task (it's an even worse problem with debug logs...)
17:05 <openstackgerrit> Merged openstack/openstack-ansible-os_tempest stable/train: Switch tripleo job to content provider  https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/761021
17:08 <spatel> SecOpsNinja: logs are located in /openstack/log/<container>/
17:08 <spatel> SecOpsNinja: sometimes i use this command: tail -f /openstack/log/*/*.log
17:09 <SecOpsNinja> spatel, yep, i thought it wasn't including journal logs
17:10 <spatel> that is correct; for that you can use the syslog container
17:10 <spatel> all containers ship their journal to the syslog container
17:10 <SecOpsNinja> or at least it's not easy to read... i don't understand how syslog is working because i can't find the logs
17:10 <spatel> That is why ELK or some good centralization is required to debug all logs
17:11 <admin0> SecOpsNinja, i use graylog .. quick and easy to set up .. works nicely
17:11 <spatel> I hate journalctl personally (i miss old-school syslog txt files)
17:11 <SecOpsNinja> how do you use it? you ship from rsyslog to it?
17:12 <spatel> In my cloud i used syslog to read all the journalctl logs and ship them to graylog
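The "syslog reads the journal and ships to graylog" pattern spatel describes usually amounts to journald-to-rsyslog forwarding plus one forwarding rule; a hedged sketch (the graylog host and port are illustrative, not from this log):

```shell
# Enable journald -> local syslog forwarding on the host
sudo sed -i 's/^#\?ForwardToSyslog=.*/ForwardToSyslog=yes/' /etc/systemd/journald.conf
sudo systemctl restart systemd-journald
# Illustrative rsyslog rule shipping everything onward to a graylog syslog input
echo '*.* @graylog.example.com:1514' | sudo tee /etc/rsyslog.d/90-graylog.conf
sudo systemctl restart rsyslog
```

Note jrosser's caveat later in this discussion: going via syslog flattens the journal's structured metadata fields into plain text.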
17:13 <SecOpsNinja> but the problem is that in the rsyslog container i can't find where the logs of all the services are
17:13 <SecOpsNinja> because rsyslog doesn't expose any logs :D
17:13 <SecOpsNinja> in /openstack/logs....
17:13 <admin0> SecOpsNinja, will find my config and share
17:13 <SecOpsNinja> thanks
17:13 <admin0> i have a store of configs :)
17:14 *** kukacz has joined #openstack-ansible
17:16 <admin0> SecOpsNinja, https://gist.github.com/a1git/7232afe07f46474d5370113d609b9385
17:16 <spatel> agreed, we need an official OSA doc somewhere to deal with the logging issue, how to migrate from the syslog way to the journalctl way
17:17 <SecOpsNinja> is there anything i need to do in the osa default configuration in the user_variables file to force all containers to redirect their systemd logs to rsyslog?
17:17 <spatel> admin0: how are you shipping logs to graylog? using journalctl or legacy rsyslog?
17:17 <admin0> i dunno how this works .. i just do this and the logs magically appear in graylog :)
17:17 <spatel> admin0: haha
17:18 <admin0> when it works, i move on to something else in the todo :)
17:19 <spatel> SecOpsNinja: i have this configuration in rsyslog: http://paste.openstack.org/show/801134/
17:19 <spatel> ship these logs to graylog
17:20 <SecOpsNinja> ok, i will try to configure that because there isn't much information regarding configuring centralized logging... https://docs.openstack.org/openstack-ansible-rsyslog_server/latest/
17:21 <SecOpsNinja> even my rsyslog container doesn't have /var/log/rsyslog, so i don't know what is missing
17:21 <jrosser> the rsyslog stuff in OSA is really not useful
17:21 <jrosser> the journal is where the logs go for the most part
17:22 <ThiagoCMC> Hello! Does anyone have experience with Ansible's Dynamic Inventory (https://docs.ansible.com/ansible/latest/user_guide/intro_dynamic_inventory.html - that famous "openstack_inventory.py") for when the Inventory's source is, for example, a Project running on OpenStack (Heat or not)? I'm trying to understand how to aggregate my Instances (from a Heat Template, for example) into different "groups of servers in Ansible", but I have no idea how to do this.
17:22 <jrosser> if you arrange on the host yourself for the systemd journal to be sent to rsyslog then that will get you the syslog stuff back
17:22 <SecOpsNinja> and there is a mass of stuff; also there are things that aren't going to the journal but to a specific log file, which doesn't help either
17:22 <spatel> jrosser: I have a syslog container and i am seeing it's not receiving any logs from anywhere. (shouldn't all containers ship those to the syslog container?)
17:22 <jrosser> that's right
17:23 <jrosser> no, because you will see that is disabled somewhere in group_vars, with a link to the relevant systemd bug
17:23 <ThiagoCMC> When I run that `openstack_inventory.py --list`, I'm seeing things like `"server_groups": null` - not sure if it's related or not and, if yes, not sure how to set it from the OpenStack side, so Ansible will detect it and build the "group_vars" accordingly.
17:23 <jrosser> everything is present to make systemd remote journal forwarding work but it is disabled by default
17:24 <ThiagoCMC> Sorry to hijack the conversation!  =P
17:24 <spatel> jrosser: oh!! that makes sense
17:24 <spatel> jrosser: so is it a known bug or something related to OSA?
17:25 <jrosser> it is a bug in systemd; I'm not able to search it up right now, but it has to be disabled
17:25 <SecOpsNinja> so the rsyslog container should be destroyed if it's not doing anything. regarding systemd journal forwarding, how does that work?
17:25 <jrosser> it's a log container
17:25 <jrosser> syslog|systemd journal remote
17:26 <spatel> jrosser: i think i have to find an alternative for my new production to deal with this issue.
17:26 <jrosser> it was the target for both iirc
17:28 <SecOpsNinja> but yeah, i was able to find that the problem is between the nova api and the nova compute service, but i can't find where it's causing the problems in scheduling the vm, because it seems the scheduler is getting the updates from each compute node.
17:28 <jrosser> SecOpsNinja: it's worth looking a bit deeper at what openstack does with the systemd journal
17:28 <jrosser> because for example the request id is embedded in each log entry as a metadata field
17:28 <SecOpsNinja> jrosser, sorry, i didn't understand that
17:29 <jrosser> there is lots more data stored than just the log text
17:29 <jrosser> there is extra context and fields inserted by the oslo log layer
17:29 <SecOpsNinja> yep, that is something that i wasn't able to find: how to track a specific req id and how it travels between all the components
17:30 <jrosser> if you push it all into elk or similar you can make the req id first-class data to query against
17:30 <SecOpsNinja> ok, i will try to find that info because atm i'm completely lost how to resolve this if i can't find which component is causing the problem
17:30 <jrosser> rather than just being plain text to match against
17:30 <jrosser> you need some centralised logging really
17:31 <jrosser> and if you do it all via syslog somehow then that will throw away the hidden fields in the journal data, which are really helpful
17:31 <SecOpsNinja> we have an experimental loki installation that i can try to use...
17:31 <SecOpsNinja> now i need to see how to export to loki from all the containers
17:32 <SecOpsNinja> because the rsyslog container doesn't seem to do anything atm...
17:32 <jrosser> you only need to worry about the hosts, not the containers
17:33 <jrosser> because the journal files are mounted on the hosts from the containers
17:33 *** jawad_axd has quit IRC
17:33 <jrosser> so a good tool will be able to read all those journals if you give it the path
17:33 *** jawad_axd has joined #openstack-ansible
17:34 <jrosser> and, kind of neatly, your logging setup then only needs to know about your hosts, not the OSA inventory
17:34 <SecOpsNinja> hum.... using the /openstack/logs/*.system files on the host?
17:34 <jrosser> yes
17:34 <jrosser> there is an outstanding bug with ordering of the bind mounts, but there's a patch for that
17:35 *** jawad_axd has quit IRC
17:36 <SecOpsNinja> thanks, i will try to find a way to configure this and see if i can check the metadata that you spoke of
17:36 *** jawad_axd has joined #openstack-ansible
17:36 <jrosser> might be "-o verbose" for journalctl to see all the fields
17:40 <SecOpsNinja> ok, thanks
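Putting jrosser's pointers together, reading a container journal from the host bind mount looks roughly like this (the journal path is an assumption about this deployment's layout, and the exact field names oslo.log emits can vary):

```shell
# Read one container's journal straight from the host bind mount
journalctl --directory /openstack/log/<container>/ -o verbose | less
# -o verbose prints every stored field, not just MESSAGE; oslo.log adds
# metadata such as the request id, so once you know the field name you can
# follow a single request across services, e.g.:
journalctl --directory /openstack/log/<container>/ REQUEST_ID=req-xxxxxxxx
```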
17:41 *** jawad_axd has quit IRC
17:44 <spatel> jrosser: you are saying journal logs are mounted on host:/openstack/log/
17:44 <spatel> at present it's disabled, but when we fix that bug it will be, right?
17:49 *** gyee has joined #openstack-ansible
17:57 <jrosser> spatel: yes
17:57 <jrosser> and no
17:57 <jrosser> the journals are mounted on each host
17:58 <jrosser> forwarding off the host using systemd remote is disabled
18:02 <spatel> so in the current scenario what is the alternative? use a third-party tool to ship logs out?
18:06 <jrosser> right now openstack-ansible is not opinionated about how log collection is done
18:06 <jrosser> because everyone has their own preference
18:07 <jrosser> there is the example graylog and elk stuff in the openstack-ansible-ops repo
18:10 <mgariepy> graylog is nice.
18:10 <mgariepy> i haven't tested elk enough to have an opinion.
18:16 <spatel> mgariepy: i am also using graylog and it's really nice
18:16 <spatel> mgariepy: i would like it if you shared your dashboard, because i have a shitty one.. :)
18:17 <spatel> do you guys parse logs at the server end or the client end?
18:20 *** rpittau is now known as rpittau|afk
18:28 <mgariepy> my dashboard is also shitty lol
18:29 <mgariepy> do you push the logs via gelf?
18:31 <spatel> rsyslog to push logs
18:32 <spatel> this is all i have in the dashboard - https://ibb.co/smjTvvw
18:33 <spatel> This quickly tells me if an ERROR is happening anywhere in the stack
18:35 <spatel> mgariepy: does gelf have support to ship the journal?
18:42 *** pto has joined #openstack-ansible
18:46 *** pto has quit IRC
18:57 <mgariepy> spatel, https://github.com/openstack/openstack-ansible-ops/blob/master/graylog/graylog-forward-logs.yml
18:57 <mgariepy> yep, it does.
18:57 <spatel> sweet! so all i need to do is run that playbook, right?
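Roughly, yes; a hedged sketch of running the ops playbook from the deploy host (the paths are conventional rather than confirmed here, and the playbook's own variables should be reviewed before running):

```shell
git clone https://github.com/openstack/openstack-ansible-ops /opt/openstack-ansible-ops
cd /opt/openstack-ansible-ops/graylog
# run through the OSA wrapper so the deployment's dynamic inventory is used
openstack-ansible graylog-forward-logs.yml
```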
19:00 <admin0> can someone point me to the 16-18 upgrade notes again .. i can't seem to find them
19:07 <spatel> why not 20.04?
19:10 *** maharg101 has quit IRC
19:11 *** andrewbonney has quit IRC
19:11 <mgariepy> admin0, https://etherpad.opendev.org/p/osa-rocky-bionic-upgrade https://etherpad.opendev.org/p/osa-newton-xenial-upgrade https://docs.openstack.org/openstack-ansible/rocky/admin/upgrades/distribution-upgrades.html
19:11 <mgariepy> the xenial one there might also be good for some stuff.
19:12 <mgariepy> spatel, because rocky doesn't support 20.04.
19:12 <spatel> ah!
19:16 <admin0> thanks mgariepy
19:17 <mgariepy> admin0, do you have ephemeral storage?
19:17 <admin0> yes
19:18 <admin0> this one where i need to do the upgrade has a ceph backend
19:18 <admin0> does that make it easier, or harder?
19:18 <mgariepy> https://etherpad.opendev.org/p/osa-rocky-bionic-upgrade line 86.
19:19 <mgariepy> if you have local ephemeral storage on the computes, be careful not to be bitten by this
19:19 <mgariepy> ;)
19:20 <mgariepy> i guess it's going to be easier (or at least a lot faster) if it's all ceph, since you will need to migrate the vms from one compute to another while upgrading.
19:22 <admin0> i do not have ceph+ephemeral storage
19:23 <mgariepy> ok
19:23 <mgariepy> if it's all ceph it's going to be ok i guess ;)
19:35 <spatel> do you guys disable systemd-resolved?
20:00 <admin0> mgariepy, once i migrate the vms, i can just reinstall it fresh with 18 instead of doing an upgrade, right?
20:00 <admin0> a compute node, i mean
20:04 <mgariepy> if it's empty, yep, sure, why not?
20:26 <admin0> so based on the docs, i identified that my c3 is primary .. i reformatted my c1 and c2 (api uptime is not important) and they are now on 18.04 .. now the docs say i need to disable the c3 repo using .. echo "disable server repo_all-back/<infrahost>_repo_container-<hash>" | socat /var/run/haproxy.stat stdio
20:26 <admin0> this i do not understand
20:26 <admin0> or can i just delete this 3rd (16.04) repo and let it use the other 2?
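The quoted command is just haproxy's admin socket protocol; expanded, with the backend/server name left as the deployment-specific placeholder from the docs:

```shell
# List haproxy backends and servers to find the exact repo server name
echo "show stat" | socat stdio /var/run/haproxy.stat | cut -d, -f1,2
# Take the 16.04 node's repo server out of rotation (administratively down, reversible)
echo "disable server repo_all-back/<infrahost>_repo_container-<hash>" | socat stdio /var/run/haproxy.stat
# Re-enable it later, once the node is rebuilt:
echo "enable server repo_all-back/<infrahost>_repo_container-<hash>" | socat stdio /var/run/haproxy.stat
```

Disabling via the socket avoids deleting anything; haproxy simply stops routing requests to that server until it is re-enabled.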
20:34 <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Disable repeatedly failing zun tempest test  https://review.opendev.org/c/openstack/openstack-ansible/+/767469
20:40 <mgariepy> have fun during christmas guys, i'll be back in 2021.
20:58 <admin0> :)
20:58 <admin0> have a great holiday
20:58 <spatel> mgariepy: enjoy!!
21:06 *** maharg101 has joined #openstack-ansible
21:09 *** rfolco has quit IRC
21:10 *** MickyMan77 has quit IRC
21:10 *** maharg101 has quit IRC
21:27 *** pto has joined #openstack-ansible
21:28 *** pto has joined #openstack-ansible
21:33 *** pto has quit IRC
21:40 *** MickyMan77 has joined #openstack-ansible
21:48 *** SecOpsNinja has left #openstack-ansible
21:59 *** pto has joined #openstack-ansible
22:04 *** pto has quit IRC
23:15 *** jbadiapa has quit IRC
23:43 *** tosky has quit IRC

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!