Friday, 2022-01-28

opendevreviewMerged openstack/openstack-ansible master: Fix infra scenario repo server cluster  https://review.opendev.org/c/openstack/openstack-ansible/+/82646800:45
*** anbanerj is now known as frenzyfriday05:42
opendevreviewJonathan Rosser proposed openstack/openstack-ansible stable/xena: Fix infra scenario repo server cluster  https://review.opendev.org/c/openstack/openstack-ansible/+/82683107:16
opendevreviewJonathan Rosser proposed openstack/openstack-ansible stable/wallaby: Fix infra scenario repo server cluster  https://review.opendev.org/c/openstack/openstack-ansible/+/82683207:16
opendevreviewAndrew Bonney proposed openstack/openstack-ansible-plugins master: Add ssh_keypairs role  https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/82511310:45
*** dviroel_ is now known as dviroel11:03
opendevreviewMerged openstack/ansible-role-systemd_service master: Allow StandardOutput to be set for a systemd service  https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/82660211:32
opendevreviewMerged openstack/openstack-ansible-os_nova master: Drop cell1 upgrade to template format  https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/82650411:34
opendevreviewMerged openstack/openstack-ansible-os_aodh stable/victoria: Ensure libxml2 is installed on debian systems  https://review.opendev.org/c/openstack/openstack-ansible-os_aodh/+/82638011:57
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-os_zun master: Restore CI jobs  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/82445711:59
jrossernot good progress here https://github.com/logan2211/ansible-etcd/pull/2011:59
noonedeadpunkshould we pull it in opendev or just fork it to one of our githubs?12:20
jrosserfork would get us moving quickly12:21
jrosserif we then moved it to opendev we could gate it against the zun role12:21
noonedeadpunkyeah, let me fix that then12:21
jrosserbut this will take like 1 week minimum for all the setup12:22
noonedeadpunkwe can move to opendev from fork anytime as well :D12:22
jrosserabsolutely, yes12:22
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Fix tempest plugins versions  https://review.opendev.org/c/openstack/openstack-ansible/+/82612912:24
noonedeadpunkI hope logan- is doing fine after all...12:24
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Used forked etcd role  https://review.opendev.org/c/openstack/openstack-ansible/+/82688112:31
noonedeadpunkjrosser: ^12:31
andrewbonneynoonedeadpunk: I think there was a case of the 'static' keyword that also needed removing from the role12:32
andrewbonneyhttps://github.com/noonedeadpunk/ansible-etcd/blob/master/tasks/etcd_post_install_server.yml#L2612:32
noonedeadpunkmhm12:32
noonedeadpunkDone12:34
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-os_zun master: Restore CI jobs  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/82445712:35
jrosserlets see....12:35
jrosserupgrade jobs may still break though12:35
noonedeadpunkwell yes, we'd need to backport this change and do bump for upgrade jobs12:52
opendevreviewAndrew Bonney proposed openstack/openstack-ansible-os_nova master: Use ssh_keypairs role to generate cold migration ssh keys  https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/82530613:04
opendevreviewAndrew Bonney proposed openstack/openstack-ansible-plugins master: Add ssh_keypairs role  https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/82511313:05
admin1what could be the best way to override polling.yml in ceilometer so that it survives upgrades of osa ? 14:19
noonedeadpunkum, I thought it had an overrides var?14:19
noonedeadpunkbut I don't see it indeed...14:20
noonedeadpunkah lol I was looking in gnocchi)14:22
noonedeadpunkthere are 2 options - either provide a polling.yml file or define overrides14:22
noonedeadpunkceilometer_polling_default_file_path and ceilometer_polling_yaml_overrides control that14:22
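For reference, a minimal sketch of the override approach named above, placed in user_variables.yml so it survives OSA upgrades. Only the variable name comes from the log; the source name, interval and meter list are illustrative assumptions:

```yaml
# user_variables.yml -- override glance of polling.yml content via the
# os_ceilometer role instead of editing the file in place.
# Values below are illustrative assumptions.
ceilometer_polling_yaml_overrides:
  sources:
    - name: custom_pollsters
      interval: 300        # assumed polling interval, in seconds
      meters:
        - cpu
        - memory.usage
```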
spateladmin1 are you using ceilometer with gnocchi? I am thinking of trying it out for my new cloud to see what advantage i can take of it 14:30
noonedeadpunkI was using it a lot at one point14:30
noonedeadpunkBilling was relying on this data14:31
noonedeadpunkand it was pretty reliable14:31
noonedeadpunkupgrades were a disaster though, as items could get renamed or have the design behind them changed14:32
noonedeadpunkor just deprecated :)14:32
noonedeadpunkBut ceilometer is stable now, as it's barely maintained14:32
noonedeadpunkand for gnocchi you need some storage... likely redis... as rbd is kind of broken. Swift/S3 is fine though as well14:33
noonedeadpunk(rbd broken by design)14:33
noonedeadpunkas gnocchi tries to use rbd as pure rados, which results in thousands of small objects - you don't expect rbd to be sliced up that much, and it makes rebalances trickier. And ofc ceph complains about the pool having too many objects compared to the others14:35
noonedeadpunkand pure rados is basically object storage, so with swift/s3 you're prepared for that workload14:35
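A hedged sketch of how the redis/object-storage split described above might be wired up. The [incoming] and [storage] options are real gnocchi settings, but the override variable name follows the usual OSA `<service>_conf_overrides` convention as an assumption, and the addresses are placeholders:

```yaml
# user_variables.yml -- assumed override variable name; addresses are
# placeholders. Redis takes incoming measures, s3 holds aggregates,
# per the advice above.
gnocchi_conf_overrides:
  incoming:
    driver: redis
    redis_url: redis://172.29.236.100:6379
  storage:
    driver: s3
    s3_endpoint_url: http://s3.example.com
    s3_region_name: default
```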
admin1 i am using gnocchi with redis + ceph 14:37
admin1which reminds me, i also need to test that patch 14:37
admin1will do next week 14:37
admin1the gnocchi + redis patch 14:37
noonedeadpunkoh yes14:37
noonedeadpunkwould be awesome14:38
noonedeadpunkhttps://review.opendev.org/c/openstack/openstack-ansible-os_gnocchi/+/82290514:38
opendevreviewMerged openstack/openstack-ansible master: Fix infra upgrade  https://review.opendev.org/c/openstack/openstack-ansible/+/82642414:38
opendevreviewMerged openstack/openstack-ansible stable/wallaby: Fix infra scenario repo server cluster  https://review.opendev.org/c/openstack/openstack-ansible/+/82683214:39
opendevreviewMerged openstack/openstack-ansible stable/xena: Fix infra scenario repo server cluster  https://review.opendev.org/c/openstack/openstack-ansible/+/82683114:39
spatelnoonedeadpunk thanks for input14:47
spateli am not worried about billing etc.. because i run my own private cloud. I want some kind of monitoring and to do capacity planning.. 14:48
spatelcurrently i have my own in-house influx/grafana to see what is going on for capacity etc..14:48
noonedeadpunkwell the gnocchi grafana plugin is kind of a mess14:49
spatelsomething like what i have currently - https://ibb.co/7bNDsLB14:50
noonedeadpunkwell, we had graphs per vm from gnocchi to see who's consuming cpu/disk/memory/network14:53
spatelhow about compute capacity? 14:53
spatelcould you share graph and please hide your PI stuff14:54
noonedeadpunkUm, well, for compute it's not that great. Or well, it's not really designed for compute monitoring. Worth checking the metrics it's capable of gathering https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html14:55
noonedeadpunkah, well, unless you gather them with snmp)14:56
noonedeadpunkbut we didn't do that for computes14:56
noonedeadpunkUnfortunately I haven't had access to this environment for a couple of years now :)14:56
spatelno worry! 14:57
spatelin that case i don't want gnocchi.. :) i think i am happy with my inhouse monitoring 14:57
spatelby the way i have upgraded my W -> X 14:59
spatelnow i am waiting for 24.1.0 to push out in production 14:59
noonedeadpunkah, indeed, I should push releases I believe15:15
spatelnoonedeadpunk we are planning to build a large store for small files like audio files.. what kind of storage should i pick, ceph or glusterfs or swift? i am looking for s3 style 15:22
noonedeadpunkceph or swift I believe15:23
spatelcurrently we are using AWS s3 but soon our data is going to reach 50TB per day and i don't want to spend money on that in AWS15:23
noonedeadpunkboth support s3 compatibility15:23
spatelwhich one would be good to manage and easy operation :)15:23
noonedeadpunkceph is probably closer to aws feature-wise15:23
spatelceph is a little complex but good overall15:23
spatelI need to get into more details to see what storage i should be using 15:25
noonedeadpunkwell it depends likely :) for me ceph is the more common solution. But I'm just not really an expert in swift, as I never used it in production or at scale15:25
spatelagreed, also very few folks are using swift nowadays15:25
noonedeadpunkand ceph is not really that complicated. But yes, it needs some understanding of how it works15:25
spateli have one more question related to our glance storage.. 15:26
noonedeadpunkthey both have compatibility matrices https://docs.openstack.org/swift/latest/s3_compat.html https://docs.ceph.com/en/latest/radosgw/s3/#features-support15:26
noonedeadpunkand ceph also supports the Swift API just in case ;)15:26
spatelcan we add glusterfs for glance in a 3 node deployment when you don't need big storage for images... 15:26
spatel+1 i will check feature list and compare 15:27
spatelcurrently in my deployment i don't have shared storage for glance (we have only handful images, like gold image etc)15:28
noonedeadpunkum, for gluster - isn't it just a matter of mounting a filesystem?15:28
spateli don't want to create dedicated shared storage for it, so i'm thinking why don't we have an add-on service to install a small glusterfs inside glance for shared storage15:28
spatelYes..15:29
noonedeadpunklike if you mount it to /var/lib/glance/images/ that would be it?15:29
spatelcurrently i am using rsync kind of tool to sync my glance images 15:29
spatelYes mounting  /var/lib/glance/images/15:29
spateli found glusterfs is very simple to deploy, so what if we had an add-on in OSA to say YES i need glusterfs for glance 15:30
noonedeadpunkwell, I'd suggest then having ceph - it would solve all these issues :D15:30
spatelfor ceph i need mon nodes and OSD etc.. 15:30
spatelit's too complicated to store 4 images :)15:30
noonedeadpunkfor gluster you also need drives?15:31
spatelno, you can dd an image file 15:31
spateland mount it using /dev/loop 15:31
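The dd-plus-loop-device trick being described could look roughly like this (path and size are made up for illustration; the loop/mkfs/mount steps need root, so they are shown as comments only):

```shell
# Create a 10G sparse backing file for a gluster brick
# (path and size are illustrative assumptions).
dd if=/dev/zero of=gluster-brick.img bs=1M count=0 seek=10240

# The remaining steps require root; shown for illustration only:
# losetup /dev/loop0 gluster-brick.img
# mkfs.xfs /dev/loop0
# mkdir -p /srv/gluster/brick1 && mount /dev/loop0 /srv/gluster/brick1
```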
noonedeadpunkand what stops you doing the same for a ceph osd? :)15:31
*** dviroel is now known as dviroel|lunch15:31
spatelyou don't need a mon node 15:31
noonedeadpunkas it eventually just needs an lvm pv drive15:31
noonedeadpunkhave you ever seen how we deploy ceph in aio? :p15:32
spateli need a simple solution. gluster you can install with a single RPM and it replicates files between all 3 nodes 15:32
spatel:D15:32
spateli know it also uses the dd image stuff15:32
noonedeadpunkit's 3 loop drives, a mon inside an lxc container, and an osd on the controller itself15:33
noonedeadpunkbut yeah15:33
spatelin one of my old environments i installed glusterfs on the infra nodes to replicate my glance images 15:33
noonedeadpunkso for gluster, I think that it doesn't require any intervention into glance role for sure15:33
noonedeadpunkit could be something for ops repo, yes15:33
noonedeadpunkbut tbh, with this approach it's easier to just have one nfs server somewhere15:34
noonedeadpunkeven more simple15:34
spatelalso a good thing is glusterfs keeps the actual files on the volume, so it's easy to read and debug, while ceph chunks them out..15:34
spatelone NFS server is a single point of failure :(15:35
noonedeadpunkthere's `rbd map` which gives you block drive you can mount or do whatever you want with it15:36
spatelI was just thinking to have that option but no big deal.. anyway i am deploying it my way to fulfill the need.. 15:36
spateli am deploying small openstack clouds where i don't need lots of images, so i was looking for something simple and easy which i don't need to worry about during the upgrade process (ceph is sometimes messy when it comes to upgrades)15:37
noonedeadpunkso eventually, I think the solution would be just to use `glance_nfs_client` to define the glusterfs mount. It's just a variable that is passed to systemd_mount.15:38
spatelwhile gluster is a single RPM binary and all done :)15:38
noonedeadpunkwe might want to rename it though, as currently it's confusing15:38
noonedeadpunkbtw, you can also use s3fs ;)15:38
noonedeadpunkand store your 3 images on AWS15:38
noonedeadpunkthat would be slow though15:39
noonedeadpunkoh, eventually, glance supports s3 as a storage as well15:39
spatelhmm does it support in OSA ?15:39
noonedeadpunkI bet we merged a fix recently for that15:40
spateli am asking about the s3fs application?15:40
noonedeadpunkhttps://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/82287015:40
noonedeadpunks3fs is supported by systemd_mount, yes15:40
noonedeadpunkso you can leverage `glance_nfs_client` to get it mounted15:41
noonedeadpunkBut it's weird considering you can just use s3 directly15:41
spatelhmm looks like i need to read up first :) if this is easy then i like it for small deployment requirements15:41
spatelworth testing out in my lab15:41
noonedeadpunkso we define it this way for nfs https://opendev.org/openstack/openstack-ansible/src/branch/master/tests/roles/bootstrap-host/templates/user_variables_nfs.yml.j2#L3-L915:42
noonedeadpunkbut for s3 here's sample https://opendev.org/openstack/ansible-role-systemd_mount/src/branch/master/defaults/main.yml#L63-L6715:42
noonedeadpunkwell, just respect the variable name mapping https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/tasks/glance_post_install.yml#L172-L17615:43
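Putting the linked samples together, a glusterfs mount through `glance_nfs_client` might look like this. The keys follow the nfs example linked above; the server address and volume name are made-up placeholders, and (per the discussion below) the s3fs case would additionally need credential options that the current task does not pass through:

```yaml
# user_variables.yml -- mounts a glusterfs volume at the glance images
# path via systemd_mount. Address and volume name are assumptions.
glance_nfs_client:
  - server: "172.29.244.15"            # assumed gluster server address
    remote_path: "/glance-volume"      # assumed gluster volume name
    local_path: "/var/lib/glance/images"
    type: "glusterfs"
    options: "_netdev,noatime"
```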
spatellet me understand, this will keep my image in AWS?15:44
spateltrying to understand what will be my backend storage ?15:44
noonedeadpunkah, yes, it won't work for s3fs in this shape :(15:44
noonedeadpunkyes, that would keep image in aws15:44
noonedeadpunkor any s3 storage you pick15:44
noonedeadpunkfor it to work https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/tasks/glance_post_install.yml#L172-L176 needs to be patched15:45
spateli want to keep it local :(15:45
noonedeadpunkas it would need a credentials option and another way of defining `what`15:45
spatelno more external cloud dependency 15:45
noonedeadpunkbut well, you want local s3?15:46
spatellocal s3 or anything which replicates to make my images available on all glance.. 15:46
noonedeadpunkso why not use the local s3 that you want to have anyway for images as well?15:47
spatelNFS is bad bc of single point of failure 15:47
spatelceph is good but has too many components and upgrades are a mess15:47
noonedeadpunkwell, nfs can be on gluster, but that's weird :)15:47
spatelrsync is good but hacky 15:47
spatellsync also good solution but not going to work in this case15:48
noonedeadpunkI can't say ceph upgrades are a mess. It was like that 5-7 years ago, on Hammer (0.8) 15:48
jrosserimages are cached on the compute node too?15:48
noonedeadpunkNow it's quite straightforward imo - matter of package upgrade and service restart15:48
jrossermaybe just simplify by saying that partially mitigates nfs server SPOF15:48
noonedeadpunkyes, for sure they would be cached there...15:48
spateli did try last time and it was an epic disaster for me (maybe my knowledge is limited) 15:49
jrosserso really if you are clever with the caching time, you can live with nfs being broken for a while15:49
noonedeadpunkspatel: do you have cinder volumes anywhere?15:49
spateljrosser agreed if image is cached and NFS is down then i have time to back it up.. 15:49
spatelno cinder 15:50
noonedeadpunkok :(15:50
jrosserthis is the same problem as the repo server15:50
spatelMy use case is just CPU and memory.. we do audio processing and don't keep a single bit or byte on disk (so storage has zero requirements in my cloud)15:50
noonedeadpunkso well, I think what we need to do anyway is refactor the glance_nfs_client variable to allow the same set of options that are provided by systemd_mount15:51
noonedeadpunkwith that - you can set any mount point - be it gluster, s3, nfs - anything the role can handle15:52
spateli need small storage for my glance to keep 2 or 3 images to deploy distros, and we mostly don't deal with snapshots etc.. so glance is just for images 15:52
noonedeadpunkfor gluster deployment - dunno, I'd say it's stuff for ops. Like create containers for it or smth, configure, etc15:53
spatelcurrently i am installing glusterfs on infra node and mounting filesystem in glance 15:53
noonedeadpunklikely a patch to systemd_mount as well if it doesn't support gluster now or requires some extra packages for that15:54
spateli was thinking why don't i just install glusterfs-server inside glance 15:54
spatelwe have 2TB disk on infra so more than enough for keeping 4 distro images :)15:54
spatelLet me try systemd_mount and see, otherwise i will keep my hacks running.. no big deal.. (just trying to avoid ceph, but if ceph is the best solution then i may adopt it)15:55
noonedeadpunkum, when you have an image (an image in raw format) and nova ephemeral drives on ceph, vm creation is almost instant, as it doesn't require copying data from the glance container to the compute, converting image formats, etc.15:57
noonedeadpunkJust a block device instantly created out of the image as a snapshot and ready to be used15:57
noonedeadpunkwell yes, when you have like 4 images it might be overkill as they just could be cached15:58
noonedeadpunkand you still want to use local storage for computes15:58
spatel+1 if using ceph for everything, but using it just for glance would be overkill for a small environment 15:59
admin1spatel, ceph with EC .. 16:06
spatelEC?16:06
*** dviroel|lunch is now known as dviroel16:23
noonedeadpunkerasure coded pool16:40
noonedeadpunkbut I hope it's a bad joke :D16:40
opendevreviewMerged openstack/openstack-ansible-os_tempest stable/victoria: Remove tempestconf centos-8 job  https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/82669716:54
spatellol17:23
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: [docs] Use extlink for deploy guide references  https://review.opendev.org/c/openstack/openstack-ansible/+/82692317:24
noonedeadpunkor well, I'm too much of a ceph noob to understand how to make it somehow reliable...17:28
spatelCeph is really good for very large environments but it's overhead for a 3 node deployment :(17:31
spatelif you have a 30 node cluster then it does a very good job with performance and healing, but with 3 nodes it won't justify that performance17:32
spatelThis patch should get merged ASAP correct? - https://review.opendev.org/c/openstack/openstack-ansible/+/82678217:33
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Used forked etcd role  https://review.opendev.org/c/openstack/openstack-ansible/+/82688118:01
opendevreviewMerged openstack/openstack-ansible master: Clarify major upgrade documentation for updating internal CA  https://review.opendev.org/c/openstack/openstack-ansible/+/82678218:15
opendevreviewMerged openstack/openstack-ansible master: Clarify the difference between generating and regenerating certificates  https://review.opendev.org/c/openstack/openstack-ansible/+/82678618:17
opendevreviewJonathan Rosser proposed openstack/openstack-ansible stable/xena: Clarify major upgrade documentation for updating internal CA  https://review.opendev.org/c/openstack/openstack-ansible/+/82684418:17
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server master: Use cloudsmith repo for rabbit and erlang  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/82644418:19
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server master: Allow different install methods for rabbit/erlang  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/82644518:19
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server master: Update used RabbitMQ and Erlang  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/82644618:20
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_zun master: Update Zun api-paste  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/82284718:21
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: [docs] Use extlink for deploy guide references  https://review.opendev.org/c/openstack/openstack-ansible/+/82692318:22
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible master: Used forked etcd role  https://review.opendev.org/c/openstack/openstack-ansible/+/82688118:22
noonedeadpunkhm, gerrit feels broken after the upgrade with regards to the missing marks for whether a patch is relevant, and the rebase menu when a patch is already on top of another...18:23
jrosseri wonder if that's no longer a default-to-on feature18:24
noonedeadpunkwell, knowing gerrit's flexibility, I kind of doubt google moved that to some config...18:26
noonedeadpunkbut yeah. Worth bugging infra likely...18:26
jrosserone for #opendev i guess18:26
noonedeadpunkyep... but on Monday :)18:27
admin124.0.0 .. glance .. is using nfs .. mount is fine .. root and glance can touch/create files ... when i try to upload file, the api error is:  Error in store configuration. Adding images to store is disabled.: glance_store.exceptions.StoreAddDisabled: Configuration for store failed. Adding images to this store is disabl18:38
admin1the config is like this: https://pastebin.com/raw/WtVve8m918:40
admin1default_backend = file  and path is correct 18:43
jrosserthe indentation there looks wrong18:45
jrosseroh wait18:45
jrosserno its ok18:45
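For context, the "Configuration for store failed. Adding images to this store is disabled" error means glance_store could not initialise the store at startup; a minimal working file-backend config looks roughly like this (the paths are illustrative, not taken from the pastebin):

```ini
# glance-api.conf -- minimal multi-store file backend (illustrative paths)
[DEFAULT]
enabled_backends = file:file

[glance_store]
default_backend = file

[file]
filesystem_store_datadir = /var/lib/glance/images/
```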
admin1i am going to nuke the containers and re-deploy and test again 18:48
opendevreviewMerged openstack/openstack-ansible-haproxy_server master: Adjust default configuration to support TLS v1.3  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/82394418:56
admin1worked on the 3rd try .. not sure what the issue was .. 19:36
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-os_zun master: Update Zun api-paste  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/82284720:33
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-os_zun master: Refactor use of include_vars  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/82431320:36
opendevreviewJonathan Rosser proposed openstack/openstack-ansible stable/wallaby: Remove left-over centos-8 job from project template  https://review.opendev.org/c/openstack/openstack-ansible/+/82693720:43
*** dviroel is now known as dviroel|brb20:46
opendevreviewMerged openstack/openstack-ansible stable/xena: Clarify major upgrade documentation for updating internal CA  https://review.opendev.org/c/openstack/openstack-ansible/+/82684420:55
opendevreviewMerged openstack/openstack-ansible-os_glance master: Use common service setup tasks from a collection rather than in-role  https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/82439721:21
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-os_zun master: Use common service setup tasks from a collection rather than in-role  https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/82437221:44
opendevreviewMerged openstack/openstack-ansible-os_octavia master: Use common service setup tasks from a collection rather than in-role  https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/82437922:24

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!