Thursday, 2021-02-11

jrossermaybe important to understand that all of these services are very agnostic to the actual choice of backend store00:00
eatthoselemonsah so the cinder-volume stores the actual packages/config/etc for the vm's?00:00
jrossercinder is block devices00:00
jrosserlike a hard disk, 4k blocks00:00
eatthoselemonsglance just provides the boot images so the speed of glance only matters for bootup?00:00
jrosserusually yes00:00
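To make the split concrete, a minimal sketch using the standard openstack CLI (names, sizes and flavors are illustrative): glance supplies the image a server boots from, while cinder hands out raw block devices that attach like spare disks.

    # boot a server from a glance image
    openstack server create --image cirros-0.5.2 --flavor m1.small --network private vm1

    # create a 10 GB cinder volume and attach it; the guest sees a bare
    # block device (e.g. /dev/vdb) that it can partition and format itself
    openstack volume create --size 10 data-vol
    openstack server add volume vm1 data-vol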
jrosserit's getting late here, i'm done for today00:01
*** jpvlsmv has quit IRC00:01
jrosseryou can explore this all in the AIO :)00:01
*** jpvlsmv has joined #openstack-ansible00:02
eatthoselemonsOkay that is making sense00:03
eatthoselemonsI will mess with the aio00:03
eatthoselemonsthanks for all your help! Hope you have a great evening!00:03
*** tosky has quit IRC00:17
*** eatthoselemons has left #openstack-ansible00:24
*** eatthoselemons has quit IRC00:25
openstackgerritMerged openstack/ansible-role-python_venv_build stable/victoria: Import wheels build only when necessary  https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/77480400:28
*** maharg101 has joined #openstack-ansible00:34
*** maharg101 has quit IRC00:38
*** macz_ has quit IRC00:42
*** macz_ has joined #openstack-ansible01:15
*** fanfi has quit IRC01:17
*** macz_ has quit IRC01:20
*** ianychoi has joined #openstack-ansible02:09
*** macz_ has joined #openstack-ansible02:29
*** macz_ has quit IRC02:33
*** maharg101 has joined #openstack-ansible02:35
*** maharg101 has quit IRC02:40
*** spatel has joined #openstack-ansible03:08
*** gyee has quit IRC03:33
*** LowKey has joined #openstack-ansible04:18
*** maharg101 has joined #openstack-ansible04:36
*** maharg101 has quit IRC04:40
*** evrardjp has quit IRC05:33
*** evrardjp has joined #openstack-ansible05:33
*** spatel has quit IRC06:43
*** CeeMac has joined #openstack-ansible07:07
*** miloa has joined #openstack-ansible07:11
*** kleini has joined #openstack-ansible07:12
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Do not use tempestconf for ironic role tests  https://review.opendev.org/c/openstack/openstack-ansible/+/77290707:44
*** rpittau|afk is now known as rpittau07:53
noonedeadpunkmorning08:05
noonedeadpunkwow, jrosser, are you sleeping at all?:)08:06
jrossernot enough08:06
noonedeadpunkyou shouldn't really be burning the midnight oil08:10
noonedeadpunk(I guess)08:11
jrossertrue, weird lockdown times i think08:12
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible master: Add hosts integrated tests  https://review.opendev.org/c/openstack/openstack-ansible/+/77468508:12
noonedeadpunkYeah, we feel it less here I guess, since we don't have real lockdowns here. Well, on paper we do, but in reality nobody enforces it, so everybody is kind of free to do whatever they want...08:14
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_murano master: Add global override for service bind address  https://review.opendev.org/c/openstack/openstack-ansible-os_murano/+/77507708:15
*** andrewbonney has joined #openstack-ansible08:15
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_murano master: Use the utility host for db setup tasks  https://review.opendev.org/c/openstack/openstack-ansible-os_murano/+/74723608:16
noonedeadpunkI'm reading through the ML regarding CI and devstack parallel installation and wondering if we could parallelize setup-openstack (except keystone) as well... The tricky thing is resource creation...08:17
noonedeadpunkwhich should be done only after all the setup, I guess08:17
noonedeadpunk(in case it's run in parallel). but with our architecture it seems hardly achievable without nasty hacks... or just moving all resource creation out to a separate step08:18
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_murano master: Add global override for service bind address  https://review.opendev.org/c/openstack/openstack-ansible-os_murano/+/77507708:18
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_murano master: Use the utility host for db setup tasks  https://review.opendev.org/c/openstack/openstack-ansible-os_murano/+/74723608:19
noonedeadpunkmurano has been broken on tempest for some time...08:19
noonedeadpunkit's just timing out and I don't actually see why...08:19
noonedeadpunkproject is pretty much deserted overall08:20
jrosserhmmm, seems we've fallen a long way behind on merging stuff for the role as a result08:21
noonedeadpunkyeah, I even stopped pushing patches for it until I figure out what's wrong with tempest...08:21
jrosseri saw jobs break with not being able to bind to 0.0.0.0 due to our metal setup changes08:21
noonedeadpunkI guess we would need to squash these changes anyway?08:22
*** maharg101 has joined #openstack-ansible08:23
jrossermaybe, for metal galera[0] == utility host anyway so it might be ok08:23
jrosserthough upgrade jobs are kind of pointless with it in this state08:23
noonedeadpunkbtw https://review.opendev.org/c/openstack/openstack-ansible-os_murano/+/747236/08:23
noonedeadpunkah, that's what you rebased08:24
noonedeadpunklol08:24
noonedeadpunkok08:24
noonedeadpunkyeah upgrade totally useless now08:25
*** jbadiapa has joined #openstack-ansible08:26
*** ianychoi has quit IRC08:33
*** tosky has joined #openstack-ansible08:36
*** ianychoi has joined #openstack-ansible08:48
jrosseryes about parallelising stuff08:54
jrosserit's a real shame there is no natural construct for that08:54
jrosserin ansible08:54
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Fix cert verification logic for cinder api  https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/77507908:56
noonedeadpunkwell, for setup-hosts we can probably use the free strategy. that won't help us in CI though08:57
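For reference, the free strategy mentioned above is a stock Ansible strategy plugin that lets each host run ahead through its tasks instead of waiting for every host to finish each task in lock-step; a minimal sketch (play and task names are illustrative):

    - name: Host setup without lock-step task execution
      hosts: all
      strategy: free
      tasks:
        - name: Each host proceeds as fast as it can
          ansible.builtin.ping: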
jrosseralso from back in time https://review.opendev.org/c/openstack/openstack-ansible/+/49774209:03
*** fanfi has joined #openstack-ansible09:16
*** mindthecap has joined #openstack-ansible09:28
*** miloa has quit IRC09:32
*** miloa has joined #openstack-ansible09:32
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible master: [doc] Add ceph_mons note  https://review.opendev.org/c/openstack/openstack-ansible/+/77508509:39
openstackgerritAndrew Bonney proposed openstack/openstack-ansible-os_horizon master: Fix race condition in compression of static files  https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/77508609:40
*** maciejel has joined #openstack-ansible09:44
*** fanfi has quit IRC09:47
noonedeadpunkdo you think https://bugs.launchpad.net/openstack-ansible/+bug/1732481 is still relevant? I guess qemu nowadays does enable apparmor/selinux by default from system packages?09:52
openstackLaunchpad bug 1732481 in openstack-ansible "qemu config should set security driver to apparmor on ubuntu" [Medium,In progress]09:52
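For context, the setting the bug asks for is a one-liner in libvirt's qemu configuration; a minimal sketch (whether current Ubuntu packages already ship this default is exactly the question being raised):

    # /etc/libvirt/qemu.conf
    # pin the sVirt confinement driver rather than leaving it unset
    security_driver = "apparmor"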
admin0good morning all .. i need to upgrade one platform from ubuntu 16 (rocky) to latest one09:54
admin0i need a few links on getting started .. i know we had some etherpad on it09:54
noonedeadpunkebbex made a pretty good doc out of them https://docs.openstack.org/openstack-ansible/rocky/admin/upgrades/distribution-upgrades.html09:57
admin0noonedeadpunk, thank you10:03
admin0i will work on this today10:03
*** jbadiapa has quit IRC10:04
*** macz_ has joined #openstack-ansible10:12
*** macz_ has quit IRC10:17
*** gokhani has joined #openstack-ansible10:28
gokhaniHi folks, I can mount an nfs share from the host but not from a container. I am getting an access denied error on the container side. Do I need a new config on the lxc side?10:31
admin0it depends on the acl/rules from the nfs server10:39
admin0check via tcpdump what ips it's receiving and whether they're in the allow range10:39
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Make requirements repo available during distro CI builds  https://review.opendev.org/c/openstack/openstack-ansible/+/77509510:40
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Update pip/setuptools/wheel to latest version  https://review.opendev.org/c/openstack/openstack-ansible/+/77028410:41
*** jbadiapa has joined #openstack-ansible10:49
gokhaniadmin0 I am using netapp and they are in the same ip range; the weird thing is I can mount from the host10:50
admin0are host and containers in the same L2 ?10:50
admin0if not,is there NAT happening on the gateway ?10:51
gokhaniyes, I created br-nfs and they are connected with this bridge.10:52
admin0is it possible to do a tcpdump and see the outgoing ip and mac from the host and also from the bridge (you can run a single tcpdump to netapp as host) to see this10:53
admin0and some logs in the netapp side10:53
*** ioni has quit IRC10:58
kleininoonedeadpunk: does the above linked distribution upgrade also apply for bionic -> focal? when would be the best time frame to do this OS upgrade? with U or V or maybe later OSA release?11:00
gokhaniadmin0 , I am listening with 'tcpdump -s 111 port nfs -i br-nfs' and running '/bin/mount 10.1.100.21:/ussuri_glance_nfs /tmp/destek -t nfs -o clientaddr=10.1.100.252,_netdev' from the glance container. I can't see any traffic :(11:03
admin0ok .. so how tcpdump works is that for incoming traffic it captures before the firewall and for outgoing traffic after the firewall11:03
admin0means something is blocking that .. firewall, nat rules, etc11:03
admin0you need to find and fix :)11:03
admin0you can actually only do "mount 10.1.100.21:/ussuri  /tmp/destek"  .. and it should still mount fine11:05
noonedeadpunkkleini: I think we need to revise and maybe adjust it11:07
*** ioni has joined #openstack-ansible11:07
noonedeadpunkI haven't done bionic->focal myself yet, while already running victoria for some regions11:07
noonedeadpunkso I have no answer for this yet. But focal has been available since Ussuri, so I think it's up to you to decide when to upgrade11:08
noonedeadpunkin the meanwhile I'm not sure we will have bionic support for W11:08
kleinithanks, so I need to plan either U -> V and then focal or U, focal, V11:11
gokhaniadmin0, I am again getting an access denied error. I rebooted my server.11:14
admin0until you find and fix what's causing the block, you will have that issue :)11:14
admin0what you can do is do iptables -Z ( to reset the counters)11:14
admin0then run the command again from container and do iptables -L -nvx -t nat ( and one without -t nat)11:15
admin0to check the counters11:15
admin0usually if you do the mount 10 times, the counter in 1 or multiple rules will go up11:15
admin0and that might tell you exactly what line in the iptables is blocking this11:15
admin0if it's not iptables, then it's the routing rules11:15
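The debugging loop admin0 describes, spelled out as commands (the mount target is the one from gokhani's earlier paste):

    # zero the per-rule packet counters
    iptables -Z
    # retry the mount from the container
    mount 10.1.100.21:/ussuri_glance_nfs /tmp/destek -t nfs
    # look for rules whose counters increased (with and without the nat table)
    iptables -L -nvx
    iptables -L -nvx -t nat
    # if no counter moves, inspect the routing rules instead
    ip route show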
*** MickyMan77 has joined #openstack-ansible11:15
MickyMan77Hi, is version 22.0.1 of osa the stable version of victoria ?11:18
noonedeadpunkkleini: I guess you can even setup new hosts on focal in case you're on V. But for this you will need to upgrade at least one controller as well11:23
noonedeadpunkor disable wheels build for the new hosts iirc11:23
noonedeadpunkMickyMan77: yeah, kind of)11:24
noonedeadpunkstill has some bugs (as any osa version) but generally it's way better than 22.0.011:24
noonedeadpunkit's safe to use for sure11:24
gokhaniadmin0, iptables doesn't block it. this is ip route show output > http://paste.openstack.org/show/802560/11:27
*** macz_ has joined #openstack-ansible11:34
gokhanido you recommend mounting nfs from the host at /var/lib/lxc/infra3_glance_container-0c045fb1/rootfs/var/lib/glance/images/ ? I can't mount from the glance container :(11:37
admin0no  you have to fix the underlying issue and not take shortcuts or cut corners :)11:39
*** macz_ has quit IRC11:39
admin0you can ping the nfs server ? is that captured in the tcpdump ?11:39
gokhaniadmin0, yes, ping is captured in tcpdump but I cannot see any nfs traffic :(11:47
gokhaniadmin0 this is netstat -tupln output > http://paste.openstack.org/11:50
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Remove version cap on PrettyTable  https://review.opendev.org/c/openstack/openstack-ansible/+/77512611:53
kleininoonedeadpunk: disabling the wheels build would mean that the repo server is not used for that host and everything is built locally? Is there documentation somewhere in OSA about wheels and the repo server and so on? I don't really understand that construct yet.11:53
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Make requirements repo available during distro CI builds  https://review.opendev.org/c/openstack/openstack-ansible/+/77509511:54
openstackgerritJonathan Rosser proposed openstack/openstack-ansible master: Update pip/setuptools/wheel to latest version  https://review.opendev.org/c/openstack/openstack-ansible/+/77028411:54
*** macz_ has joined #openstack-ansible11:55
jrosserkleini: check out the readme on this repo https://github.com/openstack/ansible-role-python_venv_build11:55
noonedeadpunkkleini: yep, that would mean exactly that. I don't think we have any good explanation, unfortunately. In case you want to have wheels, they must be built on the same OS, so there should be at least a repo server running focal for wheels to be built for focal11:56
noonedeadpunkthere might be weird things once you fully upgrade to focal because of the lsyncd though, but it's a completely different story11:56
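A minimal sketch of the "disable wheels" option mentioned above, assuming the venv_wheel_build_enable toggle from the python_venv_build role jrosser linked:

    # user_variables.yml
    # skip the repo-server wheel build and build python packages
    # from source on each target host instead
    venv_wheel_build_enable: false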
*** macz_ has quit IRC12:00
kleiniI think mixing the OS upgrade to focal with the upgrade to V is not a good idea. Too much complexity for me. So, I will stay on U and try to upgrade nodes to focal either by having one controller on focal with a repo server or by disabling the use of the repo server12:03
kleiniI think the latter looks more interesting for getting used to focal without breaking any controller nodes. once some computes run without problems on focal and U, I can move on to upgrading the controller nodes.12:04
jrosseri've been thinking that adding an extra very minimal node to the environment might be useful for upgrades12:08
jrosserit would be the new OS and for the purpose of the upgrade you override the venv build host to be that one12:08
gokhaniadmin0 , I can only capture nfs v3 with tcpdump. http://paste.openstack.org/show/802565/12:09
jrosserthere's probably detail i've not thought about but it might make things a little less interdependent when upgrading the controllers12:09
kleinihmm, good idea. I can create a focal VM and connect it via VLAN to the mgmt network for this upgrade scenario. how do I override the venv build host?12:12
jrosserhttps://github.com/openstack/ansible-role-python_venv_build/blob/master/defaults/main.yml#L115-L12112:15
jrosseri guess it would need to have the repo_server role run against it and be the backend for the loadbalancer port 818112:16
kleiniWill check that out in my staging environment12:17
jrosseri'm kind of hand-waving this a bit so yes good idea :)12:17
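A hand-wavy sketch (to borrow jrosser's phrase) of the override from the linked defaults, assuming venv_build_targets keeps its nested distribution/release/architecture shape; "focal-repo1" is a hypothetical build host and the exact keys should be checked against the role:

    # user_variables.yml
    venv_build_targets:
      "{{ ansible_facts['distribution'] }}":
        "{{ ansible_facts['distribution_release'] }}":
          "{{ ansible_facts['architecture'] }}": focal-repo1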
openstackgerritDmitriy Rabotyagov proposed openstack/openstack-ansible-os_barbican master: Fix crypto_plugin defenition  https://review.opendev.org/c/openstack/openstack-ansible-os_barbican/+/76820112:49
*** mgagne has quit IRC12:52
*** mgagne has joined #openstack-ansible12:53
*** luksky has joined #openstack-ansible12:56
mgariepymorning everyone13:25
*** waxfire has quit IRC13:42
gokhaniHello again folks, I tried mounting nfs from containers and I am getting permission denied errors > http://paste.openstack.org/show/802568/ . I used both netapp and a manually created nfs server as the nfs server; both gave the same error. I suspect lxc3. Do you have any ideas? This environment is Ussuri with ubuntu 18.04. I deployed it from the OSA13:50
gokhanistable/ussuri branch. jrosser do you have any idea about this problem?13:50
*** waxfire has joined #openstack-ansible13:52
*** spatel has joined #openstack-ansible13:53
*** lemko has quit IRC14:09
*** lemko has joined #openstack-ansible14:09
*** miloa has quit IRC14:09
gokhaniadmin0 , jrosser I found that my problem is with the apparmor lxc profile and I solved it by following these steps > http://paste.openstack.org/show/802573/. now I can mount from containers. I think we need to add these parameters to the lxc profile in the OSA lxc container create role.14:23
jrossergokhani: https://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/templates/lxc-openstack.apparmor.j214:26
*** pcaruana has quit IRC14:29
spatelceph question, I have configured the cinder-api/volume services to integrate with ceph, but do i need to tell nova about ceph? otherwise how will it mount volumes for vms?14:31
ionispatel:  you will have to re-run nova playbook and it will detect that you have ceph configured and it will configure everything for you14:43
ioniget ceph keys, configure nova.conf and so on14:43
ioniit will attach the block volume to your instance and inside the vm you will see a new device called /dev/vdb or /dev/sdb depending on how you configured the bus14:44
spatelI do have ceph running in an older cloud which has cinder and i can mount everything.. so all good there.. it's been a long time so i don't know what i did there14:45
spatelI don't think i have created any special config for nova related cinder14:45
spatelall i can see i have /etc/ceph/ceph.client.cinder.keyring file on all compute nodes14:47
ionithat's fine, i was thinking that only now you have configured cinder with ceph14:47
*** macz_ has joined #openstack-ansible14:47
ioniin this case you have to re-run os-nova-install to pick up the ceph stuff14:48
spatelyes.. look like14:48
gokhanijrosser, this file didn't work for me. It only worked after I added these variables to the /etc/apparmor.d/lxc/lxc-default-cgns file.14:50
jrossersorry i'm in meetings pretty much the rest of the afternoon14:50
jrosserit will need some debugging to find out why those settings are not applying14:50
jrosseror it's an LXC2/3 difference in config files14:50
*** macz_ has quit IRC14:52
gokhaniyes, maybe. I think this is an lxc3 issue. it seems those settings are not applied via the lxc-openstack.apparmor file14:53
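For reference, gokhani's workaround boils down to granting NFS mounts in the apparmor profile the containers actually load; a hedged sketch using standard apparmor mount-rule syntax (which profile file takes effect under lxc3 is exactly the open question above):

    # added inside the profile, e.g. /etc/apparmor.d/lxc/lxc-default-cgns
      mount fstype=nfs,
      mount fstype=nfs4,
      mount fstype=rpc_pipefs,

    # then reload apparmor so the profile change applies
    systemctl reload apparmor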
openstackgerritAndrew Bonney proposed openstack/openstack-ansible-os_horizon master: Fix race condition in compression of static files  https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/77508614:54
*** pcaruana has joined #openstack-ansible14:58
*** macz_ has joined #openstack-ansible15:09
*** macz_ has quit IRC15:13
admin0gokhani, from my glance containers, i can mount nfs fine .. i did not do anything special/apparmor15:26
*** waxfire has quit IRC15:28
*** macz_ has joined #openstack-ansible15:44
*** luksky has quit IRC15:48
*** macz_ has quit IRC15:48
-openstackstatus- NOTICE: Recent POST_FAILURE results from Zuul for builds started prior to 15:47 UTC were due to network connectivity issues reaching one of our log storage providers, and can be safely rechecked15:50
gokhaniadmin0, I can't mount it. I deployed this environment yesterday. My OS is ubuntu 18.04.5 LTS and kernel version is 5.4.0.65.15:59
admin0is it listed in glance ?15:59
admin0i mean glance will mount it automatically15:59
admin0needs further checks ..15:59
spatelioni that's it, it's working now16:02
ionispatel: nice16:02
spatelafter running the nova playbook it deployed the ceph cinder keyring and now my vm can access the volume16:02
gokhaniyes, normally glance will mount it automatically, but in my environment it gets an error when the glance playbook runs.16:03
admin0gokhani, just to check . do you have a infra/deployment host server ?16:04
admin0not a container16:04
admin0try setting up a quick nfs server there .. and then try to mount that one16:04
admin0just want to check if its specific to your netapp or nfs in general16:04
*** luksky has joined #openstack-ansible16:05
admin0apt install nfs-kernel-server ;     and put /srv/glance  172.29.244.0/22(rw,sync,no_subtree_check,no_root_squash)   in /etc/exports and you are done16:05
gokhaniadmin0, I tested it as you said and set up an nfs server. It again gave the same error, "Permission Denied".16:05
admin0then you have a different issue which I do not know .. maybe kernel, firewall, permissions16:06
admin0something in ubuntu16:06
gokhaniit is about lxc profile / apparmor settings and not specific to netapp.16:06
admin0why should it be .. for nfs, its just an IP ?16:07
admin0ip:/mountpoint16:07
admin0in one of my ubuntu 18.04 cluster where nfs is being used for glance, the kernel is  4.15.0-112-generic16:07
admin0though i think it makes no diff16:07
admin0gokhani, is it a new install ?16:08
admin0then why not go with ubuntu 20 and latest 22.0 version16:08
*** macz_ has joined #openstack-ansible16:09
gokhanifor lxc we need to set this variable https://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/templates/lxc-openstack.apparmor.j2#L18 for nfs to work in containers. But in my environment these settings are not applied.16:10
gokhaniyes it is new install16:10
gokhaniour test environment is ubuntu 18.04 and we didn't test ubuntu 20.0416:12
admin0what's preventing you from testing 20.04 :)16:25
*** waxfire has joined #openstack-ansible16:26
admin0i am mostly an ops guy and not a hardcore dev .. and i don't understand why you don't want to use a perfectly working 20.04, the latest ubuntu with all new packages .. but would rather stay on 18.04 and take on the headache of upgrading it later16:29
admin0especially when its greenfield16:29
gokhaniadmin0, yes you are right, but there is a lot to do on my side and I need time :( it is in my plans. Also ubuntu 18.04 is working perfectly and I haven't used 20.04 yet.16:35
*** gyee has joined #openstack-ansible16:37
admin0is this platform just for openstack ? or are you doing multiple things on it ?16:37
admin0i mean are you using the controller and compute for something else also ?16:37
admin0the whole reason why i use OSA is because i don't have to write, manage, test or even document anything .. its already done and tested and used by this community ( everyone clap for themselves) ..16:40
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-lxc_hosts stable/ussuri: Fix lxc_hosts_container_image_url condition  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/77521816:47
*** ioni has quit IRC16:50
gokhaniadmin0, I also appreciate the OSA guys. It is really awesome and I have used osa for 4 years.16:56
admin0so focal + osa is tested :)16:57
gokhaniwe are mostly using sahara,magnum and gpus on openstack. we only have problem with time.17:00
*** ioni has joined #openstack-ansible17:07
spateladmin0 I have a question: I created a cinder volume, mounted it on vm-1, created a filesystem, and copied some important files onto that volume. Now suddenly something happened and i deleted vm-1 (Now if i create vm-2, can i mount that volume back and retrieve my data?)17:08
admin0yes17:10
spatelHow?17:10
admin0unless that something happened in the middle of an io operation and some data it was writing was lost or corrupted17:10
admin0mount to vm217:10
admin0a cinder volume is like a usb disk17:10
spatelhow does VM-2 know whether it's a new disk or one that's already partitioned17:11
admin0you mount to 1 instance ..   read/write data .. unplug .. mount to something else .. do the same17:11
admin0unmount from vm-117:11
admin0then mount to vm-217:11
admin0lsblk17:11
admin0will show /dev/vdX .. mount  /dev/vdb /mnt/17:11
admin0and that is all you need to do17:11
spatelhmm let me create vm-2 and test quickly17:11
admin0again, treat it like a usb disk17:11
admin0when you format a usb disk, you don't have to reformat it every time you move it to a different system17:12
admin0if its pre-formatted and if the new OS knows that format, it will just mount it and you can read/write data17:12
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_neutron master: Combined patch to unblock CI  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/77523917:18
spateladmin0 +1 it works!17:38
*** maharg101 has quit IRC17:40
*** rpittau is now known as rpittau|afk17:43
*** alinabuzachis has joined #openstack-ansible18:23
*** alinabuzachis has quit IRC18:26
*** alinabuzachis has joined #openstack-ansible18:26
noonedeadpunkwhaat `cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1110 - No space left on device - Failed to move monitor 52075 to "/sys/fs/cgroup/cpuset//lxc.pivot"` https://a7bc13a3b1d3ff8939d4-b66311f00e65e72370f624798f3cdac4.ssl.cf5.rackcdn.com/775239/1/check/openstack-ansible-ovs-nsh-ubuntu-focal/6a3e119/logs/host/lxc/lxc-agents1.log.txt18:34
noonedeadpunkfocal on functional looks so weird...18:34
*** gokhani has quit IRC18:34
jrosseroh yes that18:35
*** luksky has quit IRC18:35
jrossernoonedeadpunk: if we could get this into shape it would all just go away https://review.opendev.org/c/openstack/openstack-ansible/+/53431818:36
jrossertrouble with the neutron role right now is that there are just too many simultaneous things need addressing18:37
noonedeadpunkwe will increase CI time from other side, but yeah18:38
jrosserwell it's about the big picture really not just job time18:46
jrosserbecasue we waste so many CI hours fighting with it like it is18:46
noonedeadpunkyeah I will try to look into this tomorrow. In the meanwhile I also think that we should move the ovn job from integrated repo testing to the neutron role, or really switch the default from lxb to ovn18:47
noonedeadpunkbut not sure about that and if we have enough experience with ovn atm18:48
noonedeadpunkwe are looking at ovn now as well, but it's just for some kind of perspective, not sure when exactly this will be done, considering I can't even get to trove, which I was supposed to deploy by Christmas...18:50
*** luksky has joined #openstack-ansible18:52
jrosseri was just looking at the OVN feature gap list today18:52
jrosserno BGP speaker it seems making ipv6 kind of tricky18:52
*** ioni has quit IRC18:53
jrosserbtw we have broken snapshots on victoria https://bugs.launchpad.net/nova/+bug/191540018:54
openstackLaunchpad bug 1915400 in OpenStack Compute (nova) "Snapshots fail with traceback from API" [Undecided,Incomplete]18:54
*** alinabuzachis has quit IRC19:01
*** alinabuzachis has joined #openstack-ansible19:02
*** alinabuzachis has quit IRC19:16
*** sshnaidm is now known as sshnaidm|afk19:19
openstackgerritMerged openstack/openstack-ansible-lxc_hosts stable/ussuri: Fix lxc_hosts_container_image_url condition  https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/77521819:20
*** spatel has quit IRC19:25
*** spatel has joined #openstack-ansible19:28
*** maharg101 has joined #openstack-ansible19:36
*** maharg101 has quit IRC19:41
*** andrewbonney has quit IRC19:42
*** jpvlsmv has quit IRC19:59
noonedeadpunkoops....20:58
* noonedeadpunk should finally set rally/refstack to run after upgrades21:07
noonedeadpunkbtw, I can't reproduce this bug I guess... http://paste.openstack.org/show/802585/21:13
noonedeadpunkmaybe it's horizon making some weird calls....21:13
noonedeadpunkoh, hm, but we don't use ssl for rabbit in the region where we have V21:16
noonedeadpunkand my normal sandbox is broken atm to check this out :(21:19
djhankbHey dudes, quick question for you all - my main Openstack deployment is running an old Dell PS4100 iSCSI Array, configured with the now deprecated cinder.volume.drivers.dell_emc.ps.PSSeriesISCSIDriver, which does not use MPIO, and TBH is pretty slow.  I don't have a lot of experience with LVM, although I know it's sort of baked-in.   What would be21:31
djhankbthe best approach to setting up some MPIO LUNs on that array that would be backed by the LVM driver? Would I need a dedicated machine for that? or could that run off of a controller node?21:31
* noonedeadpunk prefers ceph as storage backend21:33
noonedeadpunkeventually lvm might be good solution if you're looking for local storage only for your computes21:34
djhankbYeah, I would *love* to get into Ceph at some point in the future, I've been working on this POC with what I've got right now - and storage is sort of painful21:34
djhankbCeph and small scale don't really work if I understand correctly :-)21:35
noonedeadpunkWell, I guess in case you're migrating between drivers, it shouldn't matter much which? And ceph imo is a more universal solution in case you need shared storage and have more than a single compute :)21:36
noonedeadpunkwell, it works. the thing is that it's really recommended to have a quorum of ceph monitors21:37
noonedeadpunkbut still, we deploy it on the single VM for CI for instance21:37
djhankbMakes sense - I should probably try to get my feet wet at some point here with it21:38
noonedeadpunkand you can use sparse files or whatever as the osd backend. The question is performance, yes, and in case it's a single compute, you probably just don't need shared storage21:38
noonedeadpunkyou can try out with aio on some VM with 4 CPU and 10-12 GB of RAM.21:39
djhankbRight now, I have 2 nodes for Controller, and 2 for Compute21:39
djhankbI would have had more, but power is limited in the Lab room21:40
noonedeadpunkeventually... it's just `git clone https://opendev.org/openstack/openstack-ansible; cd openstack-ansible; ./scripts/gate-check-commit.sh aio_ceph`21:40
noonedeadpunkand it will deploy whole openstack with ceph on your VM21:40
noonedeadpunkoh, and you need 100gb of hard drive on the vm21:40
djhankbinteresting, I have not done an AIO build yet21:41
noonedeadpunkwell, full doc is here https://docs.openstack.org/openstack-ansible/latest/user/aio/quickstart.html but what I provided you is the way how our CI runs things21:41
djhankbI've got 2 machines at home I wanted to spin up Openstack on. If I set up one using an AIO build, am I able to add the other as a compute?21:42
noonedeadpunkyep21:42
noonedeadpunkyou will just need to configure network on it and add to openstack_user_config.yml and run playbboks agains21:43
djhankbCool. I may try that at the house...  but that brings up another point - I can make manual MPIO luns all day long on my PS4100 at the office openstack lab, If I wanted to set up a handful of LUNS for OSDs, would I be able to set up Ceph that way on my 2 controller nodes? Or would that not be recommended?21:44
noonedeadpunkBUT AIO is known to kind of break after reboot, because we have long-standing issues there with network configuration (systemd-networkd conflicts with smth I guess) and loop drives might be lost. So it's eventually more for POC and playing around21:44
noonedeadpunkwe never looked into it since it's anyway announced as for testing only21:44
noonedeadpunkbut all openstack-ansible configs, inventory, etc. are generated absolutely properly. So if you configure networking and storage manually, you can just use these configs to set up openstack as well21:45
noonedeadpunkwell the issue with 2 controllers is that you can catch split brain, which would be nasty21:46
djhankbFor sure. I've been working with OSA for about 2 years now in my free time. It's just so damn vast it's easy to get lost in the details21:46
noonedeadpunkyou can even have single controller but you know...21:46
noonedeadpunkit's impossible to have quorum with 2 nodes21:46
noonedeadpunkBUT21:46
djhankbYes, I noticed the split brain - I ran into that with Galera + RabbitMQ21:46
noonedeadpunk^21:47
djhankbI was thinking of adding a VM on a compute just for Galera + RabbitMQ to give a quorum.21:47
noonedeadpunkyou can create 2 raabitmq and galera containers on the single controller node21:47
djhankbAhh that's a good point too.. Does OSA Support Garbd?21:47
djhankbI have run Garbd before in other Galera deployments to give a quorum21:48
noonedeadpunknah21:48
noonedeadpunkyeah, I was running it too, but I guess I used some nasty override for that21:48
djhankbI would do it here too but I like how everything works together so I don't want to go adding some extra manual thing to my deployment21:48
noonedeadpunkyou can use affinity like this to create more than a single container https://opendev.org/openstack/openstack-ansible/src/branch/master/etc/openstack_deploy/openstack_user_config.yml.aio.j2#L139-L14321:49
djhankbPerfect, I was just going to ask that21:49
noonedeadpunkhere's some doc regarding this https://docs.openstack.org/openstack-ansible/latest/reference/inventory/configure-inventory.html#deploying-0-or-more-than-one-of-component-type-per-host21:51
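A minimal sketch of the affinity override from the linked example, applied to one controller entry (group name, host name, counts and IP are illustrative):

    shared-infra_hosts:
      infra1:
        affinity:
          galera_container: 2
          rabbit_mq_container: 2
        ip: 172.29.236.11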
noonedeadpunksorry, need to head out21:51
djhankbI see - add the affinity under the host section.21:52
djhankbNo problem - thanks for your help as always!21:52
*** klamath_atx has joined #openstack-ansible21:53
*** spatel has quit IRC22:03
*** jbadiapa has quit IRC22:03
*** jpvlsmv has joined #openstack-ansible22:12
jpvlsmvQuick (I think) question, where do I configure my (physical) hosts so that their lxc containers can communicate across hosts?  i.e. on host1_utility_container I can't yet ping host2_utility_container's address.22:21
*** luksky has quit IRC22:25
*** lemko7 has joined #openstack-ansible22:29
*** lemko has quit IRC22:30
*** lemko7 is now known as lemko22:30
djhankbjpvlsmv - this is where those would be set: https://opendev.org/openstack/openstack-ansible/src/branch/master/etc/openstack_deploy/openstack_user_config.yml.example#L240-L25022:40
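The relevant shape from the linked example: the management network is declared once under global_overrides and each container gets an eth1 plugged into br-mgmt on its host (bridge and queue names below are the stock OSA ones; adjust to the local layout):

    global_overrides:
      provider_networks:
        - network:
            container_bridge: "br-mgmt"
            container_type: "veth"
            container_interface: "eth1"
            ip_from_q: "container"
            type: "raw"
            group_binds:
              - all_containers
              - hosts
            is_container_address: true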
*** PrinzElvis has quit IRC22:41
*** PrinzElvis has joined #openstack-ansible22:41
djhankbI think you should end up with an "eth0" in each container bound to an internal-only bridge and an "eth1" that would bridge to your management network22:42
djhankb/etc/network/interfaces.d/lxc-net-bridge.cfg is what eth0 bridges to22:43
jpvlsmvright, the eth0 connects to the lxcbr0 and I can ping the other containers on this host, so host1_utility can ping host1_galera with either the 172.29.x.y or 10.0.x.y galera's address22:43
jpvlsmvis it Neutron that would put the traffic into & out of a tunnel?22:45
djhankbYes for VXLAN I assume? IIRC that is more for Instance traffic to controller22:46
*** waxfire has quit IRC22:46
jpvlsmvah... likely "Tunneling cannot be enabled without the local_ip"...22:47
*** waxfire has joined #openstack-ansible22:47
jpvlsmv(error message from neutron-linuxbridge-agent.log)22:47
djhankbYes, VXLAN works by sending traffic back to a controller node over Multicast, IIRC you need to use regular bridged networks for containers22:48
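For reference, that error means the linuxbridge agent's vxlan section has no tunnel address; a minimal sketch of what it expects (the IP must be a real address on the host's tunnel network; 172.29.240.x is the stock OSA tunnel range, normally filled in from the "tunnel" provider network):

    # /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    [vxlan]
    enable_vxlan = True
    local_ip = 172.29.240.11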
*** LowKey has quit IRC22:53
