Friday, 2021-01-15

02:34 <openstackgerrit> Satish Patel proposed openstack/openstack-ansible-os_senlin master: This is typo error causing issue to re-install senlin  https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/770878
09:32 <ebbex> 21:07 < jrosser> well this was the most obvious and smallest change to make https://review.opendev.org/q/topic:%22osa-new-pip%22+(status:open%20OR%20status:merged)
09:37 <jrosser> ebbex: morning!
09:38 <ebbex> Apologies, catching up on the chat and checking out links. That line was apparently pasted from my buffer.
09:40 <ebbex> Though I remember looking at this https://review.opendev.org/c/openstack/openstack-ansible-repo_build/+/348775 for some reason.
09:43 <ebbex> I'm getting nowhere with the https://review.opendev.org/c/openstack/openstack-ansible-ops/+/741997 stuff. I can see the uri request go into kibana/nodejs and out to elasticsearch, then it disappears and eventually times out.
10:12 <kleini> How can I configure a common cinder volume backend host name for the different cinder-volume instances in different cinder-volume containers on the infra hosts? I use Ceph as the backend, and every cinder-volume instance should be able to deal with all Ceph volumes.
10:57 <admin0> kleini, your question is not clear
10:57 <admin0> do you have different pools you want to expose?
10:57 <admin0> because the volume backend name is the same for all
11:02 <kleini> Unfortunately the host name is part of the cinder-volume instance identity, and that goes into the host value of the volume. This hostname contains the container ID.
11:04 <kleini> http://paste.openstack.org/show/801656/ <- see the os-vol-host-attr:host value
11:05 <kleini> so the controller2-cinder-volumes-container-640d1209@rbd instance will not feel responsible for the volume
11:09 <admin0> hmm.. never looked into this.
11:09 <admin0> so if the original container is down, it stops working?
11:24 <jrosser> kleini: there is a trail for this that starts here https://github.com/openstack/openstack-ansible-os_cinder/commit/c148d77e29af6faebc1c9b012ae08aed447cd179
11:27 <kleini> thanks jrosser. that looks promising, but first I need to fix my broken additional compute node
11:28 <admin0> jrosser, how do I use this? just cherry-pick it?
11:30 <jrosser> I don't understand, sorry
11:30 <admin0> sorry .. I understood it incorrectly
11:31 <kleini> admin0: you need to check whether the code of that commit is in your /etc/ansible/roles/os_cinder role on the deployment host. If not, consider cherry-picking it.
11:32 <kleini> it depends on where that change has been released and backported
11:35 <admin0> my latest deployment with ceph is on 21.2.0 .. it's not there
11:35 <admin0> 21.2.1, sorry
11:36 <admin0> I guess I can just copy this file into the folder and rerun it
11:36 <CeeMac> on a semi-related note
11:36 <CeeMac> is there a way to configure OSA to deploy cinder-volume in HA without using ceph?
11:36 <CeeMac> I don't mind active/passive if there are restrictions
11:37 <jrosser> kleini: http://paste.openstack.org/show/801658/
11:38 <jrosser> what gets put in os-vol-host-attr:host sure is confusing though
11:42 <kleini> it doesn't matter, as long as all volume instances feel responsible for these Ceph volumes
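A rough sketch of the configuration under discussion, assuming the linked os_cinder change is present in the deployed role: setting a shared `backend_host` on the backend definition makes every cinder-volume container register under the same name, so os-vol-host-attr:host becomes rbd@rbd rather than containing a per-container ID. The surrounding keys follow the usual OSA ceph example and may differ per deployment.

    cinder_backends:
      rbd:
        volume_driver: cinder.volume.drivers.rbd.RBDDriver
        volume_backend_name: rbd
        rbd_pool: volumes
        rbd_ceph_conf: /etc/ceph/ceph.conf
        rbd_user: cinder
        # shared host identity: all cinder-volume instances register as
        # rbd@rbd and any of them can manage any volume in the backend
        backend_host: rbd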
11:44 <kleini> jrosser: there is an additional step in scripts/add-compute.sh that is not in the Ussuri documentation for scaling the cloud. I'll try to provide a fix
11:45 <kleini> playbooks/openstack-hosts-setup.yml
13:20 <kleini> I still have no luck with my additional compute node. Either placement or the Nova scheduler is not considering the new node, even though it is listed properly in the resource providers, the compute services and cell_v2. I need to take care of other tasks now. Maybe somebody has an idea what could cause this; otherwise I'll have to debug placement and the Nova scheduler.
13:50 <spatel> noonedeadpunk: how did you verify this patch? - https://review.opendev.org/c/openstack/senlin/+/749874
13:50 <spatel> it's not working for me.
13:51 <noonedeadpunk> what do you mean by not working?
13:52 <spatel> I have a dedicated HAProxy box, and in that scenario senlin tries to talk to the external interface of HAProxy and cannot communicate because of asymmetric routing
13:53 <spatel> so I used your patch to set interface=internal, but it's still using the public interface for keystone (I can see that in tcpdump)
13:54 <spatel> it looks like senlin acts like a normal end user, which mostly uses the external/public endpoint to talk to services.
13:55 <spatel> If you look at other services, they have a dedicated section for neutron/nova to define the endpoint, but senlin tries to discover it itself, just like end users do.
13:56 <spatel> noonedeadpunk: ^^
13:57 <spatel> I am going to play in the lab to collect more data and understand what is going on.
13:57 <spatel> by the way, I found my issue related to the senlin re-install :) patch is here - https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/770878
13:58 <noonedeadpunk> hm.... I think I targeted my patch at auth only through the private endpoint...
13:58 <noonedeadpunk> But I never checked what endpoints it gets when it asks for them
13:58 <spatel> I noticed in tcpdump that it uses the internal interface to collect all the endpoints, but then tries to use the public endpoints to talk to services
13:59 <noonedeadpunk> so yeah, I could have missed smth
13:59 <noonedeadpunk> yeah, I see, doh...
13:59 <noonedeadpunk> Well, I was just testing a different thing :(
14:00 <spatel> last night I was talking to erik about that in the senlin channel, if you check the history
14:01 <noonedeadpunk> yeah, see it
14:02 <spatel> This is a corner case: if you deploy HAProxy on a dedicated box, you will hit this bug :)
14:02 <noonedeadpunk> have you tried to adjust the sdk?
14:02 <admin0> kleini, jrosser .. is it safe to use https://github.com/openstack/openstack-ansible-os_cinder/blob/c148d77e29af6faebc1c9b012ae08aed447cd179/templates/cinder.conf.j2 for new builds? is nothing else required?
14:02 <spatel> Yes I did, but no impact
14:02 <noonedeadpunk> I see
14:02 <spatel> I will try again with a fresh lab and see
14:03 <noonedeadpunk> well, I guess this needs checking out the senlin code again to see where it goes wrong, and honestly I have no time for that at the moment....
14:03 <noonedeadpunk> But I'm absolutely sure it's smth that needs fixing
14:03 <noonedeadpunk> as it's weird that this hasn't been fixed for so long
14:05 <spatel> noonedeadpunk: no big deal :)
14:05 <spatel> this only has an impact if someone deploys haproxy on a dedicated box; otherwise everything should work fine.
14:06 <noonedeadpunk> well, talking to the public endpoint might be a big deal actually
14:06 <noonedeadpunk> that means that things are exposed
14:09 <jrosser> senlin is on our to-do list, and the senlin service has no route at all to the public endpoint
14:09 <jrosser> *in our deployment
14:14 <spatel> +1
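For context, the senlin patch referenced above adds an endpoint-interface setting to senlin's authentication options. If the deployed os_senlin role exposes the usual OSA config-override hook, pinning it to the internal endpoint might look like the sketch below in user_variables.yml; the override variable name and the option placement are assumptions based on OSA and senlin conventions, and as the discussion shows this does not yet change which endpoints senlin picks for its service-to-service calls.

    # assumption: the role provides senlin_senlin_conf_overrides and the
    # linked patch adds an "interface" option under [authentication]
    senlin_senlin_conf_overrides:
      authentication:
        interface: internal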
14:16 <spatel> noonedeadpunk: can we get a review/vote on this patch? - https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/770878
14:25 <openstackgerrit> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_senlin master: DNM - test patch for senlin tempest testing  https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/754045
14:42 <kleini> admin0: I don't know. I just discovered that one cinder-volume instance cannot take care of volumes created by another instance
14:43 <admin0> kleini, thanks for the update ..
14:44 <kleini> I can test that in a staging environment, but I need to take care of other tasks now
14:46 <andrewbonney> spatel: Thanks for sharing your neutron policy yesterday. We've worked around our issue as follows, if it's useful: http://paste.openstack.org/show/801662/
14:47 <spatel> andrewbonney: beautiful, so I can add that snippet to my user_variables.yml, right?
14:48 <andrewbonney> Yeah, obviously just swap in anything you've changed from the defaults in your own policy file
14:48 <spatel> awesome! I will try it out. we should add that snippet to the OSA example configs :)
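The actual rules live in the paste above. As a generic illustration of the mechanism only: the os_neutron role merges a `neutron_policy_overrides` dictionary from user_variables.yml into the deployed policy file. The rule below is hypothetical and is not the content of andrewbonney's paste.

    neutron_policy_overrides:
      # hypothetical entry for illustration - substitute the rules from
      # the paste, keeping any defaults you have already customised
      "create_network:shared": "rule:admin_only"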
14:52 <noonedeadpunk> will the V bump ever merge....
15:05 <openstackgerrit> Merged openstack/openstack-ansible-os_neutron master: Use PCI_SLOT_NAME cut instead of ls -ld cut more precisely  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/769700
15:29 <CeeMac> noonedeadpunk: hi
15:29 <noonedeadpunk> o/
15:30 <CeeMac> noonedeadpunk: we're trying to work out if there's been any work on cinder-volume HA outwith ceph, and so far it doesn't look like it.
15:30 <CeeMac> have you much experience with that side of things? I know you've been looking at pacemaker for masakari
15:31 <noonedeadpunk> nah, I was using only ceph and nfs as cinder-volume backends...
15:32 <noonedeadpunk> and for both of them you don't need anything fancy
15:32 <CeeMac> ah, ok thanks
15:32 <CeeMac> just trying to work through some operational processes and understand cinder a bit more with someone who has joined my team
15:33 <CeeMac> we're just covering off the volume management aspect, and the impact of the cinder-volume service being unavailable during a storage node reboot or maintenance event
15:34 <CeeMac> and wondering how big an issue (or not) not having HA for cinder-volume is
15:34 <CeeMac> I saw active-active was introduced for ceph in stein
15:34 <noonedeadpunk> if you have a shared filesystem, I guess you can probably just set the same `volume_backend_name` across cinder-volumes
15:35 <CeeMac> also trying to work out how best to implement active/passive with external iscsi
15:35 <noonedeadpunk> which will make cinder-volume work in active/passive, I guess
15:35 <CeeMac> trying to work out where the backend_host fits in
15:36 <CeeMac> well, we have the same backend name across the cinder nodes, but the host name is obviously unique
15:36 <noonedeadpunk> but actually I see nothing wrong with active/passive, since in the DB the volumes are owned by the same name, so if one service fails, another one will be able to pick up operations for the volume
15:36 <CeeMac> so host1@backend#pool, host2@backend#pool etc
15:37 <noonedeadpunk> `volume_backend_name` will set that host1 to smth unique
15:37 <CeeMac> is that the same as the backend_host then, or something different
15:37 <CeeMac> it's as simple as that, no need for pacemaker on cinder nodes?
15:38 <CeeMac> the cinder docs are a little confusing/misleading
15:38 <noonedeadpunk> so with active/passive on ceph I had smth like this: http://paste.openstack.org/show/801670/
15:39 <noonedeadpunk> oh, sorry, not `volume_backend_name` but `backend_host`
15:39 <CeeMac> ah, ok, good, that makes more sense then!
15:39 <CeeMac> so you would have had backend_host=rbd
15:39 <noonedeadpunk> yep, exactly
15:40 <noonedeadpunk> I think the main point here is that the storage should be available across cinder-volume instances
15:40 <CeeMac> yeah
15:40 <CeeMac> we have that
15:41 <CeeMac> it was just the unique service names that were throwing me
15:41 <noonedeadpunk> yeah
15:41 <noonedeadpunk> and cinder-manage has a cli that allows migrating a volume to the new name in the DB
15:42 <CeeMac> so then, if we had the backend_host configured right for active/passive, and the backend type set right to group the backends
15:42 <CeeMac> everything is pretty resilient
15:42 <noonedeadpunk> Yeah, I guess so
15:42 <CeeMac> that's the 'cinder-manage volume update_host' then?
15:43 <noonedeadpunk> yeah, exactly
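For volumes that already exist under a per-container name, this command rewrites the owner recorded in the DB. A sketch, reusing the host name from kleini's earlier paste and the shared name rbd purely as illustration:

    cinder-manage volume update_host \
        --currenthost controller2-cinder-volumes-container-640d1209@rbd \
        --newhost rbd@rbd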
15:52 <CeeMac> cool, I'll lab it up and see what breaks :)
15:52 <CeeMac> thanks noonedeadpunk
15:53 <CeeMac> if I added that backend_host in cinder.conf and restarted cinder-volume, would it automatically create the new volume service with the new name?
15:54 <CeeMac> s/would/should
16:07 <CeeMac> noonedeadpunk: quick follow-up question: how do the storage nodes know which is active and which is passive for each backend?
16:10 <noonedeadpunk> yes, it should create the new service automatically
16:11 <noonedeadpunk> well, in a perfect setup, iirc they're using zookeeper for this
16:11 <noonedeadpunk> for coordination
16:12 <noonedeadpunk> But they still have some source node name or smth while messaging.. Tbh I'm not sure here. Maybe it doesn't even matter which cinder-volume performs an action, since from the DB and volume side that should not matter much
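The coordination mentioned here is cinder's tooz-based locking. A sketch of how it might be wired in an OSA deployment, assuming the standard `cinder_cinder_conf_overrides` hook; the zookeeper URL is a placeholder. True active/active additionally relies on cinder's `cluster` option rather than `backend_host`.

    cinder_cinder_conf_overrides:
      coordination:
        # placeholder endpoint - point at a real zookeeper (or other
        # tooz-supported) cluster
        backend_url: "zookeeper://172.29.236.10:2181"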
17:53 <CeeMac> yeah, I was trying to work out if we need some kind of cluster implementation or not
17:53 <CeeMac> the cinder HA documentation mentions pacemaker, but I don't see any examples for cinder-volume active/passive
18:00 <CeeMac> thanks again noonedeadpunk
18:02 <noonedeadpunk> I have no idea why pacemaker should be used...
18:07 <noonedeadpunk> I believe there might be scenarios... but dunno
18:15 <CeeMac> yeah, it's a mystery
18:17 <CeeMac> I'll try it out on Monday in my lab and see if I can work through it
19:46 <openstackgerrit> Merged openstack/openstack-ansible stable/victoria: Bump Victoria for the release  https://review.opendev.org/c/openstack/openstack-ansible/+/770073
19:49 <noonedeadpunk> thank god
19:50 <noonedeadpunk> I thought that would never happen....
19:52 <jrosser> \o/ awesome
