Friday, 2023-03-17

09:01 <admin1> \o
09:38 <noonedeadpunk> o/
10:03 <damiandabrowski> hey guys, just fyi: i injured my shoulder on a snowboard yesterday. hopefully i will be able to be back to work in a week but we will see
12:45 <admin1> damiandabrowski, happy netflix binging ..
13:14 <spatel> noonedeadpunk i got a discount code, i told them about the contribution and they said you qualify for a code
13:14 <noonedeadpunk> ah, sweet
13:14 <noonedeadpunk> good news then)
13:26 <spatel> yep!!
13:26 <spatel> next week i will book hotel and flight
13:26 <spatel> will see you there..
13:26 <spatel> noonedeadpunk are you presenting?
13:29 <noonedeadpunk> nope
13:29 <noonedeadpunk> My talk was not approved, but I think it's for the best, as it will be combined with PTG, so I assume I'd have no time for it anyway
13:30 <noonedeadpunk> I hope that at least the OSA onboarding will be approved...
14:14 <noonedeadpunk> jrosser: hm, I wonder how we are going to access facts from other hosts now, since they're not gonna be added to hostvars?
14:24 <noonedeadpunk> oh maybe it's already covered...
14:31 <jrosser> Isn't it something like hostvars[host].ansible_facts['thing']
14:31 <jrosser> tbh I am really not sure if what I did in the swift patch is right or not
14:31 <jrosser> the code is pretty odd
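The lookup jrosser describes would look something like this in a task (a sketch only; the host name `compute1` and the fact path are placeholders, and facts for that host must already have been gathered in the same run):

```yaml
# Sketch: reading another host's gathered facts via hostvars
# (placeholder names; requires fact gathering for that host first).
- name: Show another host's default IPv4 address
  ansible.builtin.debug:
    msg: "{{ hostvars['compute1']['ansible_facts']['default_ipv4']['address'] }}"
```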
14:39 <spatel> noonedeadpunk +1 my company told me to give a talk but i think i am not ready yet.. maybe next year.
14:39 <noonedeadpunk> spatel: it's already quite late to apply :D
14:39 <spatel> Yesss all booked
14:40 <spatel> next year i will try to do some showoff stuff about what i am doing in my cave
14:41 <spatel> We are building a new datacenter soon and trashing the older one.
14:41 <spatel> Developers started working on DPDK-based guest deployment, so hopefully we will see some light at the end of the tunnel.
15:03 <lowercase> jrosser: https://paste.opendev.org/show/bdT18m6F0reIlrEKG2pg/
15:03 <lowercase> in case anyone ever reports this issue.
15:03 <lowercase> The answer is:
15:03 <lowercase> rm /root/.ansible/plugins/connection/ssh.py
15:05 <lowercase> So what's happening is that ansible is pulling plugins from different sources. The ssh.py in /root/.ansible is from early 2021, so maybe plugins were handled differently back then. In 2021 the chroot path was removed from ssh.py and is no longer needed, thus the error. After spending two days tracking it down, it was as simple as purging the local ~/.ansible roles/collections
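The shadowing lowercase describes can be illustrated with a scratch directory (a sketch; in the real incident the file was /root/.ansible/plugins/connection/ssh.py, and Ansible searches the per-user plugin path before the plugins shipped with ansible-core, so a stale 2021-era copy wins):

```shell
# Demo of the cleanup using a throwaway directory instead of $HOME.
demo_home=/tmp/ansible-shadow-demo
mkdir -p "$demo_home/.ansible/plugins/connection"
touch "$demo_home/.ansible/plugins/connection/ssh.py"  # stands in for the stale copy

stale="$demo_home/.ansible/plugins/connection/ssh.py"
if [ -f "$stale" ]; then
    echo "user-level ssh.py found; removing it falls back to the packaged plugin"
    rm "$stale"
fi
```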
15:19 <noonedeadpunk> Well... It was never supposed to be in ~/.ansible/plugins
15:20 <noonedeadpunk> Oh, well, if someone has run tox on production - that could result in this happening
15:23 <lowercase> I don't recall ever running tox.
15:38 <noonedeadpunk> jrosser: btw on Xena I don't see anything weird with add-compute.sh. Except this `echo` /o\ https://opendev.org/openstack/openstack-ansible/src/branch/master/scripts/add-compute.sh#L52
15:44 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Drop `echo` from add-compute.sh script  https://review.opendev.org/c/openstack/openstack-ansible/+/877813
15:45 <noonedeadpunk> ^ this one... So insulting....
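For context, a stray `echo` in front of a command (presumably a debugging leftover, per the patch title above) prints the command line instead of running it. A generic illustration, not the actual add-compute.sh contents:

```shell
# A leftover `echo` turns a command into a no-op print.
cmd="mkdir -p /tmp/echo-bug-demo"
echo $cmd   # only prints the command line, does nothing
$cmd        # actually creates the directory
```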
16:34 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Add openstack_hosts_file tag  https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/877824
16:40 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add documentation on refreshing hosts file  https://review.opendev.org/c/openstack/openstack-ansible/+/877825
17:09 <Stephen_OSA_> Good afternoon
17:10 <Stephen_OSA_> I'd like to start by saying thanks, I like what OSA can do.
17:10 <Stephen_OSA_> I have a question about magnum
17:10 <Stephen_OSA_> When deploying clusters, my kubernetes master has a container running a service that is failing to auth to keystone with http, but the odd thing is it first makes a request to https and succeeds, then subsequently it tries http and fails twice, and this pattern repeats
17:23 <noonedeadpunk> o/
17:24 <noonedeadpunk> I think I saw that in our deployments as well... But we just let that happen. To be frank I don't really know the answer, but at the same time I did not dig deep enough to understand the reasons behind the issue
17:28 <Stephen_OSA_> it appears to prevent that service from continuing to the next step ;)
17:28 <Stephen_OSA_> so the cluster creation times out
17:30 <noonedeadpunk> aha. no, that was not the case with us then :D
17:30 <noonedeadpunk> I just see keystone auth failures in magnum logs for the trustee or smth like that
17:32 <Stephen_OSA_> Yeah I was getting those too, but I probably got them at an earlier stage. From what I can tell, magnum/openstack expects magnum to be on the public endpoint, so moving magnum to a public place (with an external LB) solved this issue.
17:32 <noonedeadpunk> I think the only valuable thing we have outside of the defaults is this - https://paste.openstack.org/show/bVO37HNFkEVeazTF75pC/
17:33 <Stephen_OSA_> I did manually try to set magnum to internal but no luck, and it probably doesn't make sense anyway, because it needs the internet for pulling all the container deps.
17:33 <noonedeadpunk> I can also recall patching communication through the correct endpoints quite long ago though
17:34 <noonedeadpunk> I think it was that - https://opendev.org/openstack/openstack-ansible-os_magnum/src/branch/master/templates/magnum.conf.j2#L66
17:36 <noonedeadpunk> nah, it was smth different
17:37 <noonedeadpunk> probably I was thinking about https://opendev.org/openstack/openstack-ansible-os_heat/commit/288634ce0bf042bed614b3f764753d7b65a7170f
17:38 <noonedeadpunk> It was also affecting magnum iirc
17:54 <jrosser> Stephen_OSA_: my team did a lot of work on magnum and endpoints, which is basically a big mess - we ended up making some bugfixes to heat as well
17:54 <Stephen_OSA_> ahh is that recent, jrosser?
17:54 <jrosser> Stephen_OSA_: we don't actually run magnum currently but i may be able to look through my notes next week to see what was involved
17:54 <Stephen_OSA_> and thanks btw noonedeadpunk, yeah I think I have those changes you listed
17:55 <Stephen_OSA_> yeah that would be cool, thanks jrosser
17:55 <jrosser> it was a while ago tbh but there are a number of settings involved iirc
17:55 <jrosser> Stephen_OSA_: will you be around in irc next week?
17:55 <Stephen_OSA_> I have been tinkering with this off and on for a couple of weeks; I had to shelve it since I wasn't making progress, but I keep coming back to it like an addiction
17:56 <Stephen_OSA_> yeah I'll try to start logging in
17:56 <jrosser> our trouble was that the keystone url being injected into the master node for the heat callback was the internal url
17:57 <jrosser> when it was not possible to call that endpoint from a user vm (quite sensibly)
17:57 <Stephen_OSA_> hmm yeah I am having a similar issue, where the url is public at first, but then subsequent calls use the wrong protocol (http vs https), which doesn't pass my proxy ; ;
17:58 <jrosser> ^ http proxy?
17:58 <Stephen_OSA_> yes
17:58 <jrosser> oh ha you are in the same position as me
17:58 <Stephen_OSA_> it's trying to hit my external haproxy, but it can't reasonably serve http and https on port 5000
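For what it's worth, HAProxy can split TLS from plaintext on a single port by sniffing for a TLS ClientHello in TCP mode. This is only a sketch under the assumption that separate `keystone_tls` and `keystone_plain` backends exist (both names invented here); fixing the clients to use https consistently is the cleaner solution, and plain-http connections will sit out the inspect delay before being routed:

```
# Hypothetical HAProxy frontend routing TLS and plaintext on one port.
frontend keystone_5000
    mode tcp
    bind :5000
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend keystone_tls if { req.ssl_hello_type 1 }
    default_backend keystone_plain
```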
17:58 <Stephen_OSA_> interesting
17:59 <jrosser> ok well next week i will check over what we did
17:59 <jrosser> tbh i did give up in the end :/
18:00 <jrosser> but we got it to the point of deploying
18:00 <Stephen_OSA_> ;) I am trying to hehe, but I love the idea so much... though even if I get this working I have a lot of work, as I originally wanted OSA offline, which I have done, but adding magnum has made me bring it back online (internet access), which isn't great. I'd be interested in whatever you find.
18:01 <jrosser> oh that is nice to have an offline install
18:02 <jrosser> i made some useful patch if you use git mirrors btw
18:03 <jrosser> https://github.com/openstack/openstack-ansible/commit/df4758ab1b68a0aa0eb850fbafc6433eb1acfd0b
18:04 <jrosser> there's also now a way to do the same for the OSA ansible roles and also ansible collections, all to come from local mirrors
18:07 <Stephen_OSA_> nice, I've bookmarked that, yeah I'll have to refactor using that approach... ;)
18:07 <jrosser> Stephen_OSA_: for debugging the magnum thing really all you can do is grep through all the data dropped through cloud-init for the container agent and try to find the "wrong" urls
18:08 <jrosser> or the word 'internal', because it might also be using the service catalog to determine the endpoint
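jrosser's suggestion boils down to something like the following (demo data under /tmp here; on a real magnum master the cloud-init payloads typically live under /var/lib/cloud/instance/, e.g. user-data.txt and the scripts directory, and the example URL is invented):

```shell
# Stand-in for grepping cloud-init payloads for unreachable endpoints.
mkdir -p /tmp/magnum-debug
cat > /tmp/magnum-debug/user-data.txt <<'EOF'
AUTH_URL="http://172.29.236.9:5000/v3"
EOF
# On the real master: grep -rE 'http://|internal' /var/lib/cloud/instance/
grep -rE 'http://|internal' /tmp/magnum-debug
```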
18:10 <Stephen_OSA_> I'll have to look into cloud-init, I haven't messed with that before; I've been jumping on the master and looking at journalctl. Yeah the interesting thing is, I presume the service catalog should match openstack endpoint list? which doesn't have an http version of my endpoint, which is what I expect
18:11 <jrosser> it's useful to look through how all the data lands in the master node
18:12 <jrosser> there's a slew of shell scripts (i had to submit patches to magnum to make those support proxies as well)
18:12 <jrosser> so it could easily be that they've broken those, as i know there is no formal testing with a proxy
18:13 <Stephen_OSA_> ahh, yeah I imagine it complicates things
18:18 <jrosser> Stephen_OSA_: also this, but i've not finished it yet https://review.opendev.org/c/openstack/openstack-ansible/+/870820
18:18 <jrosser> if there are any other difficulties with offline deployments then i'm interested in those
18:21 <Stephen_OSA_> yeah the big things were acquiring the packages and pip files, but perhaps there was a better way. I recall I had to pull the baseline container image and reference it locally vs pulling it from online. Also some packages like vnc? I think I had to pull that, as it's used to access the vms from the web page for example. I'd like to do a blog about it some day...
18:23 <Stephen_OSA_> for cloud-init and magnum, is that configured out of the box with OSA?
18:25 <Stephen_OSA_> The packages and pip files are hosted on my offline deployment node, so that works for me... and the repo container that is built on deployment, I saved it so that it doesn't have to be rebuilt each deployment (pulling items from the internet); I now just deploy that container... but your way of having the repos offline should handle some of it.

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!