Monday, 2024-03-25

farbodHi, I get this error for a mount on the infra repo: https://paste.opendev.org/show/bWzrGQqEnKzT3KFDRi9Y/ 06:38
farbodI am deploying on cloud VMs and networking seems ok. I tried multiple times and get the same error06:38
farbodand here are the logs: https://paste.opendev.org/show/bpSMO7sV9Ypid97w61DJ/06:56
farbodany idea?06:56
farbodhere are more logs for var/www : https://paste.opendev.org/show/bwKeDT1hgDzVqHkWdO5v/07:51
noonedeadpunkhey08:27
noonedeadpunkfarbod: looks like gluster cluster is not really healthy based on the output08:28
farbodI checked the logs but didn't find anything useful. Any suggestions? Also I tried reinstalling and deleting the lxc containers but it didn't work08:31
noonedeadpunkhm, I can recall some patch to docs regarding gluster, but can't find it now08:31
noonedeadpunkfarbod: you deleted containers with removing all data?08:32
farbodyes i did08:33
noonedeadpunkugh, I don't have gluster anywhere on my deployments to check some commands for debug....08:33
noonedeadpunkand you tried to destroy repo_containers?08:35
farbodyes 08:35
noonedeadpunkthrough lxc-containers-destroy playbook? Just verifying:)08:36
jrosseri think that this is going to be troublesome08:36
farbodnoonedeadpunk: yes08:37
noonedeadpunk(I think it should also try to remove all bind-mounted paths)08:37
farbodhow?08:37
farbodWant me to provide configs?08:38
jrosserfarbod: this is not about the config08:38
jrosserit is more that the deployment really expects the environment to be properly clean when you run the playbooks08:38
jrosseryou might still have /openstack/glusterd/<stuff> left from before, which will certainly make trouble for you if you are trying to do a completely new deployment on the same host08:39
noonedeadpunkyeah, but that should be dropped with lxc-containers-destroy...08:40
noonedeadpunk(at least I'd expect it to be)08:40
jrossereven the data on the host /openstack/ directory?08:40
noonedeadpunkI think it does, if you're answering YES to the questions08:40
noonedeadpunkhttps://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/containers-lxc-destroy.yml#L66-L7408:41
jrosserlooks like that would not cover `pool/openstack on /var/lib/glusterd type zfs (rw,xattr,posixacl)`08:41
noonedeadpunkoh well08:42
noonedeadpunkhttps://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/group_vars/repo_all.yml#L2208:42
noonedeadpunkindeed, it's not covering that.08:42
noonedeadpunkprobably it should?08:42
jrosserprobably..... 08:43
noonedeadpunkfarbod: yeah, so try to drop containers, rm -rf /openstack/glusterd and create them again?08:44
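A rough sketch of that clean-up sequence, assuming the playbook names from the master branch linked above and that the repo containers live in the repo_all group (adjust limits and paths to your own inventory):
```shell
# On the deploy host: destroy the repo containers and answer "yes" when asked
# whether to also remove the container data (older branches spell this
# playbook lxc-containers-destroy.yml).
openstack-ansible containers-lxc-destroy.yml --limit repo_all

# On each controller: remove the gluster state that the destroy playbook does
# not touch, as discussed above.
rm -rf /openstack/glusterd

# Back on the deploy host: recreate the containers and re-run the repo deployment.
openstack-ansible containers-lxc-create.yml --limit repo_all,lxc_hosts
openstack-ansible repo-install.yml
```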
farbodok08:45
noonedeadpunkor at least worth documenting that08:45
noonedeadpunkreally wonder what the expected behaviour should be in this case08:46
noonedeadpunkLike - gluster is kinda container data08:46
jrosserimho this is going to then fail during re-deployment of the repo container in a HA deployment08:51
jrosseras the gluster UUID of the node will change and need to be removed from all the other cluster members before continuing08:51
jrosseri think that andrewbonney spent quite some time here looking to fix this case when we did our Focal->Jammy upgrades which amount to the same thing (loss of gluster data when re-pxe the hosts)08:53
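For the HA case jrosser describes, the manual step is roughly the following (a sketch using standard gluster CLI commands; the container names are placeholders):
```shell
# On one of the surviving repo containers: see which peer is now stale.
gluster peer status

# Forget the stale identity of the re-deployed node. If that node still owns a
# brick in a volume, the brick may have to be removed or replaced first
# (e.g. with "gluster volume remove-brick ... force") before the detach succeeds.
gluster peer detach <old-repo-container> force

# Then probe the freshly re-created container so it can join with its new UUID.
gluster peer probe <new-repo-container>
```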
noonedeadpunkBut I guess you then just destroy containers without removing the data?09:03
farbodI encountered another problem: https://paste.opendev.org/show/bPEppMD7tkGs8iQGLuZP/09:09
farbodwhat should i check?09:09
noonedeadpunkfarbod: are you sure everything is fine with networking?09:10
noonedeadpunkI guess you really should check on the gluster state and why it's not operational09:10
farbodI have ping from the deployment host to the infra hosts and containers.09:11
noonedeadpunkwhat's `gluster volume status`, `gluster peer status`, `gluster pool list`?09:12
noonedeadpunkfarbod: well, ping doesn't mean you're not filtering 24007/24008 TCP/UDP ports09:13
noonedeadpunk*doesn't show09:13
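A quick way to check that, assuming nc is available in the containers (peer names below are placeholders):
```shell
# From each repo container, confirm glusterd is listening locally...
ss -tlnp | grep 24007
# ...and that the management ports mentioned above are reachable on the peers.
nc -zv <peer-repo-container> 24007
nc -zv <peer-repo-container> 24008
```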
farbodhttps://paste.opendev.org/show/bXTbukfbMi9CysLis5bv/09:14
farbod`gluster volume status` doesn't execute09:15
farbodroot@infra1-repo-container-ddba2e05:~# gluster volume status09:16
farbodStaging failed on infra2-repo-container-fc3b079d. Please check log file for details.09:16
farbodStaging failed on infra3-repo-container-d9264ea4. Please check log file for details.09:16
noonedeadpunkI really have quite limited experience with gluster as we're just using cephfs instead of it09:24
farbodI am using ceph as my backend storage. Is there a problem?09:24
noonedeadpunkno, it's not related 09:25
noonedeadpunkit's more that - openstack-ansible installs glusterfs by default, but it can be pretty much any shared FS instead09:25
noonedeadpunklike NFS or CephFS09:26
noonedeadpunkand you can disable installation of glusterfs in favor of some different fs09:26
noonedeadpunk(which we did as we had cephfs anyway)09:26
noonedeadpunkso, what's in the logs?09:27
farbodwhich logs? glusterd?09:27
farbodhttps://paste.opendev.org/show/bjJaEC9BMKm8j0nqkvhZ/09:28
noonedeadpunkso... and this time you dropped all 3 repo containers at the same time and wiped /openstack/glusterd on each controller?09:31
farbodyes09:31
noonedeadpunkhuh09:32
noonedeadpunkit's weird kinda, as apparently it expects some metadata that's not there.....09:33
noonedeadpunkI guess I'd need to try to reproduce flushing repo container to help you out...09:33
jrosserisn't that missing data because /openstack/glusterd got wiped?09:56
jrosserand that's pretty much the expected behaviour when re-creating repo containers in this situation, i.e. re-initialise everything in /var/lib/glusterd09:57
jrosserfarbod: it looks like you perhaps have trouble with gluster locks? https://docs.gluster.org/en/main/Troubleshooting/troubleshooting-filelocks/10:08
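Following the guide linked above, the lock check boils down to something like this (the volume name is a placeholder):
```shell
# Dump the volume state; the dump files land under /var/run/gluster/ on the brick servers.
gluster volume statedump <volume>

# Look for blocked/granted lock entries in the dump files, then clear any stuck
# ones with "gluster volume clear-locks" as described in the linked guide.
grep -A2 'lock' /var/run/gluster/*.dump.*
```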
noonedeadpunkjrosser: but I thought that re-running role should get gluster configured fully?10:20
jrosseri can only guess that there is some left over state somewhere10:20
jrosserthe playbooks deploy from clean host -> working host, and as far as I can see farbod is trying to deploy onto a partially cleaned / recycled host10:21
jrosseras i say andrewbonney already put a large effort into trying to automate the case where you totally redeploy one host, and it is not really possible10:22
jrossermanual intervention is required10:22
farbodjrosser: what should i do for locks?10:25
jrosseri can only suggest using the glusterfs documentation and following their debug steps10:25
jrosserthe openstack-ansible playbooks are not able to fix some partially broken gluster setup, they just deploy it from a known clean state10:26
noonedeadpunkjrosser: yeah, but we should be able to clean up the state for all hosts losing the data relatively easily?10:29
noonedeadpunkas I guess that's the intention here10:29
jrosseryes that should be possible10:30
jrosseri can try an infra AIO10:30
jrosserfarbod: what operating system do you use?10:31
farbodubuntu 22.0410:31
jrosserfarbod: it seems to be OK in my test here https://paste.opendev.org/show/brhn8zbVktnw6npEil3Y/11:29
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-os_neutron stable/2023.2: Restart OVN on certificate changes  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/91401311:36
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-os_neutron stable/2023.1: Restart OVN on certificate changes  https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/91401411:37
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-os_horizon stable/2023.2: Do not change mode of files recursively  https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/91401511:38
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-os_horizon stable/2023.1: Do not change mode of files recursively  https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/91401611:38
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-os_horizon stable/zed: Do not change mode of files recursively  https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/91401711:39
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Add service policies defenition  https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/91408613:08
noonedeadpunkThat is going to be quite big topic I assume....13:59
noonedeadpunkI need some help deciding on logic of how policies should be defined actually....14:25
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-plugins master: Leave only unique policies for __mq_policies  https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/91409214:26
noonedeadpunkLike - I'm not sure if service policies should override or extend default policies...14:27
noonedeadpunkand where to define them, as the current state feels very confusing14:27
noonedeadpunkso we have `oslomsg_rpc_policies` defined in group_vars: https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/group_vars/all/oslo-messaging.yml#L2114:27
noonedeadpunkAnd then we kinda merge things here: https://opendev.org/openstack/openstack-ansible-plugins/src/branch/master/roles/mq_setup/tasks/main.yml#L2114:28
noonedeadpunkbut we kinda don't have _oslomsg_rpc_policies defined anywhere now, and it was assumed that we pass it during role include14:29
noonedeadpunkand basically the question: https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/91408614:30
noonedeadpunkso I proposed doing `_oslomsg_rpc_policies: "{{ cinder_oslomsg_rpc_policies }}"`14:30
noonedeadpunkbut thinking about it - it's kinda obscure, as cinder_oslomsg_rpc_policies is an empty list, but that does not mean the vhost won't have any policies14:31
noonedeadpunkas it will have the defaults defined in oslomsg_rpc_policies14:31
noonedeadpunkSo thinking about it, I was wondering if we should set `cinder_oslomsg_rpc_policies: "{{ oslomsg_rpc_policies }}"` by default...14:32
noonedeadpunkbut then we should name it somehow differently or remove the list merging from the role, and neither is perfect14:32
noonedeadpunkor leave it this obscure way, and just promote it as the ability to merge/partially override defaults, which is quite neat...14:33
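For reference, the two wirings being weighed look roughly like this (a sketch of the alternatives, not the merged change; the variable names are the ones mentioned above):
```yaml
# Option 1 (what 914086 proposes): pass the per-service list into mq_setup and
# let the role merge it with the global oslomsg_rpc_policies defaults.
_oslomsg_rpc_policies: "{{ cinder_oslomsg_rpc_policies }}"

# Option 2 (raised above): make the per-service variable carry the defaults
# itself, so the final vhost policies are visible in one place - but then the
# in-role list merging has to be renamed or removed.
cinder_oslomsg_rpc_policies: "{{ oslomsg_rpc_policies }}"
```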
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Add variable to globally control notifications enablement  https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/91410015:18
ThiagoCMCjrosser, just a quick update... I managed to deploy OSA AIO (2023.2) with Ceph (stable-8) last week, but IBM changed it again, removing a lot of other things from `ceph-ansible`, so it's failing again lol15:56
ThiagoCMCHave you tried it again? Or busy with other stuff...?15:57
jrosserThiagoCMC: well i did say before that the thing to do was concentrate on deploying Reef with stable-7.0 and find the vars needed for that15:57
jrosserthere is no point using stable-8.0 for anything because it is not a released version yet and will keep changing15:58
ThiagoCMCSure, I did that too, it works if OSA does not pin ceph_community_pin.pref and ceph_client_pin.pref, since it needs to come from Ubuntu's UCA. That's why I went to stable-8 now... I'm aiming at the next releases of all pieces anyway... It's okay! I'll keep trying.16:01
jrosserThiagoCMC: but what do you want for the next OSA release?16:02
jrosserit would be really great if you were able to make that deploy reef properly.....16:03
jrosserfeels like 99% of the understanding is ready16:03
jrosserif you make the changes to the OSA vars to default to reef, and to move the pins for the repos from quincy to reef then we can have that in the next release of OSA16:05
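As a sketch of that (the variable name is the ceph-ansible one; the exact OSA-side pin files to update should be taken from the current quincy defaults):
```yaml
# user_variables.yml - point ceph-ansible stable-7.0 at Reef instead of Quincy.
ceph_stable_release: reef
```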
noonedeadpunk++16:08
noonedeadpunkpinning doesn't sound like real blocker - we should be able to patch that nicely16:09
jrosserindeed it should be easy16:09
jrosserthis is kind of a nice low-hanging-fruit patch, if you already worked out something that's good16:09
jrosserceph-ansible stable8.0 is kind of a distraction to all this really16:11
ThiagoCMCI hope to deploy OpenStack Caracal and Ceph Reef with the next OSA and Ceph Ansible (hopefully with stable-8.0) on Ubuntu 24.04.16:11
ThiagoCMCHowever, if you folks think it's best to stick with Ceph Ansible stable-7.0, I can focus on it a bit more, and make a quick step-by-step guide.16:13
jrosserceph-ansible stable-8.0 is not released and will continue to change16:14
jrosserif you're going to spend time on it please submit patches to OSA to switch from quincy to reef using the stable-7.0 branch16:15
jrosseryou know also that OSA C won't be supported/tested at all on 24.0416:16
ThiagoCMCOkay. BTW, I joined Ceph Slack too, I'll keep in touch with IBM folks, in case we need more help.16:16
ThiagoCMCHmmm... Do you know which limitations are likely to exist on OSA C with 24.04?16:17
jrosserthat's a bit of a chicken/egg situation until we have a CI image for it16:17
ThiagoCMCGot it =P16:17
jrosserand also none of the actual openstack projects (nova / cinder etc) have done any testing on 24.04, and those release for Caracal next week16:19
ThiagoCMCRight, but Ubuntu 24.04 plans to include Caracal on it...16:21
ThiagoCMChttps://discourse.ubuntu.com/t/noble-numbat-release-notes/3989016:21
ThiagoCMCPerhaps even Ceph 19! lol16:21
jrosserAnd they will provide you support on that, of course16:22
ThiagoCMCFor 5 years =P16:23
jrosserI'm just saying that Canonical are prepared to provide packages for Caracal on 24.04, and provide you support on an openstack/python version that the upstream openstack does not support16:23
ThiagoCMCGood point...16:23
jrosserso if you deploy OSA Caracal on 24.04, find it broken due to python 3.12, and ask the nova team for help, you probably won't get it16:24
ThiagoCMCThank you for pointing this out!16:24
jrosserhere is where that supported version stuff gets defined https://governance.openstack.org/tc/reference/runtimes/2024.1.html16:25
ThiagoCMCCool! Well, Python 3.11 is still available on 24.04 (dev)16:27
noonedeadpunkThiagoCMC: available or default?16:40
ThiagoCMC`/usr/bin/python3: symbolic link to python3.12` =P16:42
noonedeadpunkWell, I've heard that py3.12 is breaking openstack quite heavily in multiple aspects16:42
noonedeadpunkthere was eventlet topic as one example16:43
ThiagoCMCYeah, I wasn't thinking about this until today, TBH.16:43
noonedeadpunkand then quite some things related to setuptools iirc16:44
noonedeadpunkand then projects stopping providing uwsgi scripts due to that, which was discussed to be tracked as a community goal16:44
noonedeadpunkI'm not really sure how Canonical is handling that actually16:45
ThiagoCMCIt seems wise to build a new Cloud with 22.04/Bobcat/Reef instead of going crazy with 24.04 at this time. Perhaps Caracal and Ceph 19 will be available in UCA for 22.04, then there'll be more time to figure things out with 24.04, ceph ansible, etc...16:46
noonedeadpunk`This is not mandatory testing in the 2024.2 cycle, and there is no guarantee that the OpenStack 2024.2 release will support Python 3.12.`16:46
noonedeadpunkhttps://governance.openstack.org/tc/reference/runtimes/2024.2.html#python16:46
noonedeadpunkSo actually, OpenStack might not be ready for 3.12 even with Dalmatian16:46
ThiagoCMCNice link!16:47
noonedeadpunkSo really no idea how Canonical is going to do that....16:47
ThiagoCMCSnaps... lol16:48
noonedeadpunklol16:51
noonedeadpunkyou need to pack code anyway16:51
noonedeadpunkand code is not there16:51
noonedeadpunkso if they fix all issues downstream - well... 16:51
noonedeadpunk(and not contribute them back)16:51
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Add variable to globally control notifications enablement  https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/91410018:03
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Implement variables to address oslo.messaging improvements  https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/91414318:03
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Add variable to globally control notifications enablement  https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/91410018:15
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Implement variables to address oslo.messaging improvements  https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/91414318:15
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-plugins master: Rename _oslomsg_configure_* to _oslomsg_*_configure  https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/91414418:23
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Add variable to globally control notifications enablement  https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/91410018:23
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Implement variables to address oslo.messaging improvements  https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/91414318:24
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-plugins master: Leave only unique policies for __mq_policies  https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/91409218:25
noonedeadpunkit would be really nice if someone could have a quick look over this topic ^ and leave some comments before I proceed with more services18:25
noonedeadpunkas I'm not 100% sure about some things I've mentioned earlier, but we kinda need that for 2024.1 I guess...18:26
noonedeadpunkbut that should be done with switching to quorum queues for sure...18:29
noonedeadpunkas all these are breaking changes and would be good to do on a fresh vhost, which is what we're doing with the quorum migration18:30
opendevreviewJames Denton proposed openstack/openstack-ansible-os_skyline master: A new override, `skyline_client_max_body_size`, has been introduced to support large image uploads via the Skyline dashboard. The default value of 1100M supports upstream Ubuntu and Rocky Linux images, but can be increased to support larger images or decreased to encourage the use of the CLI.  https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/18:57
opendevreviewJames Denton proposed openstack/openstack-ansible-os_skyline master: Support large uploads via Skyline  https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/91414918:58
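Based on the change description above, overriding it would look something like this (the value below is only an example; the default is 1100M):
```yaml
# user_variables.yml
skyline_client_max_body_size: 2000M
```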
*** jamesdenton_ is now known as jamesdenton18:58
noonedeadpunkjamesdenton_: would be nice if you could review https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/912333 :)19:36
jamesdenton_oh surrrre19:37
opendevreviewDmitriy Rabotyagov proposed openstack/openstack-ansible-os_skyline master: Add EL distro support  https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/91237019:38
noonedeadpunkfwiw, this is also super close I guess https://review.opendev.org/c/openstack/openstack-ansible/+/85944619:38
noonedeadpunkthe only thing I spotted lately, which I guess is just a skyline bug, is that it's not possible to create networks as a user19:39
jamesdenton_hmm, i can try to verify in our env19:40
opendevreviewJames Denton proposed openstack/openstack-ansible-os_skyline master: Support large uploads via Skyline  https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/91414919:40
noonedeadpunkjamesdenton_: would be quite nice frankly speaking, as I'm a bit /o\ wtf19:41
jamesdenton_do you get an error?19:42
noonedeadpunkwell. when trying to create there're just no AZs19:42
jamesdenton_ahh, and does it error on no AZ selected?19:42
noonedeadpunkand when trying to open some network created in horizon there's an error, yes19:42
jamesdenton_there is a more recent skyline patch for that19:42
noonedeadpunkit just does not let you proceed19:43
jamesdenton_right, it was required with *19:43
jamesdenton_that has been fixed upstream19:43
noonedeadpunkbut you don't get error as admin trying to open networks19:43
noonedeadpunkI guess I tried even with master like.... 3 days ago?19:43
jamesdenton_hmm19:43
noonedeadpunkmaybe missed smth ofc.... like skyline-console... hm19:43
noonedeadpunkgood point19:44
jamesdenton_yes, it's skyline-console19:44
noonedeadpunkfwiw, in 859446 I made skyline on 80/443, while horizon works under /horizon on same ports19:46
jamesdenton_oh nice19:46
noonedeadpunkdoes that sound like fair/logical thing?19:46
jamesdenton_absolutely19:46
noonedeadpunkas I was not able to run skyline under /skyline as they do kinda hardcode things in static under console19:47
jamesdenton_it's wonky19:47
noonedeadpunkas originally wanted to do /horizon and /skyline and then some "default"19:47
noonedeadpunkyeah19:47
noonedeadpunkjamesdenton_: nah, looks the same as before - just upgraded 19:49
jamesdenton_ok hmm19:49
noonedeadpunkwhen I enter a network as user - it says `network 8a59dddc-94c0-45e7-aaea-0b900b582602 could not be found.` while it shows in the list19:50
noonedeadpunkand AZ - no data, while it's a required drop-down :(19:50
jamesdenton_well, I do show the networking availability zone is blank, but it doesn't stop me from creating a network or network+subnet19:50
jamesdenton_that's on the NETWORK wizard, right?19:50
noonedeadpunkum....19:52
jamesdenton_noonedeadpunk https://bugs.launchpad.net/skyline-console/+bug/203501219:52
noonedeadpunkhm19:53
jamesdenton_https://review.opendev.org/c/openstack/skyline-console/+/89579719:53
noonedeadpunkyeah, looking at it already19:54
jamesdenton_i am also working on a patch to provide customizations - but the process itself is sorta ugly19:54
noonedeadpunkI kinda wonder... where it ends up....19:55
noonedeadpunkor we're doing installation in a completely wrong way19:55
jamesdenton_Well, it doesn't lend itself to a lot of customizations, which is the first issue19:57
jamesdenton_you use 'yarn' to build it, which generates the static assets, including js and images19:57
noonedeadpunkbut this should have been released I guess....19:58
jamesdenton_and then you pip install it when done19:58
noonedeadpunkor well....19:58
noonedeadpunkI guess we're just trying to install as python package right now: https://opendev.org/openstack/openstack-ansible-os_skyline/src/branch/master/defaults/main.yml#L87-L8919:59
jamesdenton_I guess you can replace the static image assets, which is what kolla seems to do19:59
jamesdenton_yes19:59
noonedeadpunkso this just uses outdated static files?20:00
hamburgleris the goal to always build it from source instead of the already-built python packages? When I had it deployed I was just using the pip packages20:00
noonedeadpunkhah, well. Now I see I guess....20:01
jamesdenton_well, we're not trying to build the python package, but build the actual skyline react files, I think20:01
noonedeadpunkskyline_console was last updated 2y ago20:01
hamburglerahhh20:01
jamesdenton_We're installing from here AFAIK: https://github.com/openstack/skyline-console20:02
noonedeadpunkyeah, but pip installs just this https://github.com/openstack/skyline-console/tree/master/skyline_console20:02
noonedeadpunkwhich has nothing to do with reality20:02
jamesdenton_hmm20:03
noonedeadpunkok, gotcha, I will play with yarn tomorrow20:04
jamesdenton_cool, a 'yarn run build' did it for me. then i committed all to my fork and installed from that and it seemed to work well20:04
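A rough version of that workflow (an assumption of the exact steps, not the openstack-ansible implementation; requires a recent nodejs and yarn):
```shell
git clone https://github.com/openstack/skyline-console
cd skyline-console
yarn install        # pull the JS dependencies
yarn run build      # regenerate the static assets shipped in skyline_console/
pip install .       # install the package that now carries the fresh assets
```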
hamburglernginx being completely removed for osa deployment?20:05
jamesdenton_it's using nginx20:05
jamesdenton_in fact, i had to bump the upload size20:05
jamesdenton_https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/91414920:06
opendevreviewJames Denton proposed openstack/openstack-ansible-os_skyline master: Support large uploads via Skyline  https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/91414920:06
noonedeadpunkhamburgler: well, we still use it for the repo server. and skyline atm...20:08
noonedeadpunkand skyline has quite some assumptions about nginx....20:08
noonedeadpunkugh, skyline just got slightly more complicated now :D20:10
noonedeadpunkIt takes ages to build it....20:10
noonedeadpunkand frankly - that feels like a target for the repo container.....20:11
noonedeadpunkhm... and where does 'yarn run build' put the result? :)20:14
noonedeadpunkah20:26
noonedeadpunkugh, it really takes a while.... something smart should be done here for sure....20:29
noonedeadpunkok, got it working :) thanks jamesdenton_20:36
noonedeadpunkand, I think I found how to make it work under /skyline....20:40
noonedeadpunkhttps://opendev.org/openstack/skyline-console/src/branch/master/config/webpack.prod.js#L45-L4620:41
hamburglernoonedeadpunk: rabbit changes look good to me :), already running QQ as you know, but I have no issue wiping vhost again anyways to use new updates for fanout etc.20:47
hamburglerwill just do off hours20:48
jamesdenton_woot21:01
opendevreviewMerged openstack/openstack-ansible-os_skyline master: Re-add Zuul testing to the project  https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/91233321:27
jrossernoonedeadpunk: I am sure that my first patches for skyline did the yarn build21:35
jrosserthey might be in the history21:35
jrosserah here we go https://github.com/jrosser/openstack-ansible-os_skyline/commit/82b1f5a5e6eff9df441c96677e0aa6d578bc8552#diff-7ae20663f88c2ee2e49e28cecf7c0eeb99efdb53ec0faf27c0a50ce3dcaf237022:12
