*** hamalq has quit IRC | 03:56 | |
*** sboyron has joined #opendev-meeting | 07:04 | |
*** hashar has joined #opendev-meeting | 09:26 | |
*** hashar has quit IRC | 12:27 | |
*** hashar has joined #opendev-meeting | 12:53 | |
fungi | #startmeeting opendev-maint | 12:59 |
---|---|---|
openstack | Meeting started Fri Nov 20 12:59:28 2020 UTC and is due to finish in 60 minutes. The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot. | 12:59 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 12:59 |
*** openstack changes topic to " (Meeting topic: opendev-maint)" | 12:59 | |
openstack | The meeting name has been set to 'opendev_maint' | 12:59 |
fungi | #status notice The Gerrit service at review.opendev.org will be offline starting at 15:00 UTC (roughly two hours from now) for a weekend upgrade maintenance: http://lists.opendev.org/pipermail/service-announce/2020-October/000012.html | 13:01 |
openstackstatus | fungi: sending notice | 13:01 |
-openstackstatus- NOTICE: The Gerrit service at review.opendev.org will be offline starting at 15:00 UTC (roughly two hours from now) for a weekend upgrade maintenance: http://lists.opendev.org/pipermail/service-announce/2020-October/000012.html | 13:01 | |
openstackstatus | fungi: finished sending notice | 13:04 |
fungi | #status notice The Gerrit service at review.opendev.org will be offline starting at 15:00 UTC (roughly one hour from now) for a weekend upgrade maintenance: http://lists.opendev.org/pipermail/service-announce/2020-October/000012.html | 13:59 |
openstackstatus | fungi: sending notice | 13:59 |
-openstackstatus- NOTICE: The Gerrit service at review.opendev.org will be offline starting at 15:00 UTC (roughly one hour from now) for a weekend upgrade maintenance: http://lists.opendev.org/pipermail/service-announce/2020-October/000012.html | 13:59 | |
openstackstatus | fungi: finished sending notice | 14:02 |
clarkb | morning! | 14:25 |
clarkb | fungi: I think I'll go ahead and put gerrit and zuul in the emergency file now. | 14:28 |
clarkb | and that's done. Please double check I got all the hostnames correct (digits and openstack vs opendev etc) | 14:31 |
clarkb | and when you're done with that do you think we should do the belts and suspenders route of disabling the ssh keys for zuul there too? | 14:31 |
clarkb | fungi: also do you want to start a root screen on review? maybe slightly wider than normal :P | 14:38 |
fungi | done, `screen -x 123851` | 14:40 |
clarkb | and attached | 14:41 |
fungi | not sure what disabling zuul's ssh keys will accomplish, can you elaborate? | 14:41 |
clarkb | it will prevent zuul jobs from ssh'ing into bridge and making unexpected changes to the system should something "odd" happen | 14:41 |
clarkb | I think gerrit being down will effectively prevent that even if zuul managed to turn back on again though | 14:41 |
fungi | oh, there, i guess we can | 14:41 |
fungi | i thought you meant its ssh key into gerrit | 14:41 |
clarkb | sorry no ~zuul on bridge | 14:42 |
fungi | sure, i can do that | 14:44 |
clarkb | fungi: just move aside authorized_keys is probably easiest? | 14:44 |
fungi | as ~zuul on bridge i did `mv .ssh/{,disabled_}authorized_keys` | 14:45 |
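A minimal sketch of that belt-and-suspenders step, assuming the standard ~zuul paths on bridge (restoring access is just the reverse move):

```bash
# Disable Zuul's ssh access to bridge by moving its authorized_keys aside.
sudo -u zuul mv /home/zuul/.ssh/{,disabled_}authorized_keys
# To re-enable after the maintenance:
sudo -u zuul mv /home/zuul/.ssh/{disabled_,}authorized_keys
```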
clarkb | fungi: can you double check the emergency file contents too (just making sure we've got this correct on both sides then that way if one doesn't work as expected we've got a backup) | 14:45 |
clarkb | my biggest concern is mixing up a digit, e.g. 01 instead of 02, or mixing openstack and opendev in hostnames | 14:45 |
clarkb | I think I got it right though | 14:46 |
fungi | hostnames in the emergency file look correct, yes, was just checking that | 14:46 |
clarkb | thanks | 14:47 |
clarkb | I've just updated the maintenance file that apache will serve from the copy in my homedir | 14:47 |
fungi | checked them against our inventory in system-config | 14:47 |
clarkb | I plan to make my first cup of tea during the first gc pass :) | 14:53 |
fungi | yeah, i'm switching computers now and will get more coffee once that's underway | 14:58 |
clarkb | fungi: I've edited the vhost file on review. When you're at the other computer I think we check that then restart apache at 1500? | 14:58 |
clarkb | then we can start turning off gerrit and zuul | 14:58 |
fungi | lgtm | 15:00 |
fungi | ready for me to reload apache? | 15:00 |
clarkb | my clock says 1500 now I think so | 15:00 |
fungi | done | 15:01 |
fungi | maintenance page appears for me | 15:01 |
clarkb | me too | 15:01 |
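The maintenance-page swap here is just an Apache vhost edit plus a reload; a hedged sketch of the verification steps (vhost location and the exact page are assumptions, not the commands actually run):

```bash
sudo apache2ctl configtest                        # sanity-check the edited vhost
sudo systemctl reload apache2                     # start serving the maintenance page
curl -sI https://review.opendev.org/ | head -n1   # confirm the placeholder responds
```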
clarkb | next we can stop zuul and gerrit. I don't think the order matters too much | 15:01 |
fungi | status notice or status alert? wondering if we want to leave people's irc topics altered all weekend given there's also a maintenance page up | 15:01 |
clarkb | ya lets not change the topics | 15:02 |
clarkb | if we get too many questions we can flip to topic swapping | 15:02 |
fungi | #status notice The Gerrit service at review.opendev.org is offline for a weekend upgrade maintenance, updates will be provided once it's available again: http://lists.opendev.org/pipermail/service-announce/2020-October/000012.html | 15:02 |
openstackstatus | fungi: sending notice | 15:02 |
-openstackstatus- NOTICE: The Gerrit service at review.opendev.org is offline for a weekend upgrade maintenance, updates will be provided once it's available again: http://lists.opendev.org/pipermail/service-announce/2020-October/000012.html | 15:03 | |
clarkb | if you get the gerrit docker compose down I'll do zuul | 15:03 |
fungi | i guess we should save queues in zuul? | 15:03 |
clarkb | eh | 15:03 |
fungi | and restore at the end of the maintenance? or no? | 15:03 |
clarkb | I guess we can? | 15:03 |
clarkb | I hadn't planned on it | 15:04 |
clarkb | given the long period of time between states I wasn't entirely sure if we wanted to do that | 15:04 |
fungi | i guess don't worry about it. we can include messaging reminding people to recheck changes with no zuul feedback on them | 15:04 |
fungi | gerrit is down now | 15:04 |
fungi | i'll comment out crontab entries on gerrit next | 15:05 |
openstackstatus | fungi: finished sending notice | 15:05 |
*** corvus has joined #opendev-meeting | 15:06 | |
clarkb | `sudo ansible-playbook -v -f 50 /home/zuul/src/opendev.org/opendev/system-config/playbooks/zuul_stop.yaml` <- is what I'll run on bridge to stop zuul | 15:06 |
clarkb | actually I'll start a root screen there and run it there without the sudo | 15:06 |
fungi | i've got one going on bridge now | 15:07 |
fungi | if you just want to join it | 15:07 |
clarkb | oh I just started one too. I'll join yours | 15:07 |
fungi | ahh, okay, screen -list didn't show any yet when i created this one, sorry | 15:07 |
clarkb | hahaha we put them in the emergency file so the playbook doesn't work | 15:08 |
clarkb | I'll manually stop them | 15:08 |
fungi | oh right | 15:08 |
fungi | heh | 15:08 |
clarkb | scheduler and web are done. Now to do a for loop for the mergers and executors | 15:09 |
fungi | i'll double-check the gerrit.config per step 1.6 | 15:11 |
corvus | clarkb: could probably still do "ansible -m shell ze*"; or edit the playbook to remove !disabled | 15:12 |
fungi | serverId, enableSignedPush, and change.move are still in there, though you did check them after we restarted gerrit earlier in the week too | 15:12 |
corvus | but i bet you already started the loop | 15:12 |
clarkb | yup looping should be done now if anyone wants to check | 15:13 |
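Because the hosts were in the emergency (disabled) list, the stop playbook skipped them; corvus's ad-hoc alternative would look roughly like this (service names and the `-b` privilege escalation are assumptions):

```bash
# Run from bridge: stop executors and mergers directly, bypassing the playbook.
ansible 'ze*' -b -m shell -a 'systemctl stop zuul-executor'
ansible 'zm*' -b -m shell -a 'systemctl stop zuul-merger'
```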
fungi | i'll go ahead and start the db dump per step 1.7.1, estimated time is 10 minutes | 15:13 |
clarkb | fungi: ya I expected that one to be fine after our test but didn't remove it as it seemed like a good sanity check | 15:13 |
fungi | mysqldump command is currently underway in the root screen session on review.o.o | 15:14 |
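The dump in step 1.7.1 is roughly of this shape (database name, credentials file, and output path are assumptions, not the exact command run):

```bash
time mysqldump --defaults-file=/root/.my.cnf --single-transaction reviewdb \
  | gzip > /home/gerrit2/2.13-backup-$(date +%s).sql.gz
```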
fungi | in parallel i'll start the rsync update for our 2.13 ~gerrit2 backup in a second screen window | 15:15 |
clarkb | fungi: we don't want to start that until the db dump is done? | 15:15 |
clarkb | that way the db dump is copied properly too | 15:15 |
fungi | oh, fair, since we're dumping into the homedir | 15:16 |
fungi | yeah, i'll wait | 15:16 |
fungi | i guess we could have dumped into the /mnt/2020-11-20_backups volume instead | 15:16 |
clarkb | oh good point | 15:16 |
clarkb | oh well | 15:16 |
fungi | it'll be finished any minute now anyway, based on my earlier measurements | 15:22 |
fungi | mysqldump seems to have completed fine | 15:23 |
clarkb | ya I think we can rsync now | 15:23 |
fungi | 1.7gb compressed | 15:23 |
clarkb | is that size in line with our other backups? | 15:24 |
fungi | rsync update is underway now, i'll compare backup sizes in a second window | 15:24 |
clarkb | yes it is | 15:24 |
clarkb | I checked outside of the screen | 15:24 |
fungi | yeah, they're all roughly 1.7gb except the old 2.13-backup-1505853185.sql.gz from 2017 | 15:25 |
fungi | which we probably no longer need | 15:25 |
fungi | in theory this rsync should be less than 5 minutes | 15:26 |
fungi | though could be longer because of the db dump(s)/logrotate i suppose | 15:26 |
clarkb | even if it was a full sync we'd still be on track for our estimated time target | 15:27 |
fungi | yeah, fresh rsync starting with nothing took ~25 minutes | 15:27 |
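A sketch of the incremental homedir backup being discussed (source and destination paths are assumptions):

```bash
time rsync -a --delete /home/gerrit2/ /mnt/2020-11-20_backups/gerrit2-2.13/
echo $?   # confirm a clean exit before moving on
```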
clarkb | I think the gerrit caches and git dirs change a fair bit over time | 15:34 |
clarkb | in addition to the db and log cycling | 15:34 |
fungi | and it's done | 15:35 |
fungi | yeah periodic git gc probably didn't help either | 15:35 |
fungi | anybody want to double-check anything before we start the aggressive git gc (step 2.1)? | 15:36 |
clarkb | echo $? otherwise no I can't think of anything | 15:36 |
fungi | yeah, i don't normally expect rsync to silently fail | 15:37 |
fungi | but it exited 0 | 15:37 |
clarkb | yup lgtm | 15:37 |
clarkb | I think we can gc now | 15:37 |
fungi | i have the gc staged in the screen session now | 15:37 |
fungi | and it's running | 15:37 |
clarkb | after the gc we can spot check that everything is still owned by gerrit2 | 15:38 |
fungi | estimated time at this step is 40 minutes, so you can go get your tea | 15:38 |
clarkb | yup I'm gonna go start the kettle now. thanks | 15:38 |
fungi | i don't see any obvious errors streaming by anyway | 15:38 |
clarkb | keeping timing notes on the etherpad too because I'm curious to see how close the estimates are, particularly for today | 15:38 |
fungi | good call, and yeah that's more or less why i left the time commands in most of these | 15:39 |
*** melwitt has joined #opendev-meeting | 15:44 | |
fungi | probably ~15 minutes remaining | 16:01 |
clarkb | I'm back fwiw just monitoring over tea and toast | 16:01 |
fungi | estimated 5 minutes remaining on this step | 16:13 |
clarkb | it is down to 2 repos | 16:13 |
clarkb | of course one of them is nova :) | 16:13 |
fungi | the other is presumably either neutron or openstack-manuals | 16:13 |
clarkb | it was airshipctl | 16:13 |
fungi | oh | 16:13 |
fungi | wow | 16:13 |
clarkb | I think it comes down to how find and xargs sort | 16:13 |
clarkb | I think openstack manuals was the third to last | 16:14 |
fungi | looks like we're down to just nova now | 16:14 |
fungi | here's hoping these rebuilt gerrit images which we haven't tested upgrading with are still fine | 16:15 |
clarkb | I'm not too worried about that, I did a bunch of local testing with our images over the last few months and the images moved over time and were always fine | 16:16 |
fungi | yeah, the functional exercises we put them through should suffice for catching egregious problems with them, at the very least | 16:17 |
clarkb | then ya we also put them through the fake prod marathons | 16:17 |
clarkb | before we proceed to the next step it appears that the track upstream cron fired? | 16:20 |
clarkb | fungi: did that one get disabled too? | 16:20 |
fungi | and done | 16:20 |
fungi | i thought i disabled them both, checking | 16:20 |
fungi | oh... it's under root's crontab not gerrit2's | 16:21 |
clarkb | we should disable that cron then kill the running container for it | 16:21 |
clarkb | I think the command is kill | 16:22 |
fungi | like that? or is it docker kill? | 16:22 |
clarkb | to line up with ps | 16:22 |
fungi | yup | 16:22 |
fungi | okay, it's done | 16:22 |
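A sketch of what "kill the running container" amounts to here; the name filter is hypothetical, based on the track-upstream job being discussed:

```bash
docker ps                                           # list running containers, lines up with ps on the host
docker kill $(docker ps -q --filter name=track-upstream)
```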
clarkb | we should keep an eye on those things because they use the explicit docker image iirc | 16:22 |
clarkb | the change updates the docker image version in hiera which will apply to all those scripts | 16:23 |
clarkb | granted they don't really run gerrit things, just jeepyb in gerrit, so it's probably fine for them to use the old image accidentally | 16:23 |
fungi | the only remaining cronjobs for root are bup, mysqldump, and borg(x2) | 16:23 |
clarkb | ok I think we can proceed? | 16:23 |
fungi | and confirmed, the cronjobs for gerrit2 are both disabled still | 16:24 |
fungi | we were going to check ownership on files in the git tree | 16:24 |
clarkb | ++ | 16:24 |
fungi | everything looks like it's still gerrit2, even stuff with timestamps in the past hour | 16:25 |
clarkb | that spot check looks good to me | 16:25 |
fungi | so i think we're safe (but also we change user to gerrit2 in our gc commands so it shouldn't be a problem any longer) | 16:25 |
clarkb | ya just a double check since we had problems with that on -test before we updated the gc commands | 16:26 |
clarkb | I think it's fine and we can proceed | 16:26 |
fungi | does that look right? | 16:26 |
clarkb | yup updated to opendevorg/gerrit:2.14 | 16:26 |
clarkb | on both entries in the docker compose file | 16:27 |
fungi | okay, will pull with it now | 16:27 |
fungi | how do we list them before running with them? | 16:27 |
clarkb | docker image list | 16:27 |
fungi | i need to make myself a cheatsheet for container stuff, clearly | 16:27 |
fungi | opendevorg/gerrit 2.14 39de77c2c8e9 22 hours ago 676MB | 16:28 |
fungi | that seems right | 16:28 |
clarkb | yup | 16:28 |
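For the cheatsheet fungi mentions, the pull-and-verify step looks roughly like this (the compose directory is an assumption):

```bash
cd /etc/gerrit-compose
docker-compose pull                   # fetch the image tag now pinned in docker-compose.yaml
docker image list opendevorg/gerrit   # confirm the expected tag and image id
```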
fungi | ready to init? | 16:28 |
clarkb | I guess so :) | 16:29 |
fungi | and it's running | 16:29 |
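The init itself is run offline inside the container rather than by starting the service; a hedged sketch of its shape (service name, site path, and flags are assumptions):

```bash
time docker-compose run --rm gerrit \
  java -jar /var/gerrit/bin/gerrit.war init -d /var/gerrit --batch --no-auto-start
```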
clarkb | around now is when we would expect this one to finish, but also this was the one with the least consistent timing | 16:37 |
fungi | taking longer than our estimate, yeah | 16:37 |
clarkb | we theorized its due to hashing the http passwds | 16:37 |
clarkb | and the input for that has changed a bit recently | 16:38 |
clarkb | (but maybe we also need entropy? I dunno) | 16:38 |
fungi | should be far fewer of those now though | 16:38 |
corvus | it seems pretty idle | 16:39 |
clarkb | ya top isn't showing it as busy | 16:39 |
clarkb | the first time we ran it it took just under 30 minutes | 16:40 |
fungi | could also be that the server instance or volume or (more likely?) trove instance we used on review-test performed better for some reason | 16:40 |
fungi | the idleness of the server suggests to me that maybe this is the trove instance being sluggish | 16:41 |
corvus | | 460106 | gerrit2 | 10.223.160.46:56540 | reviewdb | Query | 716 | copy to tmp table | ALTER TABLE change_messages ADD real_author INT | | 16:41 |
corvus | | Id | User | Host | db | Command | Time | State | Info | | 16:41 |
corvus | ^ column headers | 16:41 |
clarkb | ah ok so it is the db side then? | 16:42 |
corvus | fungi: so yeah, looks like | 16:42 |
corvus | yep that's "show full processlist" | 16:42 |
corvus | in mysql | 16:42 |
mordred | yeah - sounds like maybe the old db is tuned/sized differently | 16:42 |
mordred | or just on an old host or something | 16:42 |
* fungi blames mordred since he created the trove instance for review-test ;) | 16:42 | |
mordred | totally fair :) | 16:42 |
clarkb | this is one reason why we allocated tons of extra time :) | 16:42 |
fungi | s/blames/thanks/ | 16:43 |
clarkb | as long as we can explain it (and sounds like we have) I'm happy | 16:43 |
clarkb | though it's a bit disappointing we're investing in the db when we're gonna discard it shortly :) | 16:43 |
mordred | right? | 16:44 |
fungi | i'll just take it as an opportunity to catch up on e-mail in another terminal | 16:44 |
corvus | there should be a word for blame/thanks | 16:46 |
fungi | the germans probably have one | 16:46 |
corvus | mordred: _____ you very much for setting up that trove instance! | 16:47 |
fungi | deutsche has all sorts of awesome words english is missing | 16:47 |
mordred | schadendanke perhaps? (me making up new words) | 16:48 |
fungi | doch (the positive answer to a negative question) is in my opinion the greatest example of potentially solvable vagueness in english | 16:48 |
mordred | yup | 16:48 |
corvus | omg i need that in my life | 16:49 |
mordred | it fills the "no, yes it is" | 16:49 |
mordred | role | 16:49 |
fungi | somehow english, while a germanic language, decided to just punt on that | 16:49 |
mordred | yup | 16:49 |
mordred | I blame the normans | 16:49 |
corvus | | 460106 | gerrit2 | 10.223.160.46:56540 | reviewdb | Query | 1227 | rename result table | ALTER TABLE change_messages ADD real_author INT | | 16:50 |
fungi | mordred: sshhhh, ttx might be listening | 16:50 |
corvus | changed from "copy" to "rename" sounds like progress | 16:50 |
corvus | | 460106 | gerrit2 | 10.223.160.46:56540 | reviewdb | Query | 5 | copy to tmp table | ALTER TABLE patch_comments ADD real_author INT | | 16:50 |
corvus | new table | 16:50 |
corvus | i wonder what the relative sizes of those 2 tables are | 16:51 |
mordred | also - in newer mysql that should be able to be an online operation | 16:51 |
mordred | but apparently not in the version we're running | 16:51 |
mordred | so it's doing the alter by making a new table with the new column added, copying all the data to the new table and deleting the old | 16:52 |
mordred | yay | 16:52 |
clarkb | ya our mysql is old. we used old mysql on review-test and it was fine so I didn't think we should need to upgrade first | 16:52 |
fungi | maybe the mysql version for the review-test trove instance was newer than for review? | 16:52 |
clarkb | fungi: I'm 99% sure I checked that | 16:52 |
clarkb | and they matched | 16:52 |
fungi | ahh, so that did get checked | 16:52 |
clarkb | but maybe I misread the rax web ui or something | 16:52 |
mordred | maybe they both did the copy and the new one is just on a better hypervisor | 16:52 |
fungi | or the dump/src process optimizes the disk layout a lot compared to a long-running server | 16:53 |
clarkb | I'm trying to identify which schema makes this change but the way they do migrations doesn't make that easy for all cases | 16:53 |
clarkb | they guice inject db specific migrations from somewhere | 16:53 |
clarkb | I can't find the somewhere | 16:54 |
clarkb | anyway it's proceeding, I'll chill | 16:54 |
mordred | fungi: yeah - that's also potentially the case | 16:54 |
mordred | clarkb: they guice inject *everything* | 16:54 |
clarkb | I don't think the notedb conversion will be very affected by that either since it's all db reads | 16:54 |
clarkb | so hopefully the very long portion of the upgrade continues to just be long and not longer | 16:55 |
corvus | oof, it also looks like they're doing one-at-a-time | 16:55 |
corvus | | 460106 | gerrit2 | 10.223.160.46:56540 | reviewdb | Query | 15 | copy to tmp table | ALTER TABLE patch_comments ADD unresolved CHAR(1) DEFAULT 'N' NOT NULL CHECK (unresolved IN ('Y','N')) | | 16:55 |
corvus | second update to same table | 16:55 |
corvus | which, to be fair, is the way we usually do it too | 16:55 |
corvus | but now i feel more pressure to do upgrade rollups :) | 16:56 |
mordred | yah - to both | 16:56 |
fungi | "we" being zuul/nodepool? | 16:56 |
fungi | er, i guess not nodepool as it doesn't use an rdbms | 16:56 |
clarkb | ya still having no luck figuring out where the Schema_13X.java files map to actual sql stuff | 17:04 |
clarkb | I wonder if it's automagic based on their table defs somewhere | 17:04 |
corvus | fungi: yes (also openstack) | 17:04 |
clarkb | I'm just trying to figure out what sort of progress we're making relative to the stack of schema migrations. Unfortunately it prints out all the ones it will do at the beginning then does them so you don't get that insight | 17:05 |
fungi | i would not be surprised if these schema migrations aren't somehow generated at runtime | 17:05 |
mordred | corvus: I think nova decided to do rollups when releases are cut - so if you upgrade from icehouse to juno it would be a rollup, but if you're doing CD between icehouse and juno it would be a bunch of individual ones | 17:06 |
mordred | which seems sane - I'm not sure how that would map into zuul - but maybe something to consider in the v4/v5 boundaries | 17:06 |
corvus | mordred: ++ | 17:07 |
fungi | yay! | 17:07 |
fungi | it's doing the data migrations now | 17:07 |
clarkb | ok cool | 17:08 |
fungi | looks like it's coming in around 40 minutes? | 17:08 |
clarkb | seems like things may be slower but not catastrophically so | 17:08 |
fungi | (instead of 8) | 17:08 |
clarkb | 142 is the hashing schema change iirc | 17:09 |
clarkb | yup confirmed that one has content in the schema version java file because they hash java side | 17:11 |
clarkb | corvus: is it doing interesting db things at the moment? I wonder if it is also doing some sort of table update for the hashed data | 17:19 |
clarkb | rather than just inserting records | 17:19 |
fungi | looks like there's a borg backup underway, that could also be stealing some processor time... though currently the server is still not all that busy | 17:19 |
clarkb | ya I think it must be busy with mysql again | 17:19 |
mordred | db schema upgrades are the boringest | 17:19 |
clarkb | also note that we had originally thought that the notedb conversion would run overnight. Based on how long this is taking that may be the case again, but we've already built in that buffer so I don't think we need to roll back or anything like that yet | 17:20 |
clarkb | just need to be patient I guess (something I am terrible at accomplishing) | 17:21 |
corvus | clarkb: "UPDATE account_external_ids SET" | 17:21 |
fungi | that looks like what we expect, yeah | 17:21 |
corvus | then some personal info; it's doing lots of those individually | 17:21 |
clarkb | yup | 17:21 |
clarkb | db.accountExternalIds().upsert(newIds); <- is the code that should line up to | 17:22 |
clarkb | oh you know what | 17:22 |
fungi | yeah this is the stage where we believe it's replacing plaintext rest api passwords with bcrypt2 hashes | 17:22 |
clarkb | it's updating every account even if they didn't have a hashed password | 17:22 |
corvus | yes | 17:22 |
corvus | i just caught it doing one :) | 17:22 |
clarkb | List<AccountExternalId> newIds = db.accountExternalIds().all().toList(); | 17:23 |
corvus | password='bcrypt:... | 17:23 |
clarkb | rather than finding the ones with a password and only updating them | 17:23 |
clarkb | I guess that explains why this is slow | 17:23 |
fungi | is it hashing null for 99.9% of the accounts? | 17:23 |
clarkb | no it only hashes if the previous value was not null | 17:23 |
fungi | or just skipping them once it realizes they have no password? | 17:23 |
clarkb | but it is still upserting them back again | 17:23 |
clarkb | rather than skipping them | 17:23 |
corvus | it's doing an update to set them to null | 17:23 |
fungi | ahh, okay that's better than, you know, the other thing | 17:23 |
corvus | (which mysql may optimize out, but it'll at least have to go through the parser and lookup) | 17:24 |
clarkb | corvus: do you see sequential ids? if so that may give us a sense for how long this will take. I think we have ~36k ids | 17:24 |
corvus | ids seem random | 17:24 |
corvus | may be sorted by username though: it's at "mt.." | 17:25 |
corvus | now p.. | 17:25 |
fungi | so maybe ~halfway | 17:25 |
corvus | hah, i saw 'username:rms...' and started, then moved the rest of the window in view to see 'username:rmstar' | 17:26 |
corvus | mysql is idle | 17:26 |
fungi | and done | 17:26 |
clarkb | it reports done on the java side | 17:26 |
fungi | exited 0 | 17:27 |
clarkb | yup from what we can see it lgtm | 17:27 |
fungi | anything we need to check before proceeding with 2.15? | 17:27 |
clarkb | I think we can proceed and just accept these will be slower. Then expect notedb to run overnight again | 17:27 |
fungi | 57m11.729s was the reported runtime | 17:27 |
clarkb | ya I put that on the etherpad | 17:28 |
fungi | updated compose file for 2.15, shall i pull? | 17:28 |
clarkb | yes please pull | 17:28 |
fungi | opendevorg/gerrit 2.15 bfef80bd754d 23 hours ago 678MB | 17:29 |
fungi | looks right | 17:29 |
clarkb | yup | 17:29 |
fungi | ready to init 2.15? | 17:29 |
clarkb | I'm ready | 17:29 |
fungi | it's running | 17:30 |
clarkb | schema 144 is the writing to external ids in all users | 17:31 |
clarkb | 143 is opaque due to guice | 17:31 |
clarkb | anyway I shall continue to practice patience | 17:32 |
* fungi finds a glass full of opaque juice | 17:32 | |
clarkb | the java is very busy on 144 | 17:33 |
clarkb | (as expected given it's writing back to git) | 17:33 |
fungi | huh, it's doing a git gc now | 17:34 |
clarkb | only on all-users | 17:34 |
fungi | of all-users i guess | 17:34 |
clarkb | ya | 17:34 |
mordred | busy busy javas | 17:34 |
clarkb | you still need it for everything else to speed up the reindexing aiui | 17:34 |
fungi | sure | 17:35 |
fungi | this one's running long too, compared to our estimate | 17:38 |
fungi | but i have a feeling we're still going to wind up on schedule when we get to the checkpoint | 17:38 |
clarkb | 151 migrates groups into notedb I think | 17:39 |
fungi | we baked in lots of scotty factor | 17:40 |
clarkb | ya I think it "helps" that there was no way we thought we'd get everything done in one ~10 hour period. So once we assume an overnight being able to slot a very slow process in there makes for a lot of wiggle room | 17:40 |
clarkb | mordred: you've just reminded me that mandalorian has a new episode today. I know what I'm doing during the notedb conversion | 17:42 |
clarkb | busy busy jawas | 17:42 |
mordred | haha. I'm waiting until the whole season is out | 17:42 |
fungi | and done | 17:42 |
clarkb | just under 13 minutes | 17:42 |
fungi | 12m47.295s | 17:43 |
fungi | anybody want to check anything before i work on the 2.16 upgrade? | 17:43 |
clarkb | I don't think so | 17:43 |
fungi | proceeding | 17:44 |
fungi | good to pull images? | 17:44 |
clarkb | 2.16 lgtm I think you should pull | 17:44 |
fungi | opendevorg/gerrit 2.16 aacb1fac66de 24 hours ago 681MB | 17:44 |
fungi | also looks right | 17:44 |
clarkb | yup | 17:45 |
fungi | ready to init 2.16? | 17:45 |
clarkb | ++ | 17:45 |
fungi | running | 17:45 |
fungi | time estimate is 7 minutes, no idea how accurate that will end up being | 17:45 |
*** hashar has quit IRC | 17:46 | |
* mordred is excited | 17:46 | |
fungi | after this we have another aggressive git gc followed by an offline reindex, then we'll checkpoint the db and homedir in preparation for the notedb migration | 17:47 |
fungi | this theoretically gives us a functional 2.16 pre-notedb state we can roll back to in a pinch | 17:47 |
clarkb | then depending on what time it is we'll do 3.0, 3.1, and 3.2 this evening or tomorrow | 17:47 |
fungi | yup | 17:48 |
clarkb | sort of related, I feel like notedb is sort of a misleading name. None of the db stuff lives in what git notes thinks are notes as far as I can tell | 17:49 |
clarkb | it's just special refs | 17:49 |
clarkb | this had me very confused when I first started looking at the upgrade | 17:49 |
fungi | yeah, i expect that was an early name which stuck around long after they decided using actual git notes for it was suboptimal | 17:50 |
*** hamalq has joined #opendev-meeting | 17:51 | |
fungi | i think we'll make up some of the lost time in our over-estimate of the checkpoint steps | 17:53 |
fungi | glad we weren't late starting | 17:54 |
clarkb | ++ I never want to wake up early but having the extra couple of hours tends to be good for buffering ime | 17:56 |
fungi | happy to anchor the early hours while your tea and toast kick in | 17:56 |
fungi | in exchange, it's your responsibility to take up my slack later when my beer starts to kick in | 17:57 |
clarkb | ha | 17:58 |
fungi | sporadic java process cpu consumption at this stage | 17:59 |
clarkb | migration 168 and 170 are opaque due to guice. 169 is more group notedb stuff | 18:01 |
*** hamalq has quit IRC | 18:01 | |
clarkb | not sure which one we are on now as things scrolled by | 18:02 |
clarkb | oh did it just finish? | 18:02 |
clarkb | oh interesting | 18:02 |
*** hamalq has joined #opendev-meeting | 18:02 | |
clarkb | the migrations are done but now it is reindexing? | 18:02 |
fungi | no, i was scrolling back in the screen buffer to get a feel for where we are | 18:02 |
fungi | it's been at "Index projects in version 4 is ready" for a while | 18:03 |
clarkb | ya worrying about what it may be doing since it said 170 was done right? | 18:03 |
fungi | though maybe it's logging | 18:03 |
fungi | yeah, it got through the db migrations | 18:03 |
fungi | and started an offline reindex apparently | 18:03 |
fungi | there it goes | 18:03 |
fungi | done finally | 18:03 |
clarkb | ya that was expected for projects and accounts and groups | 18:03 |
clarkb | because accounts and groups and project stuff go into notedb but not changes | 18:04 |
fungi | 18m19.111s | 18:04 |
clarkb | yup etherpad updated | 18:04 |
clarkb | exit code is zero I think we can reindex | 18:04 |
fungi | ready to do a full aggressive git gc now? | 18:04 |
clarkb | er sorry not reindex | 18:04 |
clarkb | gc | 18:04 |
clarkb | getting ahead of myself | 18:04 |
fungi | yup | 18:05 |
fungi | okay, running | 18:05 |
fungi | 41 minutes estimated | 18:05 |
clarkb | the next reindex is a full reindex because we've done skip level upgrades | 18:05 |
clarkb | with no intermediate online reindexing | 18:05 |
fungi | should be a reasonably accurate estimate since no trove interaction | 18:05 |
clarkb | and we did one prior to the upgrades which was close in time too | 18:06 |
fungi | yup | 18:07 |
*** gouthamr_ has quit IRC | 18:30 | |
clarkb | one thing the delete plugin lets you do, which I didn't have time to test, is to archive repos | 18:30 |
clarkb | it will be nice to test that a bit more for all of the dead repos we've got and see if that improves things like reindexing | 18:30 |
*** yoctozepto has quit IRC | 18:37 | |
*** yoctozepto has joined #opendev-meeting | 18:38 | |
clarkb | down to nova and all users now | 18:40 |
fungi | yup | 18:41 |
*** gouthamr_ has joined #opendev-meeting | 18:46 | |
fungi | done | 18:48 |
clarkb | looks happy | 18:48 |
clarkb | time for the reindex now? | 18:48 |
fungi | anything we should check before starting the offline reindex? | 18:48 |
clarkb | I don't think so. Unless you want to check file perms again | 18:48 |
fungi | we want to stick with 16 threads? | 18:48 |
clarkb | yes | 18:49 |
clarkb | I think so anyway | 18:49 |
fungi | file perms look okay still | 18:49 |
clarkb | one of the things brought up on the gerrit mailing list is that threads for these things use memory and if you overdo the threads you oom | 18:49 |
clarkb | so sticking with what we know shouldn't oom seems like a good idea | 18:49 |
clarkb | it's 24 threads on the notedb conversion but 16 on reindexing | 18:49 |
fungi | yeah, i'm fine with sticking with the count we tested with | 18:49 |
fungi | okay, it's running | 18:50 |
fungi | estimated time to completion is 35 minutes | 18:50 |
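The offline reindex now underway is roughly of this shape (paths and exact invocation are assumptions; the 16-thread count matches the discussion above):

```bash
time docker-compose run --rm gerrit \
  java -jar /var/gerrit/bin/gerrit.war reindex -d /var/gerrit --threads 16
```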
fungi | gc time was ~43 minutes so close to our estimate. i didn't catch the actual time output | 18:51 |
clarkb | oh I didn't look, I should've | 18:52 |
fungi | for those watching the screen session, the java exceptions are about broken changes which are expected | 18:52 |
clarkb | ya we reproduced the unhappy changes on 2.13 prod | 18:52 |
clarkb | it's just that newer gerrit complains more | 18:52 |
fungi | stems from some very old/early history lingering in the db | 18:53 |
clarkb | it is about a quarter of the way through now so on track for ~40 minutes | 19:01 |
fungi | fairly close to our estimate in that case | 19:01 |
clarkb | just over 50% now | 19:14 |
clarkb | just crossed 90% | 19:32 |
clarkb | down to the last hundred or so changes to index now | 19:37 |
fungi | and done | 19:37 |
clarkb | ~48minutes | 19:38 |
fungi | 47m51.046s yeah | 19:38 |
clarkb | 2.16 db dump now? | 19:38 |
fungi | yup, ready for me to start it? | 19:38 |
clarkb | yes I am | 19:38 |
fungi | and it's running | 19:39 |
clarkb | then we backup again, then start the notedb offline transition | 19:39 |
clarkb | such excite | 19:39 |
fungi | it's all over my screen | 19:42 |
fungi | (literally) | 19:42 |
ianw | o/ | 19:43 |
ianw | sounds like it's going well | 19:43 |
clarkb | ianw: slower than expected but no major issues otherwise | 19:43 |
* fungi hands everything off to ianw | 19:43 | |
fungi | [just kidding!] | 19:44 |
clarkb | we're at our 2.16 checkpoint step. backing up the db then copying gerrit2 homedir aside | 19:44 |
clarkb | the next step after the checkpoint is to run the offline notedb migration | 19:44 |
* ianw recovers after heart attack | 19:44 | |
fungi | yeah, i think we're basically on schedule, thanks to minor miracles of planning | 19:44 |
clarkb | which is good because I'm getting hungry for lunch and the notedb migration step is the perfect time for that :) | 19:44 |
fungi | other than the trove instance being slower than what we benchmarked with review-test, it's been basically uneventful. no major issues, just tests of patience | 19:45 |
ianw | clarkb: one good thing about being in .au is the Mandalorian comes out at 8pm | 19:45 |
clarkb | ianw: hacks | 19:45 |
* fungi relocates to a different hemisphere | 19:46 | |
fungi | i hear there are plenty of island nations on that side of the equator which would be entirely compatible with my lifestyle | 19:47 |
clarkb | internet connectivity tends to be the biggest issue | 19:48 |
fungi | i can tolerate packet loss and latency | 19:48 |
fungi | okay, db dump is done | 19:49 |
fungi | rsync next | 19:49 |
fungi | ready to run? | 19:49 |
clarkb | let me check the filesize | 19:49 |
clarkb | still 1.7gb lgtm | 19:50 |
clarkb | I think you can run the rsync now | 19:50 |
fungi | oh, good call, thanks | 19:50 |
fungi | running | 19:50 |
fungi | the 10 minute estimate there is very loose. could be more like 20, who knows | 19:50 |
clarkb | we'll find out :) | 19:50 |
fungi | if it's accurate, puts us right on schedule | 19:51 |
fungi | and done! | 20:01 |
fungi | 10m56.072s | 20:01 |
fungi | reasonably close | 20:01 |
corvus | \o/ | 20:01 |
clarkb | only one minute late | 20:01 |
corvus | hopefully not 10% late | 20:01 |
clarkb | well one minute against the estimated 10 minutes but also ~20:00 UTC was when I guessed we would start the notedb transition | 20:02 |
fungi | okay, notedb migration | 20:02 |
fungi | anything we need to check now, or ready to start? | 20:03 |
clarkb | just that the command has the -Xmx value which it does and the threads are overridden and they are. I can't think of anything else to check since we aren't starting 2.16 and interacting with it | 20:03 |
clarkb | I think we are ready to start notedb migration | 20:03 |
fungi | okay, running | 20:04 |
fungi | eta for this is 4.5 hours | 20:04 |
fungi | no idea if it will be slower, but seems likely? | 20:04 |
fungi | that will put us at 00:35 utc at the earliest | 20:05 |
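The command just started is the offline NoteDb migration; a hedged sketch of its shape (heap size, thread override, and paths are assumptions drawn from the discussion, not the exact command run):

```bash
time docker-compose run --rm gerrit \
  java -Xmx20g -jar /var/gerrit/bin/gerrit.war migrate-to-note-db \
       -d /var/gerrit --threads 24
```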
clarkb | we should check it periodically too just to be sure it hasn't bailed out | 20:05 |
fungi | i can probably start planning friday dinner now | 20:05 |
clarkb | ++ I'm going to work on lunch as well | 20:05 |
clarkb | also the docs say this process is resumable should we need to do that | 20:05 |
clarkb | I don't think we tested that though | 20:05 |
ianw | is this screen logging to a file? | 20:06 |
fungi | yeah, it always worked in the tests i observed | 20:06 |
fungi | ianw: no | 20:06 |
fungi | i can try to ask screen to start recording if you think that would be helpful | 20:06 |
ianw | might be worth a ctrl-a h if you like, ... just in case | 20:06 |
clarkb | what does that do? | 20:07 |
clarkb | (I suspect I'll learn something new about screen) | 20:07 |
ianw | actually it's a capital-H | 20:07 |
fungi | done. ~root/hardcopy.0 should have it | 20:07 |
ianw | clarkb: just keeps a file of what's going on | 20:07 |
fungi | okay, ~root/screenlog.0 now | 20:07 |
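For reference, the screen bits just used (key bindings are the standard GNU screen defaults):

```bash
screen -x 123851   # attach to the shared root session started earlier
# Inside screen:
#   Ctrl-a h   write a one-off snapshot of the window to hardcopy.N
#   Ctrl-a H   toggle continuous logging of the window to screenlog.N
```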
clarkb | TIL | 20:08 |
clarkb | alright I'm going to find lunch now then will check in again | 20:08 |
fungi | it's mostly something i've accidentally hit in the past and then later had to delete, though i appreciate the potential usefulness | 20:08 |
fungi | for folks who haven't followed closely, this is the "overnight" step, though if it completes at the 4.5 hour estimate (don't count on it) i should still be around to try to complete the upgrades | 20:35 |
fungi | the git gc which follows it is estimated at 1.5 hours as well though, which will be getting well into my evening at that point | 20:36 |
clarkb | ya as noted on the etherpad I kind of expected we'd finish with the gc then resume tomorrow | 20:37 |
clarkb | that gc is longer because it packs all the notedb stuff | 20:37 |
fungi | if both steps finish on schedule somehow, i should still be on hand to drive the rest assuming we don't want to break until tomorrow | 20:37 |
clarkb | ya I can be around to push further if you're still around | 20:38 |
fungi | the upgrade steps after the git gc should be fast | 20:38 |
fungi | the real risk is that we turn things back on and then there are major unforeseen problems while most of us are done for the day | 20:38 |
corvus | clarkb, fungi: etherpad link? | 20:38 |
fungi | https://etherpad.opendev.org/p/opendev-gerrit-3.2-upgrade-plan | 20:38 |
corvus | #link https://etherpad.opendev.org/p/opendev-gerrit-3.2-upgrade-plan | 20:39 |
clarkb | ya I don't think we turn it on even if we get to that point | 20:39 |
fungi | ooh, thanks for remembering meetbot is listening! | 20:39 |
clarkb | because we'll want to be around for that | 20:39 |
fungi | i definitely don't want to feel like i've left a mess for others to clean up, so am all for still not starting services up again until some point where everyone's around and well-rested | 20:40 |
corvus | we might be able to get through the 3.2 upgrade tonight and let it sit there until tomorrow | 20:41 |
fungi | that seems like the ideal, yes | 20:41 |
corvus | like stop at 5.17 | 20:41 |
fungi | sgtm | 20:41 |
corvus | (i totally read that as "stop at procedure five decimal one seven") | 20:42 |
clarkb | ya I think that would be best. | 20:43 |
clarkb | fun fact: this notedb transion is running with the "make it faster" changes too | 20:43 |
clarkb | s/transion/migration/ | 20:44 |
fungi | i couldn't even turn on the kitchen tap without filling out a twenty-seven b stroke six, bloody paperwork | 20:44 |
clarkb | I got really excited about those changes then realized we were already testing with it | 20:44 |
clarkb | hrm the output indicates we may be closer to finishing than I would've expected? | 21:17 |
clarkb | Total number of rebuilt changes 757000/760025 (99.6%) | 21:17 |
fungi | i'm not falling for it | 21:17 |
clarkb | ya it's possible there are multiple passes to this or something | 21:18 |
clarkb | the log says it's switching primary to notedb now | 21:18 |
clarkb | I will continue to wait patiently but act optimistic | 21:18 |
clarkb | oh ya it is a multipass thing | 21:21 |
clarkb | I remember now that it will do another reindex | 21:21 |
clarkb | built in to the migrator | 21:21 |
clarkb | got my hopes up :) | 21:21 |
clarkb | [2020-11-20 21:21:59,798] [RebuildChange-15] WARN com.google.gerrit.server.notedb.PrimaryStorageMigrator : Change 89432 previously failed to rebuild; skipping primary storage migration | 21:22 |
clarkb | that is the cause of the traceback we see | 21:23 |
clarkb | (this was expected for a number of changes in the 10-20 range) | 21:23 |
ianw | don't know why kswapd0 is so busy | 21:40 |
clarkb | ya was just going to mention that | 21:40 |
clarkb | we're holding steady at ~500mb swap use and have ~36gb memory available | 21:40 |
clarkb | but free memory is only ~600mb | 21:40 |
ianw | i've seen this before and a drop_caches sometimes helps | 21:40 |
clarkb | drop_caches? | 21:41 |
ianw | echo 3 > /proc/sys/vm/drop_caches | 21:41 |
fungi | dope caches dude | 21:42 |
clarkb | "This is a non-destructive operation and will only free things that are completely unused. Dirty objects will continue to be in use until written out to disk and are not freeable. If you run "sync" first to flush them out to disk, these drop operations will tend to free more memory. " says the internet | 21:43 |
* fungi goes back to applying heat to comestible wares | 21:43 | |
corvus | do we want to clear the caches? | 21:43 |
clarkb | presumably gerrit/java/jvm will just reread what it needs back into the kernel caches when it needs them? | 21:44 |
clarkb | whether or not that will be a problem I don't know | 21:44 |
corvus | i guess that might avoid having the vmm write out unused pages to disk because more ram is avail? | 21:44 |
ianw | yeah, this has no effect on userspace | 21:44 |
ianw | well, other than temporal | 21:45 |
corvus | except indirectly | 21:45 |
corvus | (iow, if we're not using caches because sizeof(git repos)>sizeof(ram) and it's just churning, then this could help avoid it making bad decisions; but we'd probably have to do it multiple times.) | 21:45 |
corvus | (if we are using caches, then it'll just slow us down while it rebuilds) | 21:46 |
ianw | 2019-08-27 | 21:47 |
ianw | * look into afs server performace; drop_caches to stop kswapd0, | 21:47 |
ianw | monitor | 21:47 |
ianw | that was where i saw it going crazy before | 21:47 |
corvus | ianw, clarkb: i think with near zero iowait and low cpu usage i would not be inclined to drop caches | 21:48 |
clarkb | the buff/cache value is going up slowly as the free value goes down slowly. But swap usage is stable | 21:48 |
clarkb | corvus: that makes sense to me | 21:48 |
corvus | could this be accounting for the cpu time spent performing cache reads? | 21:48 |
clarkb | I'm not sure I understand the question | 21:49 |
corvus | i don't know what actions are actually accounted for under kswapd | 21:49 |
ianw | https://www.suse.com/support/kb/doc/?id=000018698 ; we should check the zones | 21:50 |
corvus | so i'm wondering if gerrit performing a bunch of io and receiving cache hits might cause cpu usage under kswapd | 21:50 |
corvus | ianw: If the system is under memory pressure, it can cause the kswapd scanning the available memory over and over again. This has the effect of kswapd using a lot of CPU cycles. | 21:50 |
corvus | that sounds plausible | 21:50 |
clarkb | my biggest concern is that the "free" number continues to fall slowly | 21:53 |
clarkb | do we think the cache value may fall on its own if we start to lose even more "free" space? | 21:53 |
corvus | clarkb: i think that's the immediate cause for kswapd running, but there's plenty of available memory because of the caches | 21:53 |
clarkb | corvus: I see, so once we actually need memory it should start to use what is available? | 21:54 |
clarkb | ah yup free just went back up to 554 | 21:54 |
clarkb | from below 500 (thats MB) | 21:54 |
clarkb | so ya I think your hypothesis matches what we observe | 21:54 |
corvus | clarkb: yeah; i expect free to stay relatively low (until java exits) | 21:54 |
corvus | but it won't cause real memory pressure because the caches will be reduced to make more room. | 21:55 |
clarkb | in that case the safest thing may be to just let it run? | 21:55 |
corvus | i think so; if we were seeing cpu or io pressure i would be more inclined to intervene, but atm it may be working as designed. no idea if we're benefitting from the caches on this workload, but i don't think it's hurting us. | 21:56 |
corvus | the behavior just changed | 21:57 |
corvus | (all java cpu no kswapd) | 21:57 |
clarkb | it switched to gc'ing all users | 21:58 |
clarkb | then I think it does a reindex | 21:58 |
ianw | yeah i think that dropping caches is a way to short-circuit kswapd0's scan basically, which has now finished | 21:59 |
clarkb | this is all included in the tool (we've manually done it in other contexts too, just clarifying that it is choosing these things) | 21:59 |
*** sboyron has quit IRC | 22:02 | |
fungi | also with most of this going on in a memory-preallocated jvm, it's not clear how much fiddling with virtual memory distribution within the underlying operating system will really help | 22:06 |
clarkb | fungi: that 20GB is spoken for though aiui | 22:06 |
clarkb | which is about 1/3 of our available memory | 22:07 |
clarkb | (we should have plenty of extra) | 22:07 |
clarkb | I think this gc is single threaded. When we run the xargs each gc gets 16 threads and we do 16 of them | 22:08 |
clarkb | which explains why this is so much slower. I wonder if jgit gc isn't multithreaded | 22:09 |
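By contrast, the earlier parallel gc pass fanned the work out over the repos with find and xargs; a rough sketch of that shape (paths and exact flags are assumptions):

```bash
find /home/gerrit2/review_site/git -type d -name '*.git' -print0 \
  | xargs -0 -P 16 -I {} sudo -u gerrit2 git --git-dir='{}' gc --aggressive --prune=now
```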
clarkb | kids are out of school now. I may go watch the mandalorian now if others are paying attention | 22:10 |
clarkb | I'll keep an eye on irc but not the screen session | 22:10 |
clarkb | I just got overruled, great british bakeoff is happening | 22:12 |
fungi | i feel for you | 22:19 |
fungi | back to food-related tasks for now as well | 22:20 |
* fungi finds noel fielding to be the only redeeming factor for the bakeoff | 22:39 |
corvus | i've never seen a bakeoff, but i did recently acquire a pet colony of lactobacillus sanfranciscensis | 22:40 |
* fungi keeps wishing julian barratt would appear and then it would turn into a new season of mighty boosh | 22:40 | |
fungi | i have descendants of lactobacillus newenglandensis living in the back of my fridge which come out periodically to make more sandwich bread | 22:42 |
corvus | fungi: i await the discovery of lactobacillus fungensis. that won't be confusing at all. | 22:43 |
fungi | it would be a symbiosis | 22:45 |
ianw | apropos the Mandalorian, the planet he's trying to reach is Corvus | 22:45 |
fungi | the blackbird! | 22:45 |
* clarkb checked between baking challenges, it is on to reindexing now | 22:47 | |
clarkb | iirc the reindexing is the last step of the process | 22:47 |
clarkb | it is slower than the reindexing we just did. I think because we just added a ton of refs and haven't gc'd but not sure of that | 22:48 |
corvus | ianw: wow; indeed i was looking up https://en.wikipedia.org/wiki/Corvus_(constellation) to suggest where the authors may have gotten the idea to name a planet that and google's first autocomplete suggestion was "corvus star wars" | 22:51 |
ianw | my son's obsessions are Gaiman's Norse Mythology, with odin's ravens, and the Mandalorian, who is going to Corvus, and I have a corvus at work | 22:51 |
corvus | ianw: you have corvids circling around you | 22:52 |
ianw | (actually he's obsessed with thor comics, but i told him he had to read the book before i'd start buying https://www.darkhorse.com/Comics/3005-354/Norse-Mythology-1 :) | 22:53 |
corvus | wow the radio 4 adaptation looks fun: https://en.wikipedia.org/wiki/Norse_Mythology_(book) | 22:55 |
fungi | you'll have to enlighten me on gaiman's take on norse mythology, i read all his sandman comics (and some side series) back when they were in print, but he was mostly focused on greek mythology at the time | 22:57 |
fungi | clearly civilization has moved on whilst i've been dreaming | 22:58 |
fungi | i think i have most of sandman still in mylar bags with acid-free backing boards | 22:59 |
fungi | delirium was my favorite character, though she was also sort of a tank girl rip-off | 23:01 |
ianw | fungi: it's a very easy read book, a few chuckles | 23:04 |
ianw | you would probably enjoy https://www.audible.com.au/pd/The-Sandman-Audiobook/B086WR6FG8 | 23:05 |
ianw | https://www.abc.net.au/radio/programs/conversations/neil-gaiman-norse-mythology/12503632 is a really good listen on the background to the book | 23:06 |
fungi | now i'm wondering if there's a connection with dream's raven "matthew" | 23:09 |
clarkb | oof only to 5% now. I wonder if this reindex will expand that 4.5 hour estimate | 23:11 |
* clarkb keeps saying to himself "we only need to do this once so it's ok" | 23:12 |
fungi | follow it up with "so long as it's done when i wake up we're not behind schedule" | 23:12 |
corvus | it is all out on the cpu | 23:12 |
corvus | we have 16 cpus and our load average is 16 | 23:12 |
clarkb | ya it's definitely doing its best | 23:12 |
fungi | sounds idyllic | 23:12 |
clarkb | ideally we get to run the gc today too, I can probably manage to hit the up arrow key a few times in the screen and start that if it's too late for fungi :) | 23:19 |
clarkb | but ya as long as that's done before tomorrow we're still doing well | 23:19 |
clarkb | s/before/by/ | 23:19 |
fungi | yeah, if this ends on schedule i should have no trouble initiating the git gc, but... | 23:20 |
clarkb | if this pace keeps up it's actually on track for ~10 hours from now? that's rough napkin math, so I may be completely off | 23:25 |
clarkb | also if I remember correctly it does the biggest projects first then the smaller ones so maybe the pace will pick up as it gets further | 23:26 |
clarkb | since the smaller projects will have fewer changes (and thus refs) impacting reindexing | 23:26 |
clarkb | anyway it's only once and should be done by tomorrow :) | 23:26 |