pabelanger | remote: https://review.openstack.org/527272 Add pip as requirement | 00:02 |
pabelanger | should fix ptgbot | 00:02 |
jeblair | lgtm | 00:02 |
ianw | pabelanger: is that going to install pip3? | 00:02 |
ianw | yes, after looking at it :) | 00:05 |
pabelanger | ianw: actually, I think we might need pip::python3 | 00:06 |
pabelanger | looking | 00:06 |
pabelanger | oh | 00:06 |
pabelanger | no, you are right | 00:06 |
pabelanger | we do both | 00:06 |
ianw | oh good, that was my reading of puppet-pip :) | 00:07 |
pabelanger | ianw: :) mind upgrading to a +3 | 00:07 |
ianw | pabelanger: np, or just merge when ci returns | 00:08 |
*** baoli_ has quit IRC | 00:17 | |
*** baoli has joined #openstack-sprint | 00:17 | |
*** baoli has quit IRC | 00:22 | |
ianw | launch status01.o.o | 00:41 |
ianw | oh, bah, that probably would have worked if i fixed up the hiera groups, doing that now | 01:00 |
fungi | also apparently need 527280 for the subunit worker | 01:03 |
ianw | oh, status.o.o needs nodejs too | 01:21 |
ianw | i'm pretty sure https://review.openstack.org/#/c/526978/ (update puppet nodejs to 2.3.0) is safe ... but will test more | 01:22 |
Shrews | hrmm, anyone know why our dns TTLs are different on some logstash workers? the one i did earlier was 5m, the two I just did are 60m | 01:36 |
clarkb | Shrews: I think our script defaults to 60m but rax web ui defaults to 5m | 01:38 |
clarkb | so likely where records were made? | 01:38 |
Shrews | clarkb: should I change them? | 01:39 |
clarkb | ya it's probably worthwhile to go to 60m (our default) but also not urgent I don't think | 01:39 |
Shrews | k | 01:39 |
* Shrews notes that ttl is changed for logstash-worker02 from 5 to 60 | 01:42 | |
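The TTL cleanup above is mechanical enough to sketch. Assuming a minimal record shape of name plus ttl (however the API actually reports it; this is an illustration, not our tooling), flagging the records that drifted from the 60m default looks like:

```python
# Hypothetical record shape: whatever the DNS API reports, reduced
# to name + ttl. 3600s (60m) is our launch-script default; 300s (5m)
# is what the rax web UI hands out.
def ttls_to_fix(records, default_ttl=3600):
    """Return (name, ttl) pairs for records that drifted from the
    default, i.e. the ones worth bumping like logstash-worker02."""
    return [(r["name"], r["ttl"]) for r in records if r["ttl"] != default_ttl]
```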
*** baoli has joined #openstack-sprint | 01:47 | |
*** jhesketh has quit IRC | 01:59 | |
Shrews | logstash-worker19 and logstash20 are done | 02:29 |
* Shrews calls it a night | 02:29 | |
Shrews | err, logstash-worker20 | 02:29 |
*** larainema has joined #openstack-sprint | 02:50 | |
*** baoli has quit IRC | 02:54 | |
*** baoli has joined #openstack-sprint | 02:55 | |
*** baoli has quit IRC | 03:00 | |
*** baoli has joined #openstack-sprint | 03:03 | |
*** ianychoi has joined #openstack-sprint | 03:08 | |
dmsimard | jeblair: did I ever tell you those zuul graphs are beautiful ? They're beautiful. | 03:16 |
dmsimard | I'll take logstash-worker04 through 06 since the logstash queue is low | 03:18 |
ianw | hmm, is there something magic in hiera/common.yaml | 03:27 |
dmsimard | ianw: I'm sure there's plenty magic to be had | 03:28 |
dmsimard | :D | 03:28 |
ianw | Error: Could not find data item elasticsearch_nodes in any Hiera data file and no default supplied | 03:28 |
dmsimard | ianw: where are you seeing that ? | 03:28 |
ianw | from a manual puppet run on new status host | 03:28 |
dmsimard | ianw: hmm, unrelated but is paste.o.o loading for you? | 03:29 |
ianw | dmsimard: it appears to not be | 03:30 |
clarkb | ianw I think we set the hiera load path to find it but I'd have to read ansible puppet to be sure | 03:31 |
dmsimard | hah, I'd paste you the error | 03:32 |
dmsimard | but hey, paste is down :D | 03:32 |
dmsimard | https://etherpad.openstack.org/p/XYJJyds9L8 | 03:32 |
dmsimard | looks like it's back ? | 03:33 |
ianw | yeah, it does that sometimes | 03:33 |
dmsimard | clarkb: unrelated but do you know if we're enabling the mqtt ansible callback on puppetmaster.o.o ? | 03:34 |
dmsimard | clarkb: it looks like it tries to load the mqtt lib but it's not there http://paste.openstack.org/raw/628671/ | 03:35 |
clarkb | I want to say that is a known issue | 03:37 |
clarkb | fungi and mtreinish likely know more | 03:37 |
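The paste dmsimard linked shows the callback failing when the mqtt client library isn't installed. A common way Ansible callback plugins handle optional client libraries is an import guard like the sketch below; the module name `paho.mqtt.publish` is the usual python mqtt client, but treat this as an illustration of the pattern rather than the real mqtt callback's code.

```python
# Optional-dependency guard: if the mqtt client library is missing
# (as on puppetmaster.o.o per the pasted traceback), the callback
# quietly disables itself instead of erroring on every run.
try:
    import paho.mqtt.publish as mqtt_publish
except ImportError:
    mqtt_publish = None

HAS_MQTT = mqtt_publish is not None

def emit_event(topic, payload, publisher=mqtt_publish):
    """Publish an event if the client library is present, else no-op."""
    if publisher is None:
        return False
    publisher.single(topic, payload)
    return True
```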
ianw | inability to build netifaces ... fungi was that the problem you saw before? | 04:00 |
dmsimard | rings me a bell | 04:01 |
clarkb | yes with subunit worker | 04:23 |
*** baoli has quit IRC | 04:26 | |
ianw | arrgghh damn it. i just spent 20 minutes wondering why puppet doesn't work locally and it was because i didn't prefix it with 'sudo ' | 04:30 |
ianw | anyway, my instructions for running puppet locally are wrong; you just want to use the environment | 04:30 |
ianw | updated on the etherpad ... confusingly it works if you *don't* access common hiera data | 04:31 |
dmsimard | infra-root: I wrote what was essentially my bash history in playbook format: https://review.openstack.org/#/c/527301/ | 04:55 |
dmsimard | I wanted to do it only for the workers, but I figured it might be generic enough | 04:55 |
dmsimard | The payoff might not be immediate but guess what comes out in 18.04 | 04:56 |
dmsimard | Oh, that's a server re-installation playbook by the way :D | 04:56 |
* dmsimard sleep | 04:58 | |
ianw | frickler: thanks for https://review.openstack.org/527144 ... progress has been painful but getting there slowly ... i think now it successfully installs nodejs and npm | 05:06 |
ianw | you might like to start looking at other ::nodejs users ; e.g. https://review.openstack.org/527302 | 05:06 |
*** skramaja has joined #openstack-sprint | 05:07 | |
*** baoli has joined #openstack-sprint | 05:29 | |
*** fungi has quit IRC | 05:45 | |
*** baoli has quit IRC | 05:46 | |
*** fungi has joined #openstack-sprint | 05:48 | |
frickler | ianw: cool, I'm not sure about the symlink issue though, looking at the code it is set to true for 16.04, but should in fact only be for system packages | 05:55 |
frickler | ianw: I'd like to redeploy ethercalc01 from scratch to verify the new stuff is still working fine then | 05:56 |
*** skramaja has quit IRC | 05:58 | |
ianw | frickler: sure; one thing is it's a bit of a pain to deploy it all with all the dependent changes | 06:14 |
ianw | frickler: but feel free to destroy 93b2b91f-7d01-442b-8dff-96a53088654a ; or just recreate it from image or whatever. any testing i have is now in changes | 06:18 |
frickler | ianw: o.k., I'm still processing backlog, but will do in a bit. probably launch another new instance first before removing that one | 06:20 |
frickler | ianw: on a different note, I think https://review.openstack.org/524459 could proceed now, zuul seems to have changed in the meantime | 06:21 |
ianw | yes, i meant to unwip them after a zuul restart | 06:22 |
ianw | thanks for reminding me :) | 06:24 |
frickler | oh, cool, just received the first bunch of root cron log mails | 06:25 |
*** baoli has joined #openstack-sprint | 06:44 | |
*** baoli has quit IRC | 06:49 | |
*** AJaeger has joined #openstack-sprint | 08:44 | |
*** baoli has joined #openstack-sprint | 08:46 | |
-openstackstatus- NOTICE: Our CI system Zuul is currently not accessible. Wait with approving changes and rechecks until it's back online. Currently waiting for an admin to investigate. | 08:49 | |
*** baoli has quit IRC | 08:51 | |
-openstackstatus- NOTICE: Zuul is back online, looks like a temporary network problem. | 09:09 | |
*** baoli has joined #openstack-sprint | 09:22 | |
*** baoli has quit IRC | 09:26 | |
*** skramaja has joined #openstack-sprint | 09:30 | |
*** AJaeger has quit IRC | 09:51 | |
*** AJaeger has joined #openstack-sprint | 09:51 | |
*** jhesketh has joined #openstack-sprint | 10:04 | |
*** baoli has joined #openstack-sprint | 10:23 | |
*** baoli has quit IRC | 10:27 | |
*** baoli has joined #openstack-sprint | 11:24 | |
*** baoli has quit IRC | 11:28 | |
*** jkilpatr has quit IRC | 11:32 | |
*** baoli has joined #openstack-sprint | 11:40 | |
*** baoli has quit IRC | 11:44 | |
*** jkilpatr has joined #openstack-sprint | 12:05 | |
frickler | deployed logstash-worker0[1789] and configured rdns, waiting with the remainder for someone to watch how I break things ;) | 12:15 |
*** baoli has joined #openstack-sprint | 12:41 | |
*** baoli has quit IRC | 12:45 | |
*** baoli has joined #openstack-sprint | 13:37 | |
Shrews | picking up logstash-worker 17 & 18 | 13:43 |
dmsimard | Shrews: oi | 13:45 |
dmsimard | Shrews: I started writing this last night: https://review.openstack.org/#/c/527301 | 13:45 |
*** baoli has quit IRC | 13:46 | |
*** baoli has joined #openstack-sprint | 13:46 | |
dmsimard | Shrews: It's WIP and actually not tested yet (I'll test it today), I wrote it after reinstalling three workers.. cause, you know, I'm not doing this by hand for all the servers (especially again once 18.04 is out) | 13:46 |
Shrews | i suspect you'll have issues automating the dns changes | 13:47 |
Shrews | because of the things clarkb explained yesterday | 13:47 |
dmsimard | I think we'll always want to do reverse but forward is manual, yeah | 13:48 |
*** skramaja has quit IRC | 13:49 | |
dmsimard | It's more than likely possible to do a delete and a create of the forward | 13:49 |
dmsimard | but we probably don't want to do that before there's been some amount of verifications | 13:49 |
dmsimard | http://git.openstack.org/cgit/openstack-infra/system-config/tree/launch/dns.py already has the logic to show which commands to run and it's something available within Ansible already (server IPs, etc.) | 13:51 |
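launch/dns.py prints the record-creation commands for a human to review and run, rather than executing them itself. A hedged sketch of that show-don't-run approach is below; the CLI name and flags are illustrative, not the script's literal output.

```python
def dns_commands(fqdn, ipv4, ipv6, ttl=3600):
    """Build the record-creation commands an operator would review
    and run by hand, mirroring launch/dns.py's approach of showing
    which commands to run. Exact CLI syntax here is an assumption."""
    tpl = "rackdns record-create --name %s --type %s --data %s --ttl %d openstack.org"
    return [tpl % (fqdn, "A", ipv4, ttl), tpl % (fqdn, "AAAA", ipv6, ttl)]
```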
Shrews | FWIW, after we get done with the logstash workers, I need to switch gears back to zuul things. There are some things that I MUST get done this week since I'll be off for a couple of weeks starting next week. If I get those done, I can switch back to helping with the upgrades. | 13:52 |
dmsimard | Shrews: feel free to focus on zuul after the two you're on | 13:55 |
dmsimard | I should be able to pick a few easily with the playbook :D | 13:55 |
fungi | ianw: where else did you encounter netifaces getting dragged in from pypi? | 14:04 |
frickler | ianw: 93b2b91f-7d01-442b-8dff-96a53088654a is the original ethercalc01, I assume we still need to migrate the data from there somehow. | 14:23 |
frickler | ianw: 079b34ea-4c2d-4c05-a984-a044ab69b0d8 is the one you launched which I think can be removed now, 43fa686e-12a4-4c51-ad3b-d613e2417ff3 is my second launch, which I now deployed successfully with some more fixes to our patch | 14:25 |
clarkb | frickler: o/ I'm around now if you want to walk through dns changes | 14:28 |
frickler | clarkb: cool, I think I can follow what is in backlog, I'd start by changing dns via web gui for the above four instances | 14:33 |
frickler | clarkb: although I've wondered whether it would make sense to stop the old instances first | 14:34 |
clarkb | frickler: it shouldn't make a difference because it is the firewall update that effectively switches "control" from the old instance to the new instance. And this way you can always roll back to the old instance if you need to (though at this point I think we have a lot of confidence in the new logstash-workers) | 14:35 |
clarkb | frickler: questions like that and the weird dns situation are why we don't really automate all of this, as it's largely per-service what makes sense | 14:36 |
clarkb | dmsimard: re automating dns one of the biggest issues has been the client/api and the fact that we share the domain with the foundation | 14:36 |
frickler | clarkb: though maybe we could use a different domain for most instances? like openstack-infra.org? or something unbranded once we have the current specs proposal done | 14:39 |
clarkb | frickler: ya we've talked about that. I think that will likely happen as part of the host other name servers in jeblairs spec | 14:39 |
-openstackstatus- NOTICE: We're currently seeing an elevated rate of timeouts in jobs and the zuulv3.openstack.org dashboard is intermittently unresponsive, please stand by while we troubleshoot the issues. | 14:39 | |
clarkb | (probably not immediately, but gives us a lot more control and ability to do things like that) | 14:40 |
frickler | clarkb: o.k., updated 8 dns records, confirmed with dmsimard's ansible command on all target hosts, will do firewall restarts next | 14:51 |
clarkb | also re dns I think the ideal would be that it went through code review, or at least revision control of some sort, which we currently lack as well | 15:00 |
frickler | clarkb: yes, having the zones in git managed by gerrit sounds pretty nice I think. probably with a post job to take care of SOA updates and zone reloads, but that's fine tuning ;) | 15:01 |
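The post-merge job frickler describes would need to bump the SOA serial before reloading each zone. Assuming the common date-based YYYYMMDDnn serial convention (an assumption; the convention for these zones isn't stated here), the bump logic is small:

```python
from datetime import date

def bump_soa_serial(serial, today=None):
    """Next YYYYMMDDnn-style SOA serial: restart the counter at 01
    on a new day, otherwise increment nn. A post-merge job would
    apply this to the zone file before triggering a reload."""
    today = today or date.today()
    prefix = today.strftime("%Y%m%d")
    s = str(serial)
    nn = int(s[8:]) + 1 if s.startswith(prefix) else 1
    return int("%s%02d" % (prefix, nn))
```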
clarkb | frickler: and re email flood before the summit I started sifting through it and attempting to address things that looked like problems, but that lost all momentum during travel for summit. That is probably something I should try and pick up again during the slow weeks around holidays | 15:03 |
clarkb | things like unattended upgrade package lists are noisy but probably useful (easy to grep my inbox for when things updated) and don't indicate issues | 15:07 |
frickler | clarkb: yeah, some cleanup would be nice | 15:07 |
frickler | clarkb: so I've done the firewall restart and refreshed hostkeys on puppetmaster, now checking what I might have missed | 15:08 |
frickler | clarkb: so I think I'm done, ready to delete the remaining nodes, waiting for confirmation on that. I'd then continue with the remainder of the workers while everyone else chases zuul ;) | 15:16 |
clarkb | frickler: after a quick look around it seems like the new workers are happy. Logstash job queue is small too | 15:17 |
clarkb | frickler: I think you cna go ahead and remove the trusty nodes when you are ready | 15:17 |
clarkb | frickler: did you catch my notes on that from yesterday? you have to use the uuid due to the name conflict. I personally like to show $uuid, confirm its the right one then delete $uuid | 15:18 |
clarkb | (the big drawback to non unique names is this can get a little confusing) | 15:18 |
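clarkb's show-then-delete routine exists precisely because the name alone is ambiguous. The manual steps can be expressed as a small check (data shape here is illustrative, roughly what `openstack server list` would give you; not actual launch tooling):

```python
def pick_by_uuid(servers, name, uuid):
    """Return the single server matching both the non-unique name and
    the uuid -- 'server show $uuid, confirm it's the right one, then
    server delete $uuid' with the confirmation as an explicit check."""
    chosen = [s for s in servers if s["name"] == name and s["id"] == uuid]
    if len(chosen) != 1:
        raise ValueError("uuid %s does not identify exactly one %r" % (uuid, name))
    return chosen[0]
```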
pabelanger | dmsimard: should look into cloud-launcher, there is some logic already created to bootstrap a server, but it was never finished. Missing part is running puppet after the server was launched | 15:19 |
pabelanger | dmsimard: we'd then store server info in yaml format | 15:19 |
frickler | clarkb: yeah, I saw that show/delete thing, seems useful | 15:21 |
*** baoli has quit IRC | 15:32 | |
frickler | clarkb: o.k., so there are 7 workers remaining now, should I do them all in one batch, split them in 3+4, leave some/all for others? or wait until zuul is fixed? | 15:33 |
Shrews | frickler: if you want to take the rest as a batch, i say go for it. | 15:35 |
clarkb | I would split them up if only to keep quota useage low. I don't think you need to wait for zuul | 15:35 |
frickler | o.k., so I'll start with the next 4 | 15:36 |
Shrews | cool. i'm going to focus on some zuul things | 15:37 |
dmsimard | frickler: can you leave me a few ? I'd like to test my playbook | 15:37 |
dmsimard | I'll claim them on the pad | 15:38 |
frickler | dmsimard: ack | 15:44 |
* dmsimard current status: creating mail filters for 300 mails | 16:03 | |
pabelanger | remote: https://review.openstack.org/527447 Support ubuntu xenial plugins directory | 16:09 |
pabelanger | should fix puppet-meetbot plugins issue^ | 16:09 |
clarkb | pabelanger: commented, | 16:11 |
pabelanger | clarkb: thanks! | 16:12 |
pabelanger | fixing | 16:12 |
pabelanger | remote: https://review.openstack.org/527447 Support ubuntu xenial plugins directory | 16:30 |
pabelanger | clarkb: ^ | 16:30 |
pabelanger | I've moved on to files02.o.o, until we can land ^ for eavesdrop01.o.o | 17:14 |
frickler | taking a break now, but planning to be back for the meeting | 17:14 |
clarkb | pabelanger: +2 on meetbot change | 17:24 |
fungi | dmsimard: did we mailbomb you with cronspam? welcome to infra root ;) | 17:24 |
*** baoli has joined #openstack-sprint | 17:27 | |
pabelanger | remote: https://review.openstack.org/527469 Update files node to support numeric hostnames | 17:30 |
pabelanger | needed to bring files02.o.o online | 17:30 |
dmsimard | fungi: yeah DoSing my inbox :) | 17:42 |
clarkb | pabelanger: can we abandon https://review.openstack.org/#/c/527469/ in favor of https://review.openstack.org/#/c/527186/ ? | 18:04 |
pabelanger | clarkb: yup, if you want to +3 the other, I'll abandon mine | 18:06 |
clarkb | ok | 18:06 |
fungi | dmsimard: i don't know what sort of mail filtering system you use, but if it helps here's the relevant snippet from my exim .forward filter file: http://paste.openstack.org/show/628758/ | 18:08 |
fungi | though i have earlier rules which explicitly handle stuff coming from mailing lists and code review so that's mostly a fallthrough catch-all | 18:09 |
clarkb | really need to make cacti graph creation quieter | 18:10 |
clarkb | maybe I'll amke a patch for that today | 18:10 |
fungi | and that one's actually an elif branch in a long series | 18:10 |
clarkb | but also the exim warning email we get from all over | 18:11 |
pabelanger | clarkb: yah, I pushed up something a few weeks ago to help cut down on size of emails | 18:21 |
pabelanger | clarkb: but for most of it we can just remove some debug print commands | 18:21 |
pabelanger | at least for size of the email | 18:21 |
clarkb | pabelanger: or just log it on cacti instead and not email it? | 18:21 |
pabelanger | clarkb: yah, I'd be okay with that. I assumed somebody was actually looking at the emails | 18:22 |
fungi | more likely it's an indication nobody's looking at it, or it wouldn't still be so large | 18:36 |
dmsimard | fungi: gmail :/ | 18:45 |
dmsimard | fungi: for stuff like this, you can just batch select a bunch of messages with similar patterns and there's an option "filter messages like these" and it proposes a filter based on what you selected (maybe it's a mailing list header, maybe it's a bunch of different "from", etc.) | 18:46 |
fungi | interesting | 18:48 |
fungi | curious to hear how that goes for you. i bet their ai is powerful | 18:48 |
fungi | given how much money they probably sink into developing that service | 18:48 |
*** baoli has quit IRC | 19:01 | |
*** baoli has joined #openstack-sprint | 19:01 | |
dmsimard | it's not really that smart, but I guess it's something that'll improve over time | 19:03 |
dmsimard | like most data-driven things | 19:03 |
*** baoli has quit IRC | 19:42 | |
*** baoli has joined #openstack-sprint | 19:56 | |
ianw | after some breakfast i'll take a look at codesearch | 20:01 |
ianw | and also review frickler's changes to ethercalc | 20:01 |
clarkb | I need lunch but ya was going to dig into sprint topic reviews after | 20:02 |
ianw | fungi: i'm certain i saw netifaces failing to build during one of our ethercalc runs ... but there's been a lot going on and i can't find it right now, but i'll keep an eye | 20:02 |
pabelanger | going to look at files02.o.o and eavesdrop01.o.o again | 20:02 |
fungi | ianw: speaking of netifaces, still hammering on fixing it for puppet-subunit2sql... latest fix is https://review.openstack.org/527280 | 20:04 |
frickler | dmsimard: I'd be ready for the next set of iptables restarts, have you started touching dns yet? | 20:07 |
dmsimard | frickler: no, I haven't touched my batch of 3 workers yet | 20:07 |
dmsimard | maybe in an hour or so | 20:07 |
frickler | dmsimard: o.k., so I'll do mine now and should be finished with that soon | 20:08 |
dmsimard | ack | 20:08 |
ianw | fungi: /usr/local/bin isn't in the default path? | 20:08 |
pabelanger | clarkb: https://review.openstack.org/507266/ too when you have a moment | 20:09 |
fungi | ianw: previously we hard-coded to /usr/bin/pip | 20:09 |
fungi | which doesn't exist (per the tracebacks i got when launching the server) | 20:10 |
pabelanger | fungi: ianw: https://review.openstack.org/527447/ could also use a review to address plugins directory for puppet-meetbot | 20:10 |
fungi | i included the path in there on the assumption that puppet may not set one otherwise | 20:10 |
ianw | right, but is line 42 in https://review.openstack.org/#/c/527280/2/manifests/init.pp strictly necessary? | 20:10 |
pabelanger | https://review.openstack.org/527274/ is an easy +3 for somebody too | 20:10 |
-openstackstatus- NOTICE: The zuul scheduler has been restarted after lengthy troubleshooting for a memory consumption issue; earlier changes have been reenqueued but if you notice jobs not running for a new or approved change you may want to leave a recheck comment or a new approval vote | 20:16 | |
fungi | i had copied the original implementation from puppet-zuul and adapted it for python 2.7... not sure why it was calling /usr/bin/pip3 even though our normal pip is at /usr/local/bin/pip | 20:18 |
frickler | logstash-worker1[0-3] done and /me is done for today, too ;) | 20:21 |
fungi | thanks frickler! | 20:22 |
fungi | pabelanger: i had one minor concern noted inline on https://review.openstack.org/527447 but didn't block the change for that | 20:26 |
pabelanger | fungi: yah, I tried looking this morning, but didn't see a way to define an external plugin directory | 20:27 |
fungi | the cheap solution would just be to switch on version ranges like >=16.04 | 20:28 |
fungi | which is what some changes in other modules ended up going with | 20:28 |
pabelanger | yah | 20:28 |
pabelanger | I am guessing python3.6 might be the next path too | 20:28 |
fungi | quite possibly, if meetbot even has/gets py3k support | 20:29 |
*** jkilpatr has quit IRC | 20:33 | |
*** dteselkin has quit IRC | 20:35 | |
*** dteselkin has joined #openstack-sprint | 20:40 | |
jeblair | i didn't hear any screams about grafana, so i will delete the old server now | 20:44 |
pabelanger | okay, trying files02.o.o again | 20:44 |
pabelanger | jeblair: +1 | 20:44 |
*** clarkb has quit IRC | 20:45 | |
*** clarkb has joined #openstack-sprint | 20:46 | |
jeblair | done | 20:48 |
ianw | frickler: ok, my original ethercalc 079b34ea-4c2d-4c05-a984-a044ab69b0d8 deleted now | 20:58 |
ianw | https://172.99.116.13/ seems to be working | 21:02 |
pabelanger | okay, files02.o.o is online | 21:03 |
pabelanger | I'm going to update the DNS for files.o.o and point to files02.o.o | 21:05 |
pabelanger | unless somebody objects | 21:05 |
fungi | go for it | 21:08 |
pabelanger | done | 21:08 |
pabelanger | I can see some traffic already | 21:08 |
fungi | guess we should try hitting docs.o.o or something to make sure it comes up | 21:08 |
fungi | wfm and is resolving via cname | 21:08 |
pabelanger | yay | 21:09 |
clarkb | ianw: cool does that mean nodejs is sorted out now? | 21:11 |
clarkb | lunch has been consumed, now to the reviews. also apparently I lost connectivity to freenode? | 21:12 |
pabelanger | remote: https://review.openstack.org/527519 Remove files01.o.o from hiera | 21:13 |
ianw | clarkb: we'll need the updated version from https://review.openstack.org/#/c/526978/ | 21:13 |
pabelanger | I'll delete files01.o.o in an hour or so. Once http traffic has stopped hitting it | 21:14 |
clarkb | ianw: cool I'll start reviewing there | 21:14 |
ianw | well that's just a version number bump ... i'm pretty sure it's backwards compatible but i'm not sure how to prove that | 21:14 |
clarkb | was just going to ask about that | 21:14 |
clarkb | ianw: due to how puppet modules are global, I think what we've done in the past is read the docs and do our best to audit the interfaces we use to back the "pretty sure" assertion, then check after the fact if anything broke | 21:15 |
clarkb | ianw: puppet tends to fail gracefully making it relatively safe to push those things in and see what happens | 21:15 |
clarkb | (because puppet when failing does nothing rather than something) | 21:15 |
clarkb | I've +2'd the change, if other people are ok with ^ then I think we can go ahead and approve it | 21:16 |
ianw | yeah, also any backwards compat issues would be with trusty hosts, so presumably temporary | 21:17 |
ianw | if anyone has thoughts on migrating redis databases over and above https://redis.io/topics/persistence ... which suggests you can just move the rdb file ... i'm open to suggestions :) | 21:17 |
clarkb | ianw: my understanding is that is how you redis | 21:17 |
clarkb | I think what you would do is stop the service in front of redis so that it doesn't get updates, force a write (or just wait for one) then copy the db file | 21:18 |
clarkb | fungi: any reason this wasn't approved earlier https://review.openstack.org/#/c/527175/1 ? just the zuul situation? | 21:19 |
ianw | it does say it's safe to copy at any time due to it being renamed(). so i'll do any initial ethercalc copy, test a few, and then if it's ok, shutdown apache on ethercalc, final copy, redirect dns | 21:19 |
clarkb | sounds good to me | 21:20 |
fungi | ianw: oh, approved now. i probably just missed that it already had a +2 | 21:20 |
ianw | clarkb: i'd say the ethercalc puppet is ready for review too -> https://review.openstack.org/#/c/527144 | 21:30 |
clarkb | ianw: left some comments on that, the first set are for the -1 | 21:46 |
clarkb | and I'm just now realizing I am blind | 21:46 |
clarkb | there are TWO anchors only one went away | 21:46 |
clarkb | ianw: curious what you think about the persistent journald storage thing though. I remember poking at this in the past and deciding there didn't seem to be a great way to do that | 21:47 |
clarkb | ianw: aiui the two ways to do that are to make sure the /var/log/journal dir exists with all the correct permissions (and selinux labels), or to change the journald config to persistent (instead of the default auto) and then journald creates the dir for itself with the correct settings | 21:52 |
clarkb | The problem with the first is I think different distros do perms/users/etc differently. The problem with the second is doing the whole restart dance and making sure it picks it up and does the work for itself | 21:53 |
clarkb | but maybe we just do the second and restart the service and call it good | 21:53 |
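The second option clarkb describes amounts to flipping `Storage=` in journald.conf and restarting systemd-journald, which then creates /var/log/journal itself with the right ownership. A sketch of the config edit as pure text manipulation (an illustration, not the puppet we actually run):

```python
import re

def set_journald_storage(conf_text, value="persistent"):
    """Return journald.conf contents with Storage= forced to `value`,
    uncommenting the key if present or appending it otherwise. After
    writing this out you'd restart systemd-journald and let it create
    the journal directory itself."""
    line = "Storage=%s" % value
    if re.search(r"(?m)^#?\s*Storage=", conf_text):
        return re.sub(r"(?m)^#?\s*Storage=.*$", line, conf_text)
    return conf_text.rstrip("\n") + "\n" + line + "\n"
```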
*** jkilpatr has joined #openstack-sprint | 22:04 | |
fungi | our launch script reboots the server at the end anyway, so it'll be restarted at that point right? | 22:10 |
clarkb | fungi: ya so fresh servers will be fine but not existing ones like translate or logstash workers | 22:13 |
clarkb | I think we can sort it out I just need enough time to sort out what the process is | 22:13 |
clarkb | or someone else, feedback welcome :) | 22:13 |
ianw | hmm, yeah i hadn't considered that. setting the config seems best maybe | 22:18 |
ianw | i think maybe we redirect like the upstart logs for ethercalc, though | 22:22 |
ianw | frickler: 527144 updated the service file to put the logs in the same place, moved the rotation back out | 22:27 |
jeblair | 1 year memory max for paste.o.o is 486M. do we want to stick with the current 2G flavor or drop to 1G? | 22:29 |
jeblair | (it has almost no cpu usage) | 22:29 |
clarkb | I guess that also drops to single cpu? but with little cpu usage thats probably fine | 22:30 |
jeblair | ya | 22:30 |
jeblair | remote: https://review.openstack.org/527536 Create paste hiera group | 22:33 |
jeblair | easy +3 | 22:33 |
pabelanger | remote: https://review.openstack.org/527447 Support ubuntu xenial plugins directory | 22:35 |
clarkb | jeblair: in that case I think 1GB is fine | 22:35 |
pabelanger | need to run and eat, but ^ updated for puppet syntax issue. Would like a re-review please | 22:35 |
clarkb | I've queued up both changes for review | 22:35 |
jeblair | fungi: can you +3 527536? | 22:37 |
clarkb | pabelanger: I left a -1 can you take a look when you get back? | 22:37 |
jeblair | clarkb, pabelanger: i responded to clarkb but left a -1 for a different reason | 22:40 |
jeblair | (but related) | 22:41 |
clarkb | jeblair: you may be interested in my comment on https://review.openstack.org/#/c/526946/ as I think there is an intersection between retired projects and zuul v3 config | 22:43 |
jeblair | clarkb: yeah, why don't we do that in project-config, so the repo can still be empty | 22:45 |
clarkb | ya that may make it cleaner in the retired repo | 22:46 |
pabelanger | thanks, looking | 22:46 |
clarkb | I guess the end state is to remove the repo from zuuls config entirely? | 22:47 |
clarkb | so that would just be temporary no matter where it lives? | 22:47 |
pabelanger | jeblair: good point | 22:47 |
*** baoli has quit IRC | 23:25 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!