Monday, 2020-02-10

*** sgw has quit IRC00:01
*** cloudnull has quit IRC01:04
*** wxy-xiyuan has joined #zuul01:05
*** cloudnull has joined #zuul01:05
*** openstackstatus has joined #zuul01:17
*** ChanServ sets mode: +v openstackstatus01:17
*** jamesmcarthur has joined #zuul03:10
*** bhavikdbavishi has joined #zuul03:25
*** bhavikdbavishi1 has joined #zuul03:28
*** bhavikdbavishi has quit IRC03:29
*** bhavikdbavishi1 is now known as bhavikdbavishi03:29
*** jamesmcarthur has quit IRC03:40
*** jamesmcarthur has joined #zuul03:44
*** zxiiro has quit IRC03:45
*** jamesmcarthur has quit IRC03:49
*** sgw has joined #zuul03:53
*** sgw has quit IRC04:08
*** sgw has joined #zuul04:26
*** evrardjp has quit IRC05:34
*** evrardjp has joined #zuul05:34
*** raukadah is now known as chkumar|rover05:44
*** bhavikdbavishi has quit IRC06:53
*** bhavikdbavishi has joined #zuul06:53
*** dpawlik has joined #zuul06:55
*** dpawlik has quit IRC07:16
*** dpawlik has joined #zuul07:19
*** tosky has joined #zuul07:35
*** saneax has joined #zuul07:37
<reiterative> masterpe I have installed Zuul on bionic, so feel free to ping me if you hit a problem. I also came close to persuading the WIP Gitlab driver to work, but still have some problems to resolve.  08:17
*** carli has joined #zuul08:31
*** jpena|off is now known as jpena08:54
*** hashar has joined #zuul09:19
*** bhavikdbavishi has quit IRC09:32
*** sshnaidm has quit IRC09:41
*** sshnaidm has joined #zuul09:42
<zbr> a review on https://review.opendev.org/#/c/705049/ would be really appreciated..  09:51
<zbr> is very easy to test via the dashboard, low risk, and improves the ability to compare the current run with previous ones.  09:52
*** jpena is now known as jpena|brb09:58
*** hashar has quit IRC10:20
<carli> hello, I'm looking for some stats about the number of deployments of openstack (being able to see it for specific projects like tripleo, devstack, or kolla-ansible would be even better) over a long period of time. I've been looking at the logstash on logstash.openstack.org and checking the information on the zuul pages; does anyone have an idea how I can get that information?  10:25
<mnaser> carli: i think what you're looking for is the openstack user survey  10:29
<carli> the user survey talks about what various actors use everywhere; it does not show the number of times openstack (in its various iterations) is deployed for CI, or at least I haven't really seen that information (you're talking about https://www.openstack.org/analytics, right?)  10:31
<carli> I've seen slides by Monty Taylor with some information (on this http://inaugust.com/talks/zuul.html#/openstack-scale), but I'm looking for a number of deployments done (for integration testing) to use as a statistic. I'm working on a paper with the use case of deploying openstack (kolla-ansible) faster than without our tool, and we have the deployment times (as we have deployed it both with  10:37
<carli> our tool and without), but to give it some perspective we would have liked to know how often openstack gets deployed over a certain period (for example a year or a month), to be able to estimate the time gained  10:37
*** Defolos has joined #zuul10:39
*** jpena|brb is now known as jpena10:43
*** hashar has joined #zuul10:44
<mnaser> carli: oh i see, then you can probably rely on the zuul ci build logs and go from there  10:48
openstackgerritMatthieu Huin proposed zuul/zuul master: Authorization rules: add templating  https://review.opendev.org/70519311:14
openstackgerritJan Kubovy proposed zuul/zuul master: WIP: Store unparsed branch config in Zookeeper  https://review.opendev.org/70571611:21
*** bhavikdbavishi has joined #zuul11:41
*** bhavikdbavishi has quit IRC11:46
*** hashar has quit IRC11:56
<carli> mnaser: i might not understand this correctly, but the zuul ci build logs only show current jobs; i don't really see how I can get information over time. I've looked at the Zuul API to see if there was a way to send a request for specifics, but the API only lists the currently available jobs  12:02
*** wxy-xiyuan has quit IRC12:05
<mnaser> carli: have a look at http://zuul.openstack.org/builds  12:33
<mnaser> carli: more specifically http://zuul.openstack.org/openapi for /api/builds  12:33
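(A minimal sketch of querying that endpoint for historical build counts; the job name below is a hypothetical placeholder, and the exact query parameters should be checked against the /openapi page mentioned above:)

    # count recent completed builds of a deployment job via the builds API
    curl -s "https://zuul.openstack.org/api/builds?job_name=kolla-ansible-deploy&limit=1000" \
      | python3 -c 'import json, sys; print(len(json.load(sys.stdin)))'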
*** jpena is now known as jpena|lunch12:35
*** electrofelix has joined #zuul12:44
*** vivobg has joined #zuul12:45
*** rlandy has joined #zuul12:50
<vivobg> Hi, all. Is there a way to flush the current, in-progress job queue in Zuul?  12:53
openstackgerritMatthieu Huin proposed zuul/zuul master: Authorization rules: add templating  https://review.opendev.org/70519313:00
openstackgerritMatthieu Huin proposed zuul/zuul master: Authorization rules: add templating  https://review.opendev.org/70519313:00
<fungi> vivobg: the easiest way to flush everything is to restart the scheduler  13:20
<fungi> vivobg: alternatively, you can take a queue dump and then transform that into a set of corresponding `zuul dequeue ...` rpc client commands  13:21
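(A minimal sketch of that dump-and-dequeue approach, with placeholder host/tenant/project/change values; the status endpoint and the `zuul dequeue` client syntax should be checked against the deployed version:)

    # capture the queue contents before touching anything
    curl -s https://zuul.example.org/api/tenant/example-tenant/status > status.json
    # then dequeue each queued item via the RPC client, e.g.
    zuul dequeue --tenant example-tenant --pipeline check \
        --project example/project --change 12345,1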
*** bhavikdbavishi has joined #zuul13:22
*** jpena|lunch is now known as jpena13:30
openstackgerritTobias Henkel proposed zuul/zuul master: Offload repo reset to processes  https://review.opendev.org/70682713:34
*** rfolco has joined #zuul13:42
*** avass has joined #zuul13:44
*** sgw has quit IRC13:52
*** webknjaz has joined #zuul13:59
*** zxiiro has joined #zuul14:08
Shrewsfungi: wonderful article14:13
fungihope i didn't get anything major wrong in it14:15
fungiand thanks!14:15
Shrewsthe eavesdrop link was like a flashback!14:17
Shrewsnice touch14:17
Shrewsbut it did bring up bad java-based memories for me when i looked through it  :(14:17
fungiheh, indeed14:17
fungiit was all a work in progress ;)14:18
* fungi couldn't help himself, sorry!14:18
Shrewsi'm still waiting (apparently) for that zuul option from corvus to randomly drop log entries14:18
*** Goneri has joined #zuul14:35
openstackgerritFelix Schmidt proposed zuul/zuul master: Implement basic github checks API workflow  https://review.opendev.org/70516814:39
openstackgerritFelix Schmidt proposed zuul/zuul master: Implement basic github checks API workflow  https://review.opendev.org/70516814:44
openstackgerritFelix Schmidt proposed zuul/zuul master: Implement basic github checks API workflow  https://review.opendev.org/70516814:47
openstackgerritTobias Henkel proposed zuul/zuul master: Cap virtualenv to <20.0.0  https://review.opendev.org/70686014:47
tobiashzuul-maint: virtualenv just broke tests so cap it for now ^14:48
openstackgerritFelix Schmidt proposed zuul/zuul master: Implement basic github checks API workflow  https://review.opendev.org/70516814:52
<vivobg> @fungi Thanks. We tried a scheduler service restart, but that didn't clear the queue. We had a queued job, removed the project from the tenant config, and restarted the scheduler; that caused all jobs to get a null reference and left zuul in a bad state. We ended up doing a complete redeploy, with fresh instances, to restore service.  14:56
<vivobg> we are on 3.10.2  14:57
<fungi> that's... odd. zuul keeps all queued builds in the scheduler's memory, so if the scheduler restarts it should lose track of them entirely  14:57
<fungi> unless whatever you're using to restart the scheduler is also re-enqueuing a saved dump of the previously-queued builds  14:58
Shrewstobiash: oh fun14:59
openstackgerritTobias Henkel proposed zuul/zuul master: Uncap virtualenv  https://review.opendev.org/70687115:02
tobiashcurious if this does the trick ^15:03
<fungi> tobiash: locally when i create an environment with virtualenv 20 it has setuptools included and i can import pkg_resources just fine: http://paste.openstack.org/show/789376/  15:09
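(A quick local repro of that check, assuming a stock virtualenv 20 install; paths are placeholders:)

    python3 -m virtualenv /tmp/venv-test                  # virtualenv 20.x
    /tmp/venv-test/bin/python -c 'import pkg_resources'   # succeeds outside of zuul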
fungii wonder what's different about how we're creating the ansible virtualenvs15:12
mordredfungi: we're calling python -m virtualenv ... is that maybe different?15:13
tobiashfungi: no idea, it's just a shot into the dark in a first try to avoid a deeper investigation ;)15:13
fungioh! i bet this happens only with distro-packaged python which splits pkg_resources out to a separate distro package15:13
* fungi tries something15:13
mordredfungi: ugh. distro-packaged-python again15:13
fungithis is going to be harder to test since i already have python3-pkg-resources installed on most of my systems15:15
fungiand uninstalling it wants to remove lots of other packages which depend on it15:15
fungido we have an example failure?15:15
* fungi goes hunting15:15
tobiashfungi: e.g. https://review.opendev.org/706827 (all test failures)15:21
tobiashlooks like in the ansible 2.6 venv even the setup module is broken with virtualenv 20.0.015:22
tobiashhttps://95195c19b679153113ec-4f1ab06e880eb7499aa5249ca5a98f04.ssl.cf1.rackcdn.com/706827/1/check/tox-py35/eac5637/testr_results.html15:22
tobiashseems like the ara callback needs it15:23
*** jamesmcarthur has joined #zuul15:50
*** carli has quit IRC15:56
openstackgerritMerged zuul/zuul master: Cap virtualenv to <20.0.0  https://review.opendev.org/70686015:57
clarkbMy Project Gating with Zuul talk has been accepted at linux fest northwest16:15
clarkbnot sure if anyone else will be there (maybe jlk)?16:15
*** chkumar|rover is now known as raukadah16:15
*** avass has quit IRC16:20
corvusclarkb: \o/16:27
*** mattw4 has joined #zuul16:34
*** jamesmcarthur has quit IRC16:40
*** jamesmcarthur has joined #zuul16:41
*** saneax has quit IRC16:44
*** tosky has quit IRC16:48
*** jamesmcarthur has quit IRC16:57
*** jamesmcarthur has joined #zuul16:57
*** Defolos has quit IRC17:22
corvusi tried to use the k8s module in an untrusted context and it said "Executing local code is prohibited"17:29
corvusi don't see the k8s module in zuul/ansible.  how did that hit that?17:29
corvusoh, is that via the action module ?17:33
*** evrardjp has quit IRC17:34
*** evrardjp has joined #zuul17:34
corvusso we would need to add a handle_k8s ?17:34
corvus(the "normal" action module i should say)17:35
*** sshnaidm is now known as sshnaidm|afk17:35
tobiashcorvus: yes17:39
openstackgerritJames E. Blair proposed zuul/zuul master: Allow more k8s actions in untrusted context  https://review.opendev.org/70694017:45
corvustobiash, mordred, tristanC: ^ can you take a look at that and tell me if that seems reasonable/secure?17:45
*** vivobg has quit IRC17:45
mordredcorvus: I think it seems reasonable17:47
tristanCcorvus: commented17:47
tristanCmordred: could you please check this operator specification update: https://review.opendev.org/70663917:48
tobiashcorvus: commented17:49
*** electrofelix has quit IRC17:49
tristanC(i meant to use the spec content to start the zuul-operator documentation)17:49
corvustristanC: that's a good idea, though we didn't do that for the other modules.  i don't know if we had a reason or not.  maybe we should think about doing that on all of them?  it might be easier to maintain?17:50
corvustobiash: good idea, i'll update17:50
<tristanC> corvus: we could re-purpose the beginning of the add_host run function to be re-used? though that may be a lot of work to ensure the change is backward compatible...  17:52
mordredcorvus: alternately to whitelisting - we've also discussed dropping the ansible-level exclusions and relying on bubblewrap - should we maybe write up a spec about that?17:53
tristanCunless there is an easy way to get the protected modules attributes per ansible version? last time i checked it involved doc parsing...17:53
tristanCmordred: it seems like we need to protect some secrets exposed in the bubblewrap, iirc the winrm key17:53
corvusmaybe we should start a spec just to track the problem, even if we think it's not ready17:54
tristanCcorvus: yes, good idea17:54
openstackgerritJames E. Blair proposed zuul/zuul master: Allow more k8s actions in untrusted context  https://review.opendev.org/70694017:55
mordredcorvus: ++17:55
*** openstackstatus has quit IRC17:57
*** openstack has joined #zuul18:01
*** ChanServ sets mode: +o openstack18:01
corvushrm, i think i see why you put job in there -- because it doesn't just bind the volume to the executor, it also adds it to the ro path...18:02
corvusor rw path18:02
<corvus> ok.  the word "job" confused me slightly at first, but i see its value and i don't have a better suggestion.  :)  18:03
mordredsame18:04
*** jpena is now known as jpena|off18:09
jlkclarkb: I hadn't planned on going to LFNW. I'll be at Deconstruct conference in Seattle the Thursday / Friday before18:32
*** sgw has joined #zuul18:34
*** bhavikdbavishi has quit IRC18:40
clarkbShrews: what triggers the cleanup of failed image build zk records?19:07
clarkbwe've had a sad new cloud due to networking problems and that results in our zk having many failed records for image uploads19:07
openstackgerritMerged zuul/zuul master: Allow more k8s actions in untrusted context  https://review.opendev.org/70694019:10
Shrewsclarkb: i think it's periodic for some period of N19:17
clarkbok some of these records are more than a week old. Not sure how large of an N we use19:19
Shrewsclarkb: looks like every minute, so it's strange there would be lots of FAILED records19:19
clarkbhrm I wonder if this is fallout from switching servers19:20
<clarkb> maybe only the old server could have deleted them (though it should have deleted them by now as they are way more than a minute old)  19:20
Shrewsclarkb: it *might* be possible that if there is a failed record, but no corresponding upload, that the ZK record remains19:20
Shrews(just a guess)19:21
clarkbthere are 7844 such records19:21
Shrewsoh, there was a server switch?19:21
clarkbShrews: yes, but yesterday and these records are up to a week old19:21
clarkbShrews: the old server couldn't talk to the glance api of the new cloud region so we built a new server that could19:22
<clarkb> all of the failed records (the 7844 of them) seem to have stuck around from the old server that was having api access trouble  19:22
Shrewsclarkb: hrm, can't debug that code path in my head then. i'd have to dig19:23
Shrewsclarkb: nb03 ?19:23
clarkbShrews: yes19:23
clarkbShrews: note these failures were due to tcp not being able to create a connection for the glance api requests19:25
Shrewsclarkb: looks like nb03 is super busy deleting atm19:25
Shrewsoh , no. that was a couple of hours ago19:25
clarkbthat 7844 number seems pretty stable over the last little while19:26
clarkband I think the only things that need deleting are the failed zk records19:26
openstackgerritJames E. Blair proposed zuul/zuul master: Allow template lookup in untrusted context  https://review.opendev.org/70696319:27
corvustobiash, tristanC, mordred: ^ another access barrier i ran into that i think we can open up19:27
clarkbis jinja turing complete?19:28
clarkbthat might be a little bit more dangerous (though I suppose if ansible is evaluating them on the executor side for all jobs then it doesn't matter, and i'm not sure where it does that evaluation)19:28
Shrewsclarkb: oh, i bet we didn't copy over the builder_id.txt file19:28
clarkbShrews: probably not, but these records should've been deleted by the original builder a week ago if the timeout is a minute19:29
clarkb(I don't think the server switch is the problem here)19:29
Shrewswell, if the old server couldn't delete the uploads, the records wouldn't be deleted19:29
clarkbthere were no uploads19:30
clarkbbeacuse the glance api was completely unreachable19:30
clarkb(tcp did not work)19:30
Shrewsthen I return to my original suspicion that if there is no upload, the zk record remains19:30
Shrewswhich is something we should probably fix19:31
<clarkb> it is possible the server switch would prevent those records from being deleted now if it was working otherwise. But I think the underlying bug is probably something like ^ and was present in the original server  19:31
<corvus> clarkb: i think we already allow templating on the executor, we just don't allow it in a lookup plugin.  i think the side-effects that can be performed with a jinja template are limited (ie, i don't think you can read/write arbitrary files, other than using these lookup plugins (filters))  19:32
*** jamesmcarthur has quit IRC19:34
*** igordc has joined #zuul19:35
mordredcorvus: yeah - I think you're right about that19:41
*** jamesmcarthur has joined #zuul19:43
mordredShrews, clarkb: I could imagine the situation being that there was no upload (because tcp errors) - which is, to nodepool, a retriable condition for the most part. that means it's actually most of the time the _right_ thing for nodepool to do to keep the record19:44
mordredexcept for the case where a builder is deployed that is not able to talk to its clouds over TCP - which isn't a circumstance that tends to emerge under normal operating conditions19:44
clarkbmordred: but we get a new record when it retries right19:45
clarkbwe indexby attempt19:45
mordredhrm. yeah - if that's what we're keeping around I can see us fixing that - was more pointing out that nb not being able to delete the thing that's in-flight that needs to go away is a decent reason to keep the record around - so that it knows it needs to delete the remote object19:46
mordredor, rather, that our current failure is somewhat pathological - so figuring out the "right" way to deal with it might be ... complicated19:47
Shrewsdid the provider name change?19:49
Shrewsfrom linaro-us to linaro-london maybe?19:50
clarkbno, they are different regions19:50
Shrewsk19:50
clarkb(and both continue to exist)19:51
*** jamesmcarthur has quit IRC19:52
*** igordc has quit IRC19:53
*** jamesmcarthur has joined #zuul19:57
Shrewsclarkb: mordred: ok, i think i might have a handle on what happened. the old server was continually failing, generating lots of failed records (which would have automatically been cleaned up later if the upload condition corrected itself). The server rebuild did not carry over the unique nodepool builder id file (https://nb03.openstack.org/images/builder_id.txt), so when the new server goes to cleanup, it sees those records and says (oh, these20:00
Shrewsaren't my uploads, so i can't safely remove them). I think cleaning up will be a manual process.20:00
Shrewsi'll code up a script that checks for the old server ID and removes the records20:01
Shrewsi don't think we can safely program nodepool to do that for us20:01
<mordred> Shrews: so if we'd copied the builder_id.txt file things would have cleaned up naturally after themselves  20:01
<Shrews> mordred: yes  20:01
<Shrews> i *thought* we had that documented somewhere, but i am failing to find it  20:02
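(A sketch of the copy step that would have preserved the builder identity; the /opt/nodepool/images path is an assumption, the real location comes from the builder's images-dir setting:)

    # run before starting nodepool-builder on the replacement host, so cleanup
    # still recognizes the old server's uploads as its own
    scp old-nb03:/opt/nodepool/images/builder_id.txt /tmp/builder_id.txt
    scp /tmp/builder_id.txt new-nb03:/opt/nodepool/images/builder_id.txt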
mordredcool. i agree - I don't think there's a good safe way to update nodepool to do that automatically\20:02
Shrewsmordred: now, we *could* have that file contain multiple IDs...20:03
Shrewsoh, maybe not20:03
Shrewsb/c then it wouldn't know which one to use going forward... though i suppose it could just pick whichever20:03
Shrewsor we could tag one as primary, others as alternate20:04
* Shrews waves arms wildly in air in true mordred fashion20:05
Shrewsanyway, will write that script now...20:05
Shrewshrm, i may have to take that evaluation back. we also compare hostname20:12
Shrewsand that's the same. the mystery deepens20:12
tobiashcorvus: I have a comment on 70696320:15
mordredtobiash: oh - good catch20:31
corvustobiash: wow, thanks, that is indeed what went wrong :)20:42
openstackgerritJames E. Blair proposed zuul/zuul master: llow template lookup in untrusted context  https://review.opendev.org/70696320:44
openstackgerritJames E. Blair proposed zuul/zuul master: Allow template lookup in untrusted context  https://review.opendev.org/70696320:45
corvuslet's see if that looks right20:45
mordredcorvus: lgtm20:50
mordredcorvus: of course, I missed the symlink thing before, so it's relaly mostly important that it looks good to tobiash20:50
*** rh-jelabarre has joined #zuul20:51
*** jamesmcarthur has quit IRC20:53
<ianw> $ nodepool image-list | grep linaro- | grep failed | wc -l  20:54
<ianw> 7917  20:54
<ianw> | 0000086134 | 0000000622 | linaro-london        | ubuntu-xenial-arm64  | None                            | None                                 | failed    | 07:18:08:51  |  20:54
<ianw> e.g. ^  20:54
ianwdoes anyone know how to get rid of these?20:54
clarkbianw: ya was talking to shrews and mordred about it earlier, and I think shrews thinks it may be a bug in nodepool20:57
clarkbianw: the problem is that the failed uploads should be cleaned up once successful ones happen (we now have successful uploads but no cleanup)20:57
Shrewsftr, i have absolutely NO idea why these failure records aren't being deleted right now20:57
ianwahh, sorry, ok checking scrollback21:00
*** jamesmcarthur has joined #zuul21:00
Shrewsthis is exceedingly frustrating to debug21:05
Shrewsclarkb: ianw: you know what... i *think* nb03 might be stuck trying to delete an instance21:12
Shrewsthe very last log entry is: 2020-02-10 17:18:21,993 INFO nodepool.builder.CleanupWorker.0: Deleting image build debian-buster-arm64-0000080896 from linaro-london21:13
mordredShrews: and thus can't do the cleanup, beause it's stuck?21:13
Shrewsclarkb: ianw: and it still shows as 'active' with openstack --os-cloud=linaro-london --os-region-name=London image show c45a9a08-1e75-4647-beeb-5e4a3a74f8c0 (which is the corresponding instance)21:13
clarkbShrews: the london delete failures are a known issue and part of the motivation for allowing us to delete files on disk early (as soon as upload records transition to deleting, before they actually delete)21:14
Shrewsmordred: yeah21:14
*** jamesmcarthur_ has joined #zuul21:14
clarkbwouldn't that be handled by a separate thread though as they are in different providers?21:14
clarkb(I thought cleanups were per provider, but I could be wrong about that)21:14
ianwalso ... nb03 only started fresh yesterday; so *it* is having issues deleting in linaro london?  i.e. linaro london glance is borked?21:15
clarkbianw: yes, but so did the old nb0321:15
Shrewsclarkb: we have a single cleanup thread for all providers21:15
clarkbianw: I've also manually tried to delete these images in the past21:15
Shrewsand the image delete is not a new thread21:15
clarkbah ok21:16
ianwclarkb: ok ... i hadn't considered that.  i can loop in with kevinz on that21:16
Shrewsso no further cleanup can happen if that request gets "stuck"21:16
clarkbI've sent email to kevinz about those in the past21:16
clarkblet me see if I can find it21:16
clarkbhrm did I only mention it on irc?21:16
*** jamesmcarthur has quit IRC21:17
clarkbianw: in any case I believe all those stale images in london region can be deleted, nodepool and I have tried in the past but they just won't delete21:18
clarkbit does make me wonder if some other tenant is using them as BFV or something21:18
mordredclarkb: we didn't list them as public ... so other tenants shouldn't be able to do that21:19
Shrewsbtw, i said "because it can't delete an instance" but meant "image", but that's probably clear now..  sorry21:19
clarkbShrews: yup your pasted example clarified it :)21:20
clarkbShrews: we should probably update nodepool to be greedy there and not short circuit?21:20
Shrewsclarkb: eh?21:21
clarkbShrews: after the image delete fails for that stale image, continue trying to delete the next things21:22
<clarkb> and eventually it should get to deleting those failed uploads right?  21:22
Shrewsclarkb: the problem is it is NOT failing21:22
ianwclarkb: does it fail, or not fail but also not delete?21:22
Shrewsif it failed, it would continue on deleting stuff21:22
clarkbShrews: so we only try to delete one thing per cleanup run?21:22
Shrewsno21:23
*** rfolco has quit IRC21:23
clarkbthen why doesn't it continue to the next upload records?21:23
mordredin each run we try to delete the things - but we delete them serially21:23
mordredclarkb: because it's hung21:23
Shrewseach run, it collects all the cruft that needs cleaned up and deletes each in turn. but this delete request is not returning, apparently21:23
clarkbhrm when I tried to manually delete those in the past it did return21:23
clarkbit just never actually deleted things21:23
clarkbmaybe osc and shade do it differently though21:24
mordredclarkb: there are many ways in which cloud APIs can break21:24
fungithat could be openstack's official slogan21:24
<mordred> clarkb: they _definitely_ do things differently. although gtema does have a patch up to switch osc to using sdk for image operations  21:24
mordredat which point they'll be much more similar21:25
ianwhttp://paste.openstack.org/show/789394/ <- linaro-london's image list right now21:25
Shrewsyep. there should be a max of 2 per image21:26
Shrewsthat list was my clue something was up  :)21:26
ianwi'm trying to manual delete 6b195f5f-abb5-445a-a248-1da30ef3a26b21:27
clarkbya we normally see that in the BFV clouds which is why I theorized that could be related here21:27
ianwit is, i would say, hanging21:27
clarkbbecause some instance will hang on to the image with its volume preventing the image from deletin21:27
Shrewsianw: are you using osc or an sdk script?21:29
clarkbunrelated I see the problem with the virtualenv update21:30
fungioh?21:31
ianwShrews: osc21:31
Shrewsianw: cool, so broken in both code paths21:31
clarkbfungi: https://virtualenv.pypa.io/en/latest/cli_interface.html#section-seeder seems new virtualenv is more conservative about installing new packages?21:32
<clarkb> hrm actually rereading that, it is not going to download a newer setuptools but should bundle a version?  21:32
fungii mean, when i use virtualenv 20 to create and environment, it preinstalls setuptools (45.x)21:33
ianwShrews: yep, running with debug it's just a "curl -g -i -X DELETE -H 'b'Content-Type': b'application/octet-stream'' -H 'b'X-Auth-Token'" that is never returning.  we'll have to take it up with the cloud i guess21:33
fungier, create AN environment21:33
Shrewsall: fwiw, i am taking the rest of the week off in a use-it-or-lose-it vacation scheme i seem to be mixed up in21:33
clarkbShrews: seems like you should use it then21:33
mordredShrews: are you going to do anything fun?21:34
fungivacation ponzi schemes are the worst21:34
Shrewsmordred: as soon as i requested the days, the weather decided that it would rain for the entirety of the duration.... so, who knows21:34
clarkbfungi: ya I think I misread the don't download default as meaning no install but that just means don't update by default but use the bundled setuptools instead21:34
clarkbfungi: do you have a pkg_resources when you do that?21:35
fungiit's part of stdlib21:35
fungibut yes21:35
mordredfungi: go get 5 people to give you some of their vacation, and give me a small percentage of what you get- then get those people to do the same - and by the time the scheme has worked its magic, you'll have permanent vacation!21:35
mordredShrews: "yay"!21:35
corvusmordred, tristanC, tobiash, paladox|UKInEU: gerrit's zuul is self-deploying!  https://gerrit-zuul.inaugust.com/t/gerrit/build/ec3af27521ba4c759b9c99c142074905/console21:35
* fungi is on permanent vacation, aerosmith style21:35
clarkbfungi: pkg_resources is part of setuptools not stdlib21:35
clarkbfungi: thinking that maybe the bundled setuptools lacks that potentially21:36
mordredcorvus: \o/21:36
fungiclarkb: oh, but installed as a separate top-level module from the same package?21:36
paladox|UKInEUcorvus \o\o/21:36
clarkbfungi: ya21:36
*** rfolco has joined #zuul21:37
clarkbI am able to import pkg_resources on a virtualenv 20.0.1 installed venv though21:37
<fungi> indeed, lib/python3.8/site-packages/pkg_resources within the virtualenv i created is a symlink to ~/.local/share/virtualenv/seed-v1/3.8/image/SymlinkPipInstall/setuptools-45.1.0-py3-none-any/pkg_resources  21:41
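(The symlinks are easy to spot in a fresh environment; a sketch, with the venv path and python version as placeholders:)

    ls -l /tmp/venv-test/lib/python3.8/site-packages/ | grep ' -> '
    # pkg_resources, setuptools, pip and wheel point into
    # ~/.local/share/virtualenv/seed-v1/... instead of being real copies,
    # which is why they vanish once bwrap hides the home directory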
clarkbfungi: mine too, could this be a simple permissions thing?21:44
fungimayhaps21:44
clarkbya I bet that is it because bwrap21:44
clarkbif we have symlinks in the bwrap context to things that aren't bind mounted in they disappear21:44
<fungi> i wouldn't be shocked to discover that's the case anyway  21:44
clarkbwe could run it with allows copy set to true then I bet it would work21:45
mordredoh  - yeah21:45
*** jamesmcarthur_ has quit IRC21:45
*** Goneri has quit IRC21:46
clarkbI think new virtualenv is trying to be smart and dedup common libs like setuptools and pkg_resources, but when we mount with bwrap we break that21:46
fungipip's going to want to use ~/.local for caching as well, so getting that working might help performance anyway21:46
<clarkb> fungi: well the idea here is that we preinstall everything so you don't do any of those installs at job run time  21:46
fungiunless that opens a security risk21:46
fungiahh, yeah21:47
mordredmind blown at the virtualenv deduping21:47
fungiwe definitely don't want to share data cached by one build in another21:47
fungithat would be a serious cache poisoning hole21:48
mordredthat actually _completely_ violates assumptions I operate under on how virtualenv works - glad I now know21:48
* mordred edits system libraries with print statements inside of virtualenvs because it's "safe" to do so21:48
clarkbthat poses two new problems for us. The first is that old virtualenv and new virtualenv may not have the same configuration/arguments so we will need to detect that potentially. The other is that we probably do want symlinks of python itself but not the libs21:49
fungimordred: i just `python3 -m venv` normally and have only touched virtualenv locally for python2.7 things since ages21:49
mordredat least it only does that for the pip/setuptools basics21:49
fungithe venv module does not symlink foo/lib/python3.8/site-packages/pkg_resources to anywhere special, i just confirmed21:50
<clarkb> fungi: ok so maybe we just use the venv module. I don't remember why it wasn't used initially. I seem to recall there was a reason though  21:50
mordredyeah - there's some $reason right?21:51
clarkbya I remember it coming up in reviews21:51
clarkbI want to say it had to do with distros like ubuntu not actually packaging it properly21:51
openstackgerritMerged zuul/zuul master: Allow template lookup in untrusted context  https://review.opendev.org/70696321:51
clarkbbut that memory is fuzzy21:51
<clarkb> I wonder, if we set it to download those libs, if they would go in the common lib dir or in the virtualenv directly  21:52
mordredyeah - I think that's right - like, distro python excludes venv or something21:52
fungiclarkb: python2.7 maybe?21:52
<fungi> and yes, ubuntu needs python3-venv installed because they separate it out (because debian does)  21:53
clarkbfungi: well zuul requires python3. Ansible can run under both. I thought venv can create a python2 venv it just needs to be started from python321:53
fungisame for python3-pip21:53
mordredyup21:53
fungiand python3-pkg-resources21:53
fungiwhich they separate out of python3-setuptools21:53
fungianyway, it looks to me like virtualenv is only symlinking easy_install, pip, pkg_resources, setuptools and wheel21:54
fungiat least by default21:54
clarkbya its the seed packages21:54
mordredyah21:54
mordredso it's a known set21:55
fungii expect if you specify other stuff with --seed-packages those will get the same treatment21:55
clarkbthere are two --seeder options too21:55
clarkbdefault is app-data the other is pip21:55
clarkbI wonder if we used pip if it would install normally21:55
<clarkb> because pip wouldn't know about the weird bundling and symlinking  21:55
* clarkb tests21:55
fungiindeed, that solves it21:56
<fungi> --seeder=pip gets rid of the symlinks  21:56
<clarkb> yup confirmed locally  21:56
<fungi> --seeder=app-data makes them come back  21:56
<fungi> so that or python3 -m venv may be the simplest workarounds  21:57
<clarkb> ya  21:57
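(The two workarounds side by side, as a sketch; the venv path is a placeholder:)

    # 1) keep virtualenv, but seed with real copies via pip instead of app-data symlinks
    python3 -m virtualenv --seeder=pip /var/lib/zuul/ansible-venv
    # 2) or use the stdlib venv module instead
    python3 -m venv /var/lib/zuul/ansible-venv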
fungiexcept you need to detect whether virtualenv has a --seeder option21:57
clarkbold virtualenv has no seeder option21:57
clarkbjinx21:57
mordredI think I like --seeder=pip of the two21:57
mordredJOY21:57
clarkbwe can assert we need current virtualenv to deal with that21:58
mordredI mean - we can just specify a min in requirements yeah?21:58
mordredclarkb: jinx21:58
fungiwfm21:58
fungii don't think anyone's doing distro packaging of zuul yet, right?21:58
clarkbfedora people may be?21:58
fungithough maybe they're using zuul on platforms where they want to provide virtualenv via system packages21:58
mordredI'm pretty sure SF are deploying from packages they build21:59
fungiin which case they're going to need to package a bleeding-edge virtualenv (or switch to managing ansible environments manually?)21:59
mordredI'm not sure how that interacts with multi-python virtualenv21:59
<clarkb> virtualenv --version is valid in both new and old virtualenv  22:00
<fungi> at least zuul doesn't insist on doing that for you if you want to supply your own  22:00
<clarkb> of course the format is different in both :)  22:00
<mordred> s/multi-python virtualenv/multi-ansible virtualenv/  22:00
<clarkb> but we could switch on that  22:00
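(A rough sketch of switching on the two output formats; the exact strings are from memory, pre-20 printing just the version number and 20.x printing "virtualenv <version> from <path>", so they would need verifying:)

    ver="$(virtualenv --version 2>&1)"
    case "$ver" in
      virtualenv\ *) echo "virtualenv >= 20, --seeder is available" ;;
      *)             echo "pre-20 virtualenv" ;;
    esac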
mordredclarkb: I tried virtualenv --version a little bit ago and it did not work22:00
clarkbmordred: it is working for me22:00
clarkbmordred: what version of virtualenv do you have? mine are 20.0.1 and 16.7.522:01
mordredoh - no - I misspelled it22:01
<mordred> apparently --verison doesn't work  22:01
fungiooh i know, we could write a little python utility to ask pkg_resources to return the virtualenv version and parse that... just need a virtualenv to install it in22:01
fungimmm, --venison22:01
clarkbfungi: pkg_resources is really slow too22:01
mordredhow about we start with a requirements pin and --seeder=pip22:01
mordredsince that's the most direct expression of what we _want_ to have happen here22:01
fungiclarkb: sure, i was mostly creating a catch-22 scenario22:01
clarkbmordred: that seems reasonable22:02
clarkbshould I update tobiash's chnage?22:02
mordredbut let's wait for the SF folks to review it and tell us if it makes things unworkable22:02
mordredbefore landing22:02
fungiyep, wfm22:02
fungitristanC: ^ when you're around22:02
clarkbhttps://review.opendev.org/#/c/706871/1 is the change. I'll push up a new ps22:02
mordredclarkb: and yeah - go ahead22:02
corvusftr, i have caught up on this issue in scrollback and.  wow.22:03
mordredcorvus: ikr?22:04
fungiit is quite wow-inducing22:05
fungito be fair, the virtualenv maintainers announced a 20 beta a couple weeks ago complete with release notes and asked for folks to test it out22:06
fungii didn't consider at the time the fact that zuul was using it to manage ansible environments22:07
fungithough i still feel like eventually switching to the stdlib venv module might be a friendlier solution22:07
openstackgerritClark Boylan proposed zuul/zuul master: Uncap virtualenv  https://review.opendev.org/70687122:07
<clarkb> I've tried to capture what the issue is in the commit message there  22:08
corvuszuul-maint: remote:   https://review.opendev.org/706989 Add gcp-authdaemon to Zuul22:11
*** dpawlik has quit IRC22:14
*** Defolos has joined #zuul22:15
*** dmsimard is now known as dmsimard|off22:17
corvusmordred: i'd like to install 'kubectl' and 'oc' in the zuul-executor image (because i think [especially since it's a *container* image] those commands are very likely to prove useful to people using the image).  what do you think is the best way to do that?22:27
openstackgerritJames E. Blair proposed zuul/zuul master: Install kubectl/oc into executor container image  https://review.opendev.org/70699522:44
corvusmordred, tristanC: ^22:44
*** openstackgerrit has quit IRC22:46
*** jamesmcarthur has joined #zuul23:22
*** openstackgerrit has joined #zuul23:29
openstackgerritJames E. Blair proposed zuul/zuul master: web: link to index.html if index_links is set  https://review.opendev.org/70558523:29
*** rh-jelabarre has quit IRC23:37
*** jamesmcarthur has quit IRC23:39
openstackgerritJames E. Blair proposed zuul/zuul master: Install kubectl/oc into executor container image  https://review.opendev.org/70699523:43
*** jamesmcarthur has joined #zuul23:50
*** jamesmcarthur has quit IRC23:52
*** Defolos has quit IRC23:53
openstackgerritJames E. Blair proposed zuul/zuul master: Install kubectl/oc into executor container image  https://review.opendev.org/70699523:53
*** sgw has quit IRC23:56

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!