Wednesday, 2018-12-19

<corvus> SpamapS: :( i think option #2 from earlier may be the best thing for now  [00:01]
<clarkb> you can set the merge strategy to prefer the master side  [00:02]
<clarkb> or the prod side, depending  [00:02]
<clarkb> (though not sure if you can do that on a per-file level)  [00:02]
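[On the per-file question above: git can prefer one side wholesale with a strategy option, and per file with a merge driver. A minimal sketch; the branch and file names are illustrative, not from the discussion:]

    # prefer the current branch's side for all conflicting hunks
    git merge -s recursive -X ours prod
    # per-file: a driver whose command is `true` always keeps "our" version
    git config merge.keep-ours.driver true
    echo 'config/prod.yaml merge=keep-ours' >> .gitattributes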
<corvus> evrardjp: i'm warming to your idea of a pipeline flag to ignore files -- i think it might be good to make an exploratory change and see how it looks. are you interested in hacking on that, or did you just want to get the idea out there?  [00:03]
<clarkb> corvus: Shrews: do you know what the time from a node going ready to the node being unlocked in nodepool is spent doing?  [00:15]
<clarkb> in our integration job that takes a full 4 minutes  [00:15]
<clarkb> and just over 3 minutes in another case  [00:16]
<clarkb> http://logs.openstack.org/44/626044/1/check/nodepool-functional-py35-src/670251d/ shows the devstack service trimming should work  [00:18]
<corvus> clarkb: that's zuul getting around to starting the job. scheduler delay, basically.  [00:18]
<clarkb> thinking that may make things more reliable overall just by trimming the total number of moving parts using cpu, disk, and memory  [00:18]
*** rlandy has quit IRC  [00:18]
<clarkb> corvus: but there is no zuul in the functional job  [00:18]
<corvus> clarkb: oh, you said *unlocked*, sorry  [00:18]
<corvus> clarkb: it should happen very quickly, so it's probably thread/cpu contention  [00:19]
<clarkb> ok, https://review.openstack.org/626044 is an attempt at reducing the cpu contention on the host at least. it's not a massive reduction, I don't think, but it should be something  [00:22]
<clarkb> other than that we may want to look at the path from being marked ready to unlocked to see if we have any obvious bugs, because yeah, I would expect it to be much quicker  [00:23]
<clarkb> another idea I have thinking about this: we may want to do a multinode job where one host runs the cloud and the other host runs nodepool and zk  [00:23]
<clarkb> (but that is probably more of a last resort if things continue to be "weird")  [00:23]
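[The trimming change itself isn't reproduced here, but devstack services are normally switched off with disable_service lines in the job's local.conf; a hypothetical sketch of the kind of trimming discussed — the actual service list in 626044 may differ:]

    # nodepool's functional test doesn't need console or dashboard services
    disable_service n-novnc
    disable_service horizon
    # clarkb mentions below he also considered disabling etcd
    disable_service etcd3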
*** quiquell|off has quit IRC  [00:29]
<clarkb> mordred: related to https://review.openstack.org/626044, I wonder if the sdk jobs would potentially find that useful too?  [00:30]
<clarkb> mordred: not sure if the sdk/shade tests care about vnc or the metadata service  [00:31]
<mordred> clarkb: sdk/shade tests do not care about vnc or the metadata service  [00:35]
<mordred> clarkb: so yeah - totes  [00:36]
<mordred> clarkb: https://review.openstack.org/626058 adds the same to sdk - thanks  [00:42]
<clarkb> mordred: as i noted in 626044's commit message, I also wanted to disable etcd, since nodepool doesn't use cinder and cinder is the only thing using etcd, I think. but at the same time any service could start using etcd at any time  [00:47]
<clarkb> does make me think the right way to add dependencies is when you actually depend on them, though :)  [00:48]
<mordred> clarkb: go figure  [00:53]
<clarkb> tobiash: https://review.openstack.org/#/c/611920/ is the geard ipv6 change I referred to before  [00:55]
<tristanC> corvus: what was your suggestion for https://review.openstack.org/535549 again?  [01:22]
<tristanC> clarkb: corvus: Shrews: we could expand the connection_port or rename it, yes (this was suggested on the k8s review, but it got merged with connection_port...)  [01:32]
<tristanC> would you prefer it to be named connection_details then?  [01:33]
<clarkb> I think it will help communicate the intent to someone who ends up debugging the nodeset data at a low level  [01:33]
<tristanC> what about having new znode keys for each detail?  [01:36]
<clarkb> connection_port, connection_namespace, connection_user, and so on?  [01:37]
<clarkb> that may also work, though the data structure might get noisy depending on how many attributes future systems depend on  [01:38]
<tristanC> i agree, but then shouldn't we move cloud, az, region, image_id to connection_details too?  [01:40]
<clarkb> or maybe into some other tree. they don't really matter when it comes to making a connection (of course at this point we're probably overthinking the color of the shed)  [01:41]
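[Purely to illustrate the two shapes being weighed; neither is necessarily what merged, and the field values are made up. A node znode holds JSON, so the options look roughly like:]

    {"connection_type": "kubectl", "connection_port": 8443, "connection_namespace": "zuul"}    (flat keys)
    {"connection_type": "kubectl", "connection_details": {"port": 8443, "namespace": "zuul"}}  (nested dict)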
*** bhavikdbavishi has joined #zuul  [02:39]
<openstackgerrit> Ian Wienand proposed openstack-infra/nodepool master: [wip] Use bindep for devstack jobs  https://review.openstack.org/626068  [02:39]
<openstackgerrit> Ian Wienand proposed openstack-infra/nodepool master: [wip] Add dogpile.cache master to the -src tests  https://review.openstack.org/625457  [03:30]
*** bhavikdbavishi1 has joined #zuul  [03:54]
*** bhavikdbavishi has quit IRC  [03:56]
*** bhavikdbavishi1 is now known as bhavikdbavishi  [03:56]
*** chkumar|out is now known as chandankumar  [04:12]
<Shrews> clarkb: corvus: if it's a multinode request, a ready node will remain locked until all nodes in the request are ready  [04:14]
<Shrews> clarkb: corvus: it could also be delay in processing the finished request handlers. iirc, tobiash did some work around that recently  [04:19]
<Shrews> to improve it, that is  [04:19]
*** bjackman has joined #zuul  [04:23]
<clarkb> it's a single-node request  [04:24]
<clarkb> just the min-ready request in the functional job  [04:25]
*** evrardjp_ has joined #zuul  [05:05]
*** evrardjp has quit IRC  [05:07]
<openstackgerrit> Tristan Cacqueray proposed openstack-infra/zuul master: Fix container job documentation typo  https://review.openstack.org/626093  [05:38]
<openstackgerrit> Merged openstack-infra/nodepool master: Run devstack zookeeper on tmpfs  https://review.openstack.org/626038  [05:46]
*** evrardjp has joined #zuul  [06:01]
*** evrardjp_ has quit IRC  [06:03]
*** mgagne has quit IRC  [06:35]
*** mgagne has joined #zuul  [06:40]
*** ianw is now known as ianw_pto  [06:43]
*** quiquell has joined #zuul  [07:13]
<evrardjp> corvus: I was first asking if that idea was valid; now I think I'd love doing it, as it might be low impact/low priority, but I will probably require some help, as I've never contributed here  [07:40]
*** pcaruana has joined #zuul  [08:03]
*** bhavikdbavishi has quit IRC  [08:10]
<openstackgerrit> Jens Harbott (frickler) proposed openstack-infra/nodepool master: [wip] Use bindep for devstack jobs  https://review.openstack.org/626068  [08:17]
*** pcaruana has quit IRC  [08:24]
*** bhavikdbavishi has joined #zuul  [08:25]
*** pcaruana has joined #zuul  [08:33]
*** pcaruana has quit IRC  [08:41]
*** jpena|off is now known as jpena  [08:47]
*** dkehn has quit IRC  [09:08]
*** bjackman has quit IRC  [09:17]
*** bjackman has joined #zuul  [09:20]
*** hashar has joined #zuul  [09:34]
*** bhavikdbavishi has quit IRC  [09:52]
*** gtema has joined #zuul  [10:15]
*** bjackman has quit IRC  [10:19]
<openstackgerrit> Sorin Sbarnea proposed openstack-infra/zuul master: Adds ServerAliveInterval to ssh_args to prevent frozen connections  https://review.openstack.org/626138  [10:21]
*** bjackman has joined #zuul  [10:21]
*** ssbarnea|rover has quit IRC  [10:34]
*** ssbarnea|rover has joined #zuul  [10:35]
*** hashar has quit IRC  [10:44]
*** bjackman has quit IRC  [11:52]
*** bjackman has joined #zuul  [11:52]
*** neilsun has joined #zuul  [11:56]
*** rfolco has quit IRC  [11:59]
*** rf0lc0 has joined #zuul  [11:59]
*** bhavikdbavishi has joined #zuul  [12:06]
*** jpena is now known as jpena|lunch  [12:29]
*** hogepodge has quit IRC  [12:29]
*** hogepodge has joined #zuul  [12:30]
<ssbarnea|rover> jhesketh: can you please have a look at https://review.openstack.org/#/c/626138/ ?  [12:35]
<jhesketh> ssbarnea|rover: seems reasonable to me :-)  [12:37]
*** bhavikdbavishi has quit IRC  [12:42]
<ssbarnea|rover> jhesketh: there is also the GSS part, and eventually a different timeout than 30s, which could prove to be too small. tripleo is using 270s.  [12:45]
<ssbarnea|rover> also i see no option for retries there; we use retries=3, which could prove very useful for occasional networking hiccups and also allows us to trace it in logstash.  [12:47]
<ssbarnea|rover> anyway, its current form should be enough for the current issue.  [12:48]
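[The knobs under discussion are stock OpenSSH and Ansible settings; in the executor's ansible.cfg they would look roughly like this — a sketch mirroring the review, not necessarily what merged:]

    [ssh_connection]
    retries = 3
    # drop the connection after 3 unanswered keepalives, 30s apart
    ssh_args = -o ServerAliveInterval=30 -o ServerAliveCountMax=3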
*** bjackman has quit IRC  [12:48]
<tobiash> ssbarnea|rover: you mean you want to configure the number of retries?  [12:50]
<ssbarnea|rover> tobiash: it's more of a question: would it make sense?  [12:50]
<tobiash> we try to avoid config options where there is no need to, if we can come up with a default that should work everywhere  [12:51]
<tobiash> and I think the default of 3 retries is sane?  [12:51]
*** jpena|lunch is now known as jpena  [13:01]
<ssbarnea|rover> tobiash: more than one, and less than a big value that could slow things down a lot in case of network downtime. probably any value between 2-5 would be ok.  [13:02]
*** rlandy has joined #zuul  [13:03]
<ssbarnea|rover> the idea is to retry at least once, so we can see if doing it again fixed the problem. if we have a direct failure, we cannot really know if it was recoverable or not.  [13:03]
<ssbarnea|rover> tobiash: anyway, let's just do the proposed change for the moment and see how it behaves. it should take no more than two days to get enough data on whether this fixes the POST timeouts.  [13:12]
<tobiash> sounds good  [13:12]
<ssbarnea|rover> out of curiosity, how long does it take from the moment a zuul change is merged until it reaches the openstack production instance?  [13:13]
*** gtema has quit IRC  [13:14]
<tobiash> It will be automatically installed, but to make it effective it needs to be restarted manually  [13:18]
<tobiash> However, I don't know how long it takes until the installation has happened  [13:18]
*** dkehn has joined #zuul  [13:49]
*** ssbarnea|rover has quit IRC  [14:40]
*** ssbarnea has joined #zuul  [14:40]
*** hashar has joined #zuul  [14:40]
*** bhavikdbavishi has joined #zuul  [14:42]
<tobiash> mordred: is this another munch problem in openstacksdk? http://paste.openstack.org/show/737739/  [14:47]
<tobiash> openstacksdk version is 0.21.0  [14:48]
<tobiash> Shrews: ^  [14:51]
<pabelanger> tobiash: I was getting that failure too, but want to say you thought it was also fixed  [15:00]
<pabelanger> ssbarnea: tobiash: we've set retries in ansible.cfg on the executor to 3 by default  [15:01]
<ssbarnea> pabelanger: perfect. so let's merge that patch and hope that this will fix the POST failures.  [15:02]
<tobiash> pabelanger: yes, there were some munch issues that were fixed, but I don't remember anymore if flavor.id was exactly this one  [15:06]
<tobiash> at least it fails for me now with the latest openstacksdk  [15:06]
<pabelanger> tobiash: yah, I had a patch locally to use flavor['id'] but reverted back to 0.18.0 (I think). would be good to confirm with mordred on where to properly fix it  [15:08]
<tobiash> so am I right that currently there is no working release of openstacksdk, because <0.21.0 has the dogpile issue and 0.21.0 has the flavor munch issue?  [15:08]
*** rlandy is now known as rlandy|rover  [15:08]
<tobiash> (working release for nodepool)  [15:09]
<pabelanger> I can reboot my local nodepool and confirm, but it would seem so  [15:09]
<pabelanger> tobiash: another option would be to tag 0.19.1 with the dogpile cap for openstacksdk too  [15:13]
<tobiash> pabelanger: ah yes, the dogpile cap went in before the release  [15:13]
*** bhavikdbavishi has quit IRC  [15:16]
*** quiquell is now known as quiquell|off  [15:29]
<corvus> ssbarnea: if we merge https://review.openstack.org/626138 i expect there will be no fewer failures (and possibly more). the post timeouts will turn into failures when ansible exits with a connection error (and it's possible that a busy system might not respond for 90 seconds, in which case we would see new errors). that might be okay, i just want to make sure that's what you want.  [15:32]
<ssbarnea> corvus: yes I do. Once we get these errors we can update the queries to accommodate them. I would personally increase the timeout if needed. Still, we need to do it one way or another: a permanent freeze is clearly the worst possible behavior.  [15:36]
<corvus> ssbarnea: ok. let's ask clarkb for his thoughts on the timeout value (should be 30*3 = 90 seconds) ^  [15:37]
<Shrews> tobiash: well, that seems new. but i don't know of any sdk changes around flavors  [15:42]
* Shrews looks  [15:43]
<mordred> Shrews, tobiash: I think it's the same issue we keep hitting with the munchification ... let me go make a comprehensive patch  [15:45]
<pabelanger> http://paste.openstack.org/show/737049/  [15:48]
<pabelanger> that's the same traceback from me a few days ago  [15:48]
<mordred> pabelanger: crap. I totally missed that  [15:52]
<pabelanger> mordred: np! I rolled back, but misunderstood that it might have been fixed  [15:52]
<mordred> Shrews: https://review.openstack.org/#/c/625923/ is related to it - it's the thing we thought we were solving by munching the dict at the top of normalize  [15:52]
<mordred> pabelanger: well, I thought it was - but I'm gonna solve it for real this time  [15:52]
<Shrews> why aren't our tests catching these errors?  [15:53]
<mordred> Shrews: I don't know - I also want to dig into that - I found the security group thing by trying to use ansible to do a thing - which is a thing I'm guessing isn't covered in our ansible tests  [15:55]
<mordred> Shrews: but I'm not sure why the nodepool tests aren't catching this - are we not doing enough in the nodepool tests?  [15:55]
<Shrews> i can't answer that without more info. maybe tobiash or pabelanger can point to something that reproduces it  [15:58]
<Shrews> i'm not seeing that traceback in any recent nodepool tests (or associated nodepool logs)  [15:58]
<mordred> Shrews: I'm going to add some explicit unit tests for these in this patch  [15:58]
<mordred> but I want to figure out what we can add to the nodepool tests to make sure these codepaths are exercised  [15:59]
<pabelanger> Shrews: I can uncap and debug more, but there wasn't much in the logs besides the traceback I linked above  [15:59]
<tobiash> Shrews: I saw this traceback during quota handling  [16:08]
<pabelanger> yah, same  [16:09]
<Shrews> tobiash: under what conditions? at full quota? normal quota checks? i can't reproduce it locally with 0.21.0  [16:09]
<tobiash> Shrews: it was an empty or almost empty nodepool without quota pressure, but with some unmanaged vms  [16:10]
<clarkb> corvus: that ssh option implies a timeout failure after 3 failed attempts?  [16:11]
<tobiash> not sure if the unmanaged vms make the difference  [16:11]
<Shrews> tobiash: pabelanger: could one of you provide us a unit test scenario?  [16:11]
<mordred> Shrews: nodepool/driver/openstack/provider.py line 196 ... it's in unmanagedServersUsed  [16:11]
<mordred> Shrews: so we'd need to have a vm in the cloud that isn't managed by nodepool to trigger the codepath  [16:11]
<mordred> Shrews: maybe in our nodepool functional test we should boot a vm out of band  [16:12]
<pabelanger> yah, was just thinking that too  [16:12]
<pabelanger> I have to run into town for an errand shortly, but can see about a test when I get back  [16:14]
<corvus> clarkb: yes -- i believe ssh, and then ansible, will exit with an error after 3 attempts, 30 seconds each  [16:18]
<tobiash> mordred, Shrews: yes, booting an unmanaged vm in the functional test should do it. Just double-checked the code paths and this should get triggered by an unmanaged vm.  [16:19]
<clarkb> corvus: we can try it. may help with what appears to be ansible running the same task twice, by failing early or keeping the connection up  [16:20]
<tobiash> but I have no idea how to put this into that job; I'm completely new to devstack  [16:20]
<clarkb> 90s seems somewhat short for a timeout, but ssh should be more alive than that  [16:20]
<corvus> clarkb: want to bump it to 3m? or 5m?  [16:21]
<clarkb> corvus: 3m with a 60s interval might be better  [16:21]
<corvus> ssbarnea: ^ want to update the values?  [16:22]
<clarkb> tobiash: in devstack/plugin.sh there is a function that starts nodepool. before starting nodepool we can use openstackclient to boot an instance  [16:22]
<ssbarnea> syrem so should I make it from 30 -> 30?  [16:24]
<ssbarnea> 30 -> 60  [16:24]
<corvus> ssbarnea: yep  [16:24]
<tobiash> clarkb: is there some pre-installed image that can be used to boot an instance in devstack?  [16:25]
<clarkb> tobiash: yes, there is a cirros image  [16:25]
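[A rough sketch of such a plugin.sh addition; the flavor and network names are devstack defaults assumed here, not taken from the eventual patch:]

    # boot a server nodepool doesn't manage, before nodepool starts
    IMAGE=$(openstack image list -f value -c Name | grep -m1 cirros)
    openstack server create --image "$IMAGE" --flavor cirros256 \
        --network private unmanaged-vm --wait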
<openstackgerrit> Sorin Sbarnea proposed openstack-infra/zuul master: Adds ServerAliveInterval to ssh_args to prevent frozen connections  https://review.openstack.org/626138  [16:26]
<openstackgerrit> Tobias Henkel proposed openstack-infra/nodepool master: Add unmanaged VM to nodepool functional tests  https://review.openstack.org/626357  [16:31]
<tobiash> clarkb: something like ^ ?  [16:31]
<mordred> Shrews: ok. I've actually got the more comprehensive version of a fix ready - searching for more places to increase the testing now  [16:31]
<pabelanger> tobiash: I think you'll need to check that it was deleted after we stop nodepool, otherwise something failed  [16:31]
<pabelanger> maybe shutdown_nodepool  [16:32]
<corvus> well, it shouldn't be deleted  [16:32]
<tobiash> pabelanger: you mean that it's still alive after we stop nodepool ;)  [16:32]
<pabelanger> sorry, yah, that  [16:32]
<tobiash> and maybe that it still has a port ;)  [16:32]
<pabelanger> but we need a way to somehow trap the exception  [16:33]
<pabelanger> okay, ignore me, I need some coffee  [16:33]
<clarkb> tobiash: yeah, though I don't recall if the image name is specific; it might have to be grepped for first  [16:35]
*** j^2 has quit IRC  [16:37]
<corvus> clarkb, tobiash: want to +3 https://review.openstack.org/626138 ?  [16:41]
<tobiash> lgtm  [16:41]
<clarkb> tobiash: beat me to it  [16:41]
<Shrews> mordred: ok. i was looking at maybe a unit test for this, but i don't think that's going to be sufficient now. so... yeah, we'll need to do a functional test  [16:44]
<openstackgerrit> Tobias Henkel proposed openstack-infra/nodepool master: Add unmanaged vm to nodepool functional tests  https://review.openstack.org/626357  [16:44]
*** chandankumar is now known as chkumar|vanished  [16:50]
*** neilsun has quit IRC  [17:05]
*** j^2 has joined #zuul  [17:27]
<tobiash> clarkb: yupp, grepping for the image was needed according to the live log, thx  [17:29]
<mordred> Shrews: ok. I'm working on unit tests - and have the central transform method working properly - which has just pointed out a few places where the payload is a little different than what I'm smoothing out  [17:30]
*** _yumapath has joined #zuul  [17:34]
*** j^2 has left #zuul  [17:35]
<_yumapath> hi team, for the past week, since the log-inventory role got added to the zuul base jobs, our zuul deployment has been failing  [17:35]
<_yumapath> with the following error  [17:35]
<_yumapath> 2018-12-18 02:38:02.152982 | primary |   "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'inventory_file'\n\nThe error appears to have been in '/tmp/fe6c300e779b4833a900c6c9c710ba5b/trusted/project_1/git.zuul-ci.org/zuul-jobs/roles/log-inventory/tasks/main.yaml': line 11, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending li  [17:35]
<_yumapath> the inventory.yaml file is not getting copied  [17:35]
<_yumapath> can anyone please help with this  [17:36]
<openstackgerrit> Merged openstack-infra/zuul master: Adds ServerAliveInterval to ssh_args to prevent frozen connections  https://review.openstack.org/626138  [17:36]
<fungi> _yumapath: is your zuul executor running zuul 3.2.0 or later?  [17:39]
<fungi> it looks like the change to make the executor start supplying inventory_file in the zuul variables was https://review.openstack.org/578235 which first appeared in the 3.2.0 release of zuul in july  [17:40]
<_yumapath> i'm using zuul v3  [17:41]
<_yumapath> but i'm not sure which version it is  [17:41]
<fungi> pip3 freeze|grep zuul  [17:41]
<fungi> should tell you which version is installed  [17:41]
<_yumapath> i'm using version 3.1.0  [17:42]
<_yumapath> zuuluser@cinder-zuulv3:~$ zuul --version Zuul version: 3.1.0  [17:42]
<openstackgerrit> Clark Boylan proposed openstack-infra/nodepool master: Trim devstack services used in testing  https://review.openstack.org/626044  [17:43]
<_yumapath> so is it because it is a lower version that i'm facing this issue?  [17:43]
<clarkb> hopefully ^ addresses ianw_pto's concerns  [17:43]
<corvus> _yumapath: there was a security issue, so upgrading to 3.3.1 is a very good idea anyway: http://lists.zuul-ci.org/pipermail/zuul-announce/2018-November/000026.html  [17:45]
<fungi> _yumapath: yes, upgrading zuul to at least 3.2.0 (but ideally to the latest release) on all your zuul servers is a good idea  [17:45]
<fungi> and should solve the issue you're seeing  [17:45]
<fungi> alternatively you could pin/fork the version of zuul-jobs you're using, but i don't personally know how to go about doing that  [17:47]
<corvus> if you wanted to do that, you'd need to make a local fork; we don't have an option to pin yet  [17:48]
<fungi> _yumapath: but to clarify, the error you're seeing is because the change you noted in zuul-jobs depends on the inventory_file variable, which zuul executors didn't start supplying until 3.2.0  [17:49]
<_yumapath> ok  [17:51]
<_yumapath> wget http://tarballs.openstack.org/zuul/zuul-content-latest.tar.gz roles/zuulnodepool/tasks/main.yaml:    tar -xzvf zuul-content-latest.tar.gz -C {{ zuul_path_raw.stdout }}/static  [17:51]
<_yumapath> i see this is some tarball installation  [17:51]
<_yumapath> should i go do the same  [17:51]
<_yumapath> or can i use pip to upgrade  [17:52]
<corvus> _yumapath: that depends on how it was installed originally; but if you installed with pip, you should be able to upgrade with pip.  [17:53]
<corvus> _yumapath: be sure to read the upgrade notes  [17:53]
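[Assuming the original install really was pip-based, the upgrade itself is just the following; the scheduler and executors then need a manual restart, as noted earlier:]

    pip3 install --upgrade zuul
    zuul --version    # should now report 3.3.1 or later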
<_yumapath> ok  [17:53]
<_yumapath> sure  [17:53]
<_yumapath> will give it a try  [17:53]
<corvus> _yumapath: https://zuul-ci.org/docs/zuul/releasenotes.html#relnotes-3-3-1  [17:53]
<corvus> (and the older ones too)  [17:53]
*** jpena is now known as jpena|off  [17:54]
*** hashar has quit IRC  [17:56]
<tobiash> clarkb, Shrews: the unmanaged vm also triggers the failure in the functional test: http://logs.openstack.org/57/626357/2/check/nodepool-functional-py35/1e8b67d/controller/logs/screen-nodepool-launcher.txt.gz  [18:25]
<clarkb> good that the tests are able to reproduce things properly :)  [18:26]
*** tobiash has quit IRC  [18:35]
*** tobiash has joined #zuul  [18:44]
*** tobiash has quit IRC  [18:47]
*** tobiash has joined #zuul  [18:48]
*** _yumapath has quit IRC  [18:54]
<pabelanger> I can confirm the latest wheel has the static html bits; got it working locally last night, finally :)  [18:55]
*** sshnaidm|off has quit IRC  [19:24]
*** sshnaidm has joined #zuul  [19:27]
<openstackgerrit> Tobias Henkel proposed openstack-infra/nodepool master: Add unmanaged vm to nodepool functional tests  https://review.openstack.org/626357  [19:35]
*** gouthamr_ is now known as gouthamr  [19:38]
*** hashar has joined #zuul  [19:43]
*** tobiash has quit IRC  [19:53]
*** tobiash has joined #zuul  [19:58]
<openstackgerrit> Merged openstack-infra/zuul master: web: update status page layout based on screen size  https://review.openstack.org/622010  [20:02]
<mordred> tobiash, Shrews: I think we're expecting that nodepool patch to fail due to the sdk bug, right?  [20:21]
<mordred> tobiash, Shrews: so then if we run it with a depends-on of the sdk patch, we should expect the src version to pass and the non-src version to fail - until we cut an sdk release, right?  [20:21]
<clarkb> mordred: ++  [20:22]
<tobiash> mordred: yes, that fails currently, will try the depends-on  [20:22]
<mordred> cool  [20:22]
<openstackgerrit> Tobias Henkel proposed openstack-infra/nodepool master: Add unmanaged vm to nodepool functional tests  https://review.openstack.org/626357  [20:23]
<tobiash> mordred: let's see if it works  [20:24]
<mordred> tobiash: woot  [20:27]
<mordred> tobiash: I would very much like to have our tests fail when there are patches that don't work :)  [20:27]
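[The depends-on mentioned here is Zuul's cross-repository dependency footer: putting the sdk review's URL in the nodepool change's commit message makes the -src job build against the unreleased fix, along these lines:]

    Add unmanaged vm to nodepool functional tests

    Depends-On: https://review.openstack.org/625923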
<tobiash> mordred, corvus: what do you think about trying out pipenv together with pip lock files?  [20:30]
<tobiash> having lock files could save us in the future from getting broken by dependency updates, as these would be gated together with the lock file  [20:30]
*** dkehn has quit IRC  [20:39]
<mordred> tobiash: I've poked at that a few times - the last time I looked there was a reason I bumped into where something didn't seem quite right yet  [20:45]
<tobiash> ah ok  [20:45]
<mordred> tobiash: oh - I know what it is - pbr doesn't understand pipenv files yet - so 'pip install path/to/zuul' would, I think, stop working  [20:45]
<mordred> tobiash: I have some thoughts on what to do about that - but haven't had time to work on them - I should probably write them down in case someone else wants to work on it  [20:46]
<tobiash> so pbr would need pipenv support  [20:46]
<mordred> yeah - basically, I think what we want to do is make pbr able to provide a 'builder' plugin (forget the pep)  [20:46]
<mordred> and then pip would know how to call that to produce the wheel  [20:46]
<tobiash> I think at least in the long run it would be beneficial to not get broken every few weeks by a random dependency  [20:47]
<mordred> tobiash: I haven't worked on it because for openstack more broadly the constraints construct that exists is a bit in conflict, so we'd need to solve that - this doesn't mean zuul couldn't do something, since it doesn't have a shared constraints concept  [20:47]
<tobiash> I see  [20:48]
<mordred> tobiash: yeah - totally agree - although I'd also like to make sure our testing is solid - otherwise we won't be able to judge patches that bump a lock file either  [20:48]
<tobiash> yeah, testing is hard and can be constantly improved  [20:52]
*** dkehn has joined #zuul  [20:55]
<mordred> ++  [20:57]
*** ssbarnea has quit IRC  [21:00]
<mordred> tobiash, Shrews, corvus: https://review.openstack.org/#/c/626357 succeeds on the -src job and fails on the release job, as expected  [22:32]
<mordred> tobiash, Shrews, corvus: which means I think we should land https://review.openstack.org/#/c/625923 and cut a release  [22:34]
<mordred> clarkb: ^^  [22:34]
<corvus> tobiash, mordred: were you talking about the gate being broken, or actual installations?  [22:34]
<corvus> (in your earlier conversation about pipenv)  [22:34]
<mordred> corvus: I was concerned about actual installations  [22:34]
<mordred> corvus: mostly just that the mechanics are a bit different and differently opinionated, so I'm concerned it could turn into a rabbit hole  [22:35]
<corvus> ok, so another phrasing is "should zuul adopt pipenv as the recommended installation method instead of pip install"?  [22:35]
<corvus> (because regardless of the method, it is a feature that the gate mirrors production installations)  [22:35]
<clarkb> I think as long as pypa's installation tool is pip, we should support pip?  [22:37]
<mordred> yes, it is a feature that the gate mirrors production for sure ... I don't think pipenv is a replacement for pip for installation though  [22:37]
<mordred> pipenv is a mechanism for managing virtualenv installation that uses Pipfile  [22:37]
<clarkb> mordred: in the last few months there has been talk of pipenv and pip/pypa diverging (sort of forking) because they don't agree on that bit, aiui  [22:37]
<mordred> Pipfile is a format that works together with Pipfile.lock and is a replacement for requirements.txt  [22:38]
<mordred> clarkb: yah. I'm not surprised  [22:38]
<clarkb> basically pipenv is asserting they are the tool now and don't want to be encumbered by pypa's slower, methodical moves  [22:38]
<clarkb> fungi likely has more of this paged in, as I think he follows distutils-sig  [22:38]
<SpamapS> Ugh.. I really.. really dislike pipenv's method.  [22:39]
<mordred> the thing is - pipenv is a tool for installing software into virtualenvs  [22:39]
<SpamapS> Basically "we give up trying to be correct, so let's be wrong like npm"  [22:39]
<mordred> so I don't really see how they would see themselves as a replacement for pip  [22:39]
<clarkb> mordred: well, even pip will tell you to only ever use it in a virtualenv  [22:39]
<clarkb> I think they disagree more on details than see that as a wrong method :/  [22:39]
<SpamapS> You can pass --system and not install into a venv.  [22:39]
<SpamapS> just like npm can do -g  [22:40]
<SpamapS> it's a feature-for-feature copy of npm.  [22:40]
<mordred> SpamapS: ah. nod  [22:40]
<SpamapS> Complete with emojis and colors.  [22:40]
<mordred> yeah. I'm unlikely to find it interesting to switch to that  [22:40]
<SpamapS> But honestly, for the scale of most apps, it's fine.  [22:40]
<mordred> in general, I think "pip install zuul" needs to be able to work, and I don't want to adopt anything that will break that  [22:41]
<SpamapS> You just run into trouble if you want to keep two apps in sync.  [22:41]
<clarkb> mordred: ++ put another way, I think that THE official pypa package install tool (today, pip) needs to work  [22:41]
<mordred> clarkb: yes  [22:41]
<SpamapS> I'm sure one could fairly easily automate the conversion of requirements.txt+upper-constraints.txt to Pipfile+Pipfile.lock  [22:41]
<clarkb> now we might be able to support both at the same time. I'm not sure  [22:41]
<SpamapS> That might even be something one would want to contribute to pipenv.  [22:42]
<mordred> SpamapS: you could - although pipenv only has 2 sets of things - prod and dev (again copied from npm)  [22:42]
<mordred> so things like doc/requirements.txt get left out in the cold  [22:42]
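[A possible starting point for the conversion SpamapS suggests; upper-constraints and extra requirements files such as docs have no direct pipenv equivalent and would need scripting, which is mordred's point:]

    # import existing requirements into a Pipfile, then freeze a lock file
    pipenv install -r requirements.txt
    pipenv install --dev -r test-requirements.txt
    pipenv lock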
<SpamapS> docs?  [22:42]
<SpamapS> you  [22:42]
<SpamapS> who has docs?  [22:42]
<SpamapS> ;-)  [22:42]
<SpamapS> That's what README.md is for  [22:42]
<mordred> I keep looking at whether Pipfile would improve life - and so far it's always been only 80% of the way there - having left out the important/hard bits  [22:44]
<fungi> i honestly don't recall the specifics now, but the pipenv maintainers were wanting to break new ground on dependency management in ways which would be counter to what pip supports  [22:44]
<mordred> I'm a big fan of the Cargo lock model - so I *want* it to be something that is usable  [22:44]
<fungi> i'll see if i can find the distutils-sig thread for background  [22:45]
<mordred> although otoh they all still drive me crazy with their adoption of toml for things - so I'm also not in a huge rush  [22:45]
<SpamapS> The lock file is fine.  [22:48]
<fungi> there was also a huge blowup on reddit (surprise!) when pypa added an entry on the packaging site listing pipenv as one of the possible tools for installing python packages, and then the pipenv maintainers went around declaring it "the official tool for installing python packages"  [22:49]
<fungi> that created a lot of unnecessary friction in the community  [22:49]
<fungi> https://github.com/pypa/pipenv/commit/5853846 was the eventual result of that  [22:55]
<SpamapS> Yeah, I recall that.  [22:57]
<SpamapS> anyway, I don't think it would hurt to have Pipfile/Pipfile.lock in the repo  [22:58]
<SpamapS> mordred: also, aren't we not allowed to say TOML?  [22:58]
<mordred> SpamapS: I try to not say it as frequently as possible  [22:59]
<fungi> the distutils-sig thread i was thinking of was https://mail.python.org/archives/list/distutils-sig@python.org/thread/YFJITQB37MZOPOFJJF3OAQOY4TOAFXYM/  [23:05]
<fungi> though the github pr/pep linked near the end ended up having its conversations migrate into the python community discourse site, at which point i lost track  [23:06]
<fungi> but i think the conclusion (if one was reached) is that pyproject.toml is for declaring dependencies of python packages, and pipfile is for declaring the environments of applications which happen to depend on some python packages  [23:09]
<clarkb> I'm always slightly saddened when "understanding how a unix shell works" is seen as a barrier to entry. yes, it is a barrier of sorts, but not understanding this powerful tool really prevents you from taking advantage of whatever system you are running it on  [23:12]
<clarkb> (replace unix shell with windows powershell as appropriate)  [23:13]
<clarkb> python doesn't execute in a vacuum.  [23:13]
<clarkb> I guess it could  [23:13]
<fungi> you'd be surprised how many requests for help i see from people (mostly windows users) trying to enter `pip install something` in an interactive python interpreter session  [23:18]
<fungi> "i'm trying to install the package but all i keep getting is SyntaxError: invalid syntax"  [23:18]
<fungi> windows users in particular seem to not even realize they have a shell prompt available in their operating system, and just double-click the python icon  [23:19]
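[The confusion fungi describes: `pip install` is a shell command, not Python syntax, so typed at the interpreter's >>> prompt it raises SyntaxError. From a shell prompt either form works; the second is unambiguous about which interpreter it targets:]

    pip install something
    python3 -m pip install something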
*** rlandy|rover is now known as rlandy|rover|bbl  [23:24]
*** hashar has quit IRC  [23:41]

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!