corvus | SpamapS: :( i think option #2 from earlier may be the best thing for now | 00:01 |
clarkb | you can set the merge strategy to prefer the master side | 00:02 |
clarkb | or the prod side depending | 00:02 |
clarkb | (though not sure if you can do that on a per file level) | 00:02 |
corvus | evrardjp: i'm warming to your idea of a pipeline flag to ignore files -- i think it might be good to make an exploratory change and see how it looks. are you interested in hacking on that, or did you just want to get the idea out there? | 00:03 |
clarkb | corvus: Shrews: do you know what the time from node going ready to node being unlocked in nodepool is spent doing? | 00:15 |
clarkb | in our integration job that takes a full 4 minutes | 00:15 |
clarkb | and just over 3 minutes in another case | 00:16 |
clarkb | http://logs.openstack.org/44/626044/1/check/nodepool-functional-py35-src/670251d/ shows the devstack service trimming should work | 00:18 |
corvus | clarkb: that's zuul getting around to starting the job. scheduler delay basically. | 00:18 |
clarkb | thinking that may make things more reliable overall just by trimming the total number of moving parts using cpu and disk and memory | 00:18 |
*** rlandy has quit IRC | 00:18 | |
clarkb | corvus: but there is no zuul in the functional job | 00:18 |
corvus | clarkb: oh, you said *unlocked* sorry | 00:18 |
corvus | clarkb: it should happen very quickly, so it's probably thread/cpu contention | 00:19 |
clarkb | ok https://review.openstack.org/626044 is an attempt at reducing the cpu contention on the host at least. It's not a massive reduction, I don't think, but should be something | 00:22 |
clarkb | other than that we may want to look at that path from being marked ready to unlocked to see if we have any obvious bugs because ya I would expect it to be much quicker | 00:23 |
clarkb | also another idea I have thinking about this is we may want to do a multinode job where one host runs the cloud and the other host runs nodepool and zk | 00:23 |
clarkb | (but that is probably more last resort if things continue to be "weird") | 00:23 |
*** quiquell|off has quit IRC | 00:29 | |
clarkb | mordred: related to https://review.openstack.org/626044 I wonder if the sdk jobs would find that useful too potentially? | 00:30 |
clarkb | mordred: not sure if the sdk/shade tests care about vnc or metadata service | 00:31 |
mordred | clarkb: sdk/shade tests do not care about vnc or metadata service | 00:35 |
mordred | clarkb: so yeah - totes | 00:36 |
mordred | clarkb: https://review.openstack.org/626058 adds the same to sdk - thanks | 00:42 |
clarkb | mordred: as i noted in 626044's commit message I also wanted to disable etcd since nodepool doesn't use cinder and cinder is the only thing using etcd I think. But at the same time any service could start using etcd at any time | 00:47 |
clarkb | Does make me think the right way to add dependencies is when you actually depend on them though :) | 00:48 |
mordred | clarkb: go figure | 00:53 |
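The trimming discussed above is typically done by disabling services in devstack's local.conf; a hypothetical fragment (the exact service names here are assumptions, not quoted from 626044):

```
[[local|localrc]]
disable_service etcd3     # nothing the nodepool tests exercise uses etcd
disable_service n-novnc   # the vnc console proxy is not exercised either
```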
clarkb | tobiash: https://review.openstack.org/#/c/611920/ is the geard ipv6 change I referred to before | 00:55 |
tristanC | corvus: what is your suggestion for https://review.openstack.org/535549 again? | 01:22 |
tristanC | clarkb: corvus: Shrews: we could expand the connection_port or rename it yes, (this was suggested on the k8s review, but it got merged with connection_port...) | 01:32 |
tristanC | would you prefer it to be named connection_details then? | 01:33 |
clarkb | I think it will help communicate what the intent is there to someone that ends up debugging the nodeset data at a low level | 01:33 |
tristanC | what about having new znode keys for each detail? | 01:36 |
clarkb | connection_port connection_namespace connection_user and so on? | 01:37 |
clarkb | that may also work, though the data structure might get noisy depending on how many attributes future systems depend on | 01:38 |
tristanC | i agree, but then shouldn't we move cloud, az, region, image_id to connection_details too? | 01:40 |
clarkb | or maybe into some other tree. They don't really matter when it comes to making a connection (of course at this point probably overthinking the color of the shed) | 01:41 |
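The two znode shapes being weighed above could look like this (a hedged sketch; key names are illustrative, not the merged schema):

```python
# Option 1: a single nested blob, as in the proposed connection_details rename.
node_a = {
    "connection_details": {"port": 22, "namespace": "default", "user": "zuul"},
}

# Option 2: one flat znode key per detail, as tristanC suggests.
node_b = {
    "connection_port": 22,
    "connection_namespace": "default",
    "connection_user": "zuul",
}
```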
*** bhavikdbavishi has joined #zuul | 02:39 | |
openstackgerrit | Ian Wienand proposed openstack-infra/nodepool master: [wip] Use bindep for devstack jobs https://review.openstack.org/626068 | 02:39 |
openstackgerrit | Ian Wienand proposed openstack-infra/nodepool master: [wip] Add dogpile.cache master to the -src tests https://review.openstack.org/625457 | 03:30 |
*** bhavikdbavishi1 has joined #zuul | 03:54 | |
*** bhavikdbavishi has quit IRC | 03:56 | |
*** bhavikdbavishi1 is now known as bhavikdbavishi | 03:56 | |
*** chkumar|out is now known as chandankumar | 04:12 | |
Shrews | clarkb: corvus: if it's a multinode request, a ready node will remain locked until all nodes in the request are ready | 04:14 |
Shrews | clarkb: corvus: could also be delay in processing the finished request handlers. iirc, tobiash did some work around that recently | 04:19 |
Shrews | to improve it, that is | 04:19 |
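The locking rule Shrews describes can be sketched as follows (illustrative pseudologic, not actual nodepool code):

```python
def request_can_unlock(request_nodes):
    """A node in a multinode request stays locked until every node is ready."""
    return all(node["state"] == "ready" for node in request_nodes)
```

So a single slow node in a request holds the lock on its already-ready siblings.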
*** bjackman has joined #zuul | 04:23 | |
clarkb | it's a single node request | 04:24 |
clarkb | just the min ready request in the functional job | 04:25 |
*** evrardjp_ has joined #zuul | 05:05 | |
*** evrardjp has quit IRC | 05:07 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul master: Fix container job documentation typo https://review.openstack.org/626093 | 05:38 |
openstackgerrit | Merged openstack-infra/nodepool master: Run devstack zookeeper on tmpfs https://review.openstack.org/626038 | 05:46 |
*** evrardjp has joined #zuul | 06:01 | |
*** evrardjp_ has quit IRC | 06:03 | |
*** mgagne has quit IRC | 06:35 | |
*** mgagne has joined #zuul | 06:40 | |
*** ianw is now known as ianw_pto | 06:43 | |
*** quiquell has joined #zuul | 07:13 | |
evrardjp | corvus: I was first asking if that idea was valid; now I think I'd love to do it as it might be low impact/low priority, but I will probably require some help, as I've never contributed here | 07:40 |
*** pcaruana has joined #zuul | 08:03 | |
*** bhavikdbavishi has quit IRC | 08:10 | |
openstackgerrit | Jens Harbott (frickler) proposed openstack-infra/nodepool master: [wip] Use bindep for devstack jobs https://review.openstack.org/626068 | 08:17 |
*** pcaruana has quit IRC | 08:24 | |
*** bhavikdbavishi has joined #zuul | 08:25 | |
*** pcaruana has joined #zuul | 08:33 | |
*** pcaruana has quit IRC | 08:41 | |
*** jpena|off is now known as jpena | 08:47 | |
*** dkehn has quit IRC | 09:08 | |
*** bjackman has quit IRC | 09:17 | |
*** bjackman has joined #zuul | 09:20 | |
*** hashar has joined #zuul | 09:34 | |
*** bhavikdbavishi has quit IRC | 09:52 | |
*** gtema has joined #zuul | 10:15 | |
*** bjackman has quit IRC | 10:19 | |
openstackgerrit | Sorin Sbarnea proposed openstack-infra/zuul master: Adds ServerAliveInterval to ssh_args to prevent frozen connections https://review.openstack.org/626138 | 10:21 |
*** bjackman has joined #zuul | 10:21 | |
*** ssbarnea|rover has quit IRC | 10:34 | |
*** ssbarnea|rover has joined #zuul | 10:35 | |
*** hashar has quit IRC | 10:44 | |
*** bjackman has quit IRC | 11:52 | |
*** bjackman has joined #zuul | 11:52 | |
*** neilsun has joined #zuul | 11:56 | |
*** rfolco has quit IRC | 11:59 | |
*** rf0lc0 has joined #zuul | 11:59 | |
*** bhavikdbavishi has joined #zuul | 12:06 | |
*** jpena is now known as jpena|lunch | 12:29 | |
*** hogepodge has quit IRC | 12:29 | |
*** hogepodge has joined #zuul | 12:30 | |
ssbarnea|rover | jhesketh: can you please have a look at https://review.openstack.org/#/c/626138/ ? | 12:35 |
jhesketh | ssbarnea|rover: seems reasonable to me :-) | 12:37 |
*** bhavikdbavishi has quit IRC | 12:42 | |
ssbarnea|rover | jhesketh: there is also the GSS part and possibly a different timeout than 30s, which could prove to be too small. tripleo is using 270s. | 12:45 |
ssbarnea|rover | also i see no option for retry there and we use retries=3 which could prove very useful for occasional networking hiccups, also allows us to trace it on logstash. | 12:47 |
ssbarnea|rover | anyway, its current form should be enough for the current issue. | 12:48 |
*** bjackman has quit IRC | 12:48 | |
tobiash | ssbarnea|rover: you mean you want to configure the number of retries? | 12:50 |
ssbarnea|rover | tobiash: it's more of a question: would it make sense? | 12:50 |
tobiash | we try to avoid config options where there is no need to if we can come up with a default that should work everywhere | 12:51 |
tobiash | and I think the default of 3 retries is sane? | 12:51 |
*** jpena|lunch is now known as jpena | 13:01 | |
ssbarnea|rover | tobiash: more than one and less than a big value that could slow things down a lot in case of network downtimes. probably any value between 2-5 would be ok. | 13:02 |
*** rlandy has joined #zuul | 13:03 | |
ssbarnea|rover | the idea is to retry once, so we can see if doing it again fixed the problem. if we have direct failure, we cannot really know if it was recoverable or not. | 13:03 |
ssbarnea|rover | tobiash: anyway lets just do the proposed change for the moment and see how it behaves. it should take no more than two days to get enough data if this is fixing the POST timeouts. | 13:12 |
tobiash | sounds good | 13:12 |
ssbarnea|rover | out of curiosity, how long does it take from the moment a zuul change is merged until it reaches openstack production? | 13:13 |
*** gtema has quit IRC | 13:14 | |
tobiash | It will be automatically installed, but to make it effective it needs to be restarted manually | 13:18 |
tobiash | However I don't know how long it takes until the Installation happened | 13:18 |
*** dkehn has joined #zuul | 13:49 | |
*** ssbarnea|rover has quit IRC | 14:40 | |
*** ssbarnea has joined #zuul | 14:40 | |
*** hashar has joined #zuul | 14:40 | |
*** bhavikdbavishi has joined #zuul | 14:42 | |
tobiash | mordred: is this another munch problem in openstacksdk? http://paste.openstack.org/show/737739/ | 14:47 |
tobiash | openstacksdk version is 0.21.0 | 14:48 |
tobiash | Shrews: ^ | 14:51 |
pabelanger | tobiash: I was getting that failure too, but want to say you thought it was also fixed | 15:00 |
pabelanger | ssbarnea: tobiash: we've set retries in ansible.cfg on executor to 3 by default | 15:01 |
ssbarnea | pabelanger: perfect. so lets merge that patch and hope that this will fix the POST failures. | 15:02 |
tobiash | pabelanger: yes, there were some munch issues that were fixed but I don't know anymore if the flavor.id was this exactly | 15:06 |
tobiash | at least it fails for me now with latest openstacksdk | 15:06 |
pabelanger | tobiash: yah, I had a patch locally to use flavor['id'] but reverted back to 0.18.0 (I think). would be good to confirm with mordred on where to properly fix | 15:08 |
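The local workaround pabelanger mentions boils down to tolerating both plain dicts and attribute-style Munch objects; a hedged sketch (not the actual sdk fix, just the shape of the problem):

```python
def flavor_id(flavor):
    # Munch objects support flavor.id; a plain dict raises AttributeError
    # on attribute access, so fall back to item access.
    try:
        return flavor.id
    except AttributeError:
        return flavor["id"]
```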
tobiash | so am I right that currently there is no working release of openstacksdk because <0.21.0 has the dogpile issue and 0.21.0 the flavor munch issue? | 15:08 |
*** rlandy is now known as rlandy|rover | 15:08 | |
tobiash | (working release for nodepool) | 15:09 |
pabelanger | I can reboot my local nodepool and confirm, but would seem so | 15:09 |
pabelanger | tobiash: another option would be to tag 0.19.1 with dogpile cap for openstacksdk too | 15:13 |
tobiash | pabelanger: ah yes, the dogpile cap went in before the release | 15:13 |
*** bhavikdbavishi has quit IRC | 15:16 | |
*** quiquell is now known as quiquell|off | 15:29 | |
corvus | ssbarnea: if we merge https://review.openstack.org/626138 i expect there will be no fewer failures (and possibly more). the post timeouts will turn into failures when ansible exits with a connection error (and it's possible that a busy system might not respond for 90 seconds, and in that case we would see new errors). that might be okay, i just want to make sure that's what you want. | 15:32 |
ssbarnea | corvus: yes I do. Once we get these errors we can update the queries to accommodate them. I would personally increase the timeout if needed. Still, we need to do it one way or another: permanent freeze is clearly the worst possible behavior. | 15:36 |
corvus | ssbarnea: ok. let's ask clarkb his thoughts on the timeout value (should be 30*3 = 90 seconds) ^ | 15:37 |
Shrews | tobiash: well, that seems new. but i don' | 15:42 |
Shrews | don't know of any sdk changes around flavors | 15:43 |
* Shrews looks | 15:43 | |
mordred | Shrews, tobiash: I think it's the same issue we keep hitting with the munchification ... let me go make a comprehensive patch | 15:45 |
pabelanger | http://paste.openstack.org/show/737049/ | 15:48 |
pabelanger | that's the same traceback from me a few days ago | 15:48 |
mordred | pabelanger: crap. I totally missed that | 15:52 |
pabelanger | mordred: np! I rolled back, but misunderstood that it might have been fixed | 15:52 |
mordred | Shrews: https://review.openstack.org/#/c/625923/ is related to it - it's the thing we thought we were solving by munching the dict at the top of normalize | 15:52 |
mordred | pabelanger: well, I thought it was - but I'm gonna solve it for real this time | 15:52 |
Shrews | why aren't our tests catching these errors? | 15:53 |
mordred | Shrews: I don't know - I also want to dig in to that - I Found the security group thing by trying to use ansible to do a thing - which is a thing I'm guessing isn't covered in our ansible tests | 15:55 |
mordred | Shrews: but I'm not sure why the nodepool tests aren't catching this - are we not doing enough in the nodepool tests? | 15:55 |
Shrews | i can't answer that without more info. maybe tobiash or pabelanger can point to something that reproduces it | 15:58 |
Shrews | i'm not seeing that traceback in any recent nodepool tests (or associated nodepool logs) | 15:58 |
mordred | Shrews: I'm going to add some explicit unit tests for these in this patch | 15:58 |
mordred | but I want to figure out what we can add to nodepool tests to make sure these codepaths are exercised | 15:59 |
pabelanger | Shrews: I can uncap and debug more, but there wasn't too much in logs beside the traceback I linked about | 15:59 |
tobiash | Shrews: I saw this traceback during quota handling | 16:08 |
pabelanger | yah, same | 16:09 |
Shrews | tobiash: under what conditions? at full quota? normal quota checks? i can't reproduce it locally with 0.21.0 | 16:09 |
tobiash | Shrews: it was an empty or almost empty nodepool without quota pressure but with some unmanaged vms | 16:10 |
clarkb | corvus: that ssh option implies a timeout failure after 3 failed attempts? | 16:11 |
tobiash | not sure if the unmanaged vms make the difference | 16:11 |
Shrews | tobiash: pabelanger: could one of you provide us a unit test scenario? | 16:11 |
mordred | Shrews: nodepool/driver/openstack/provider.py line 196 ... it's in unmanagedServersUsed | 16:11 |
mordred | Shrews: so we'd need to have a vm in the cloud that isn't managed by nodepool to trigger the codepath | 16:11 |
mordred | Shrews: maybe in our nodepool functional test we should boot a vm out of band | 16:12 |
pabelanger | yah, was just thinking that too | 16:12 |
pabelanger | I have to run into town for errand shortly, but can see about a test when I get back | 16:14 |
corvus | clarkb: yes -- i believe ssh and then ansible will exit with an error after 3 attempts, 30 seconds each | 16:18 |
tobiash | mordred, Shrews: yes, booting an unmanaged vm in the functional test should do it. Just double checked the code paths and this should get triggered by an unmanaged vm. | 16:19 |
clarkb | corvus: we can try it. May help with what appears to be ansible running the same task twice by failing early or keeping the connection up | 16:20 |
tobiash | but I have no idea how to put this into that job, I'm completely new to devstack | 16:20 |
clarkb | 90s seems somewhat short for a timeout but ssh should be more alive than that | 16:20 |
corvus | clarkb: want to bump it to 3m? or 5m? | 16:21 |
clarkb | corvus: 3m with 60s interval might be better | 16:21 |
corvus | ssbarnea: ^ want to update the values? | 16:22 |
clarkb | tobiash: in devstack/plugin.sh is a function that starts nodepool. Before starting nodepool we can use openstackclient to boot an instance | 16:22 |
ssbarnea | syrem so should I make it from 30 -> 30? | 16:24 |
ssbarnea | 30 -> 60 | 16:24 |
corvus | ssbarnea: yep | 16:24 |
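With the values settled on here (60s interval, 3 attempts), the ssh_args addition would look roughly like this (illustrative; ServerAliveInterval and ServerAliveCountMax are real OpenSSH options, but the exact form used in 626138 may differ):

```
-o ServerAliveInterval=60 -o ServerAliveCountMax=3
# ssh declares the connection dead after 3 unanswered keepalives:
# 3 * 60s = 180s, matching the 3m total clarkb suggests
```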
tobiash | clarkb: is there some pre-installed image that can be used to boot an instance in devstack? | 16:25 |
clarkb | tobiash: yes there is a cirros image | 16:25 |
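A rough sketch of what that could look like in devstack/plugin.sh before nodepool starts (the flavor name and the grep for the image are assumptions, not taken from the merged change):

```
# discover devstack's cirros image name, then boot a vm nodepool won't manage
IMAGE=$(openstack image list -f value -c Name | grep -i cirros | head -1)
openstack server create --image "$IMAGE" --flavor m1.tiny --wait unmanaged-vm
```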
openstackgerrit | Sorin Sbarnea proposed openstack-infra/zuul master: Adds ServerAliveInterval to ssh_args to prevent frozen connections https://review.openstack.org/626138 | 16:26 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool master: Add unmanaged VM to nodepool functional tests https://review.openstack.org/626357 | 16:31 |
tobiash | clarkb: something like ^ ? | 16:31 |
mordred | Shrews: ok. I've actually got the more comprehensive version of a fix ready - searching for more places to increase the testing now | 16:31 |
pabelanger | tobiash: I think you'll need to check that is was deleted after we stop nodepool, otherwise something failed | 16:31 |
pabelanger | maybe shutdown_nodepool | 16:32 |
corvus | well, it shouldn't be deleted | 16:32 |
tobiash | pabelanger: you mean that it's still alive after we stop nodepool ;) | 16:32 |
pabelanger | sorry, yah that | 16:32 |
tobiash | and maybe that it still has a port ;) | 16:32 |
pabelanger | but we need a way to somehow trap the exception | 16:33 |
pabelanger | okay, ignore me, I need some coffee | 16:33 |
clarkb | tobiash: ya though I dont recall if the image name is specific and might have to be grepped for first | 16:35 |
*** j^2 has quit IRC | 16:37 | |
corvus | clarkb, tobiash: want to +3 https://review.openstack.org/626138 ? | 16:41 |
tobiash | lgtm | 16:41 |
clarkb | tobiash: beat me to it | 16:41 |
Shrews | mordred: ok. i was looking at maybe a unit test for this, but i don't think that's going to be sufficient now. so... yeah, we'll need to do a functional test | 16:44 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool master: Add unmanaged vm to nodepool functional tests https://review.openstack.org/626357 | 16:44 |
*** chandankumar is now known as chkumar|vanished | 16:50 | |
*** neilsun has quit IRC | 17:05 | |
*** j^2 has joined #zuul | 17:27 | |
tobiash | clarkb: yupp, grepping for the image was needed according to the live log, thx | 17:29 |
mordred | Shrews: ok. I'm working on unittests - and have the central transform method working properly - which has just pointed out a few places where the payload is a little different than expected, which I'm smoothing out | 17:30 |
*** _yumapath has joined #zuul | 17:34 | |
*** j^2 has left #zuul | 17:35 | |
_yumapath | hi team, for the past one week after the log-inventory role got added in zuul base jobs our zuul deployment is failing | 17:35 |
_yumapath | 2018-12-18 02:38:02.152982 | primary | "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'inventory_file'\n\nThe error appears to have been in '/tmp/fe6c300e779b4833a900c6c9c710ba5b/trusted/project_1/git.zuul-ci.org/zuul-jobs/roles/log-inventory/tasks/main.yaml': line 11, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending li | 17:35 |
_yumapath | with the following error | 17:35 |
_yumapath | the inventory.yaml file is not getting copied | 17:35 |
_yumapath | can anyone please help with this | 17:36 |
openstackgerrit | Merged openstack-infra/zuul master: Adds ServerAliveInterval to ssh_args to prevent frozen connections https://review.openstack.org/626138 | 17:36 |
fungi | _yumapath: is your zuul executor running from zuul 3.2.0 or later? | 17:39 |
fungi | it looks like the change to make the executor start supplying inventory_file in the zuul variables was https://review.openstack.org/578235 which first appeared in the 3.2.0 release of zuul in july | 17:40 |
_yumapath | am using zuul v3 | 17:41 |
_yumapath | but am not sure which version it is | 17:41 |
fungi | pip3 freeze|grep zuul | 17:41 |
fungi | should tell you which version is installed | 17:41 |
_yumapath | am using version 3.1.0 | 17:42 |
_yumapath | zuuluser@cinder-zuulv3:~$ zuul --version Zuul version: 3.1.0 | 17:42 |
openstackgerrit | Clark Boylan proposed openstack-infra/nodepool master: Trim devstack services used in testing https://review.openstack.org/626044 | 17:43 |
_yumapath | so is it because it is a lower version that I am facing this issue? | 17:43 |
clarkb | hopefully ^ addresses ianw_pto's concerns | 17:43 |
corvus | _yumapath: there was a security issue, so upgrading to 3.3.1 is a very good idea anyway: http://lists.zuul-ci.org/pipermail/zuul-announce/2018-November/000026.html | 17:45 |
fungi | _yumapath: yes, upgrading zuul to at least 3.2.0 (but ideally to the latest release) on all your zuul servers is a good idea | 17:45 |
fungi | and should solve the issue you're seeing | 17:45 |
fungi | alternatively you could pin/fork the version of zuul-jobs you're using, but i don't personally know how to go about doing that | 17:47 |
corvus | if you wanted to do that, you'd need to make a local fork; we don't have an option to pin yet | 17:48 |
fungi | _yumapath: but to clarify, the error you're seeing is because the change you noted in zuul-jobs depends on the inventory_file variable which zuul executors didn't start supplying until 3.2.0 | 17:49 |
_yumapath | ok | 17:51 |
_yumapath | wget http://tarballs.openstack.org/zuul/zuul-content-latest.tar.gz roles/zuulnodepool/tasks/main.yaml: tar -xzvf zuul-content-latest.tar.gz -C {{ zuul_path_raw.stdout }}/static | 17:51 |
_yumapath | i see this is some tar ball installation | 17:51 |
_yumapath | should i go do the same | 17:51 |
_yumapath | or can i use pip to upgrade | 17:52 |
corvus | _yumapath: that depends on how it was installed originally; but if you installed with pip, you should be able to upgrade with pip. | 17:53 |
corvus | _yumapath: be sure to read the upgrade notes | 17:53 |
_yumapath | ok | 17:53 |
_yumapath | sure | 17:53 |
_yumapath | will give it a try | 17:53 |
corvus | _yumapath: https://zuul-ci.org/docs/zuul/releasenotes.html#relnotes-3-3-1 | 17:53 |
corvus | (and the older ones too) | 17:53 |
*** jpena is now known as jpena|off | 17:54 | |
*** hashar has quit IRC | 17:56 | |
tobiash | clarkb, Shrews: the unmanaged vm also triggers the failure in the functional test: http://logs.openstack.org/57/626357/2/check/nodepool-functional-py35/1e8b67d/controller/logs/screen-nodepool-launcher.txt.gz | 18:25 |
clarkb | good that the tests are able to reproduce things properly :) | 18:26 |
*** tobiash has quit IRC | 18:35 | |
*** tobiash has joined #zuul | 18:44 | |
*** tobiash has quit IRC | 18:47 | |
*** tobiash has joined #zuul | 18:48 | |
*** _yumapath has quit IRC | 18:54 | |
pabelanger | I can confirm latest wheel has static html bits, got it working last night locally finally :) | 18:55 |
*** sshnaidm|off has quit IRC | 19:24 | |
*** sshnaidm has joined #zuul | 19:27 | |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool master: Add unmanaged vm to nodepool functional tests https://review.openstack.org/626357 | 19:35 |
*** gouthamr_ is now known as gouthamr | 19:38 | |
*** hashar has joined #zuul | 19:43 | |
*** tobiash has quit IRC | 19:53 | |
*** tobiash has joined #zuul | 19:58 | |
openstackgerrit | Merged openstack-infra/zuul master: web: update status page layout based on screen size https://review.openstack.org/622010 | 20:02 |
mordred | tobiash, Shrews: I think we're expecting that nodepool patch to fail, due to the sdk bug right? | 20:21 |
mordred | tobiash, Shrews: so then if we run it with a depends-on the sdk patch we should expect the src version to pass and the non-src to fail - until we cut an sdk release, right? | 20:21 |
clarkb | mordred: ++ | 20:22 |
tobiash | mordred: yes, that fails currently, will try the depends-on | 20:22 |
mordred | cool | 20:22 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool master: Add unmanaged vm to nodepool functional tests https://review.openstack.org/626357 | 20:23 |
tobiash | mordred: lets see if it works | 20:24 |
mordred | tobiash: woot | 20:27 |
mordred | tobiash: I would very much like to have our tests fail when there are patches that don't work :) | 20:27 |
tobiash | mordred, corvus: what do you think about trying out pipenv together with pip lock files? | 20:30 |
tobiash | having lock files could save us in the future from getting broken by dependency updates as these would be gated together with the lock file | 20:30 |
*** dkehn has quit IRC | 20:39 | |
mordred | tobiash: I've poked at that a few times - there was a reason I bumped in to last time I looked that something didn't seem quite right yet | 20:45 |
tobiash | ah ok | 20:45 |
mordred | tobiash: oh - I know what it is - pbr doesn't understand pipenv files yet - so 'pip install path/to/zuul' would, I think, stop working | 20:45 |
mordred | tobiash: I have some thoughts on what to do about that - but havne't had time to work on them - I should probably write them down in case someone else wants to work on it | 20:46 |
tobiash | so pbr would need pipenv support | 20:46 |
mordred | yeah - basically, I think what we want to do is make pbr be able to provide a 'builder' plugin (forget the pep) | 20:46 |
mordred | and then pip would know how to call that to produce the wheel | 20:46 |
tobiash | I think at least in the long run it would be beneficial to not get broken every few weeks by a random dependency | 20:47 |
mordred | tobiash: I haven't worked on it because for openstack more broadly, the constraints construct that exists is a bit in conflict, so we'd need to solve that - this doesn't mean zuul couldn't do something, since it doesn't have a shared constraints concept | 20:47 |
tobiash | I see | 20:48 |
mordred | tobiash: yeah - totally agree - although I'd also like to make sure our testing is solid - otherwise we won't be able to judge patches to bump a lock file either | 20:48 |
tobiash | yeah, testing is hard and can be constantly improved | 20:52 |
*** dkehn has joined #zuul | 20:55 | |
mordred | ++ | 20:57 |
*** ssbarnea has quit IRC | 21:00 | |
mordred | tobiash, Shrews, corvus: https://review.openstack.org/#/c/626357 succeeds on the -src job and fails on the release job as expected | 22:32 |
mordred | tobiash, Shrews, corvus: which means I think we should land https://review.openstack.org/#/c/625923 and cut a release | 22:34 |
mordred | clarkb: ^^ | 22:34 |
corvus | tobiash, mordred: were you talking about having the gate being broken, or actual installations? | 22:34 |
corvus | (in your earlier conversation about pipenv) | 22:34 |
mordred | corvus: I was concerned about actual installations | 22:34 |
mordred | corvus: mostly just the mechanics are a bit different and differently opinionated, so I'm concerned it could turn into a rabbithole | 22:35 |
corvus | ok, so another phrasing is "should zuul adopt pipenv as the recommended installation method instead of pip install"? | 22:35 |
corvus | (because regardless of the method, it is a feature that the gate mirrors production installations) | 22:35 |
clarkb | I think as long as pypa's installation tool is pip we should support pip? | 22:37 |
mordred | yes, it is a feature that the gate mirrors production for sure ... I don't think pipenv is a replacement for pip for installation though | 22:37 |
mordred | pipevn is a mechanism for managing virtualenv installation that uses Pipfile | 22:37 |
clarkb | mordred: in the last few months there have been talks of pipenv and pip/pypa diverging (sort of forking) because they don't agree on that bit aiui | 22:37 |
mordred | Pipfile is a format that works together with Pipfile.lock and is a replacement for requirements.txt | 22:38 |
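For concreteness, a minimal Pipfile of the kind being discussed (package names and versions illustrative):

```toml
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
pbr = ">=1.1"
PyYAML = "*"

[dev-packages]
flake8 = "*"
```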
mordred | clarkb: yah. I'm not surprised | 22:38 |
clarkb | basically pipenv is asserting they are the tool now and don't want to be encumbered by pypa's slower methodical moves | 22:38 |
clarkb | fungi likely actually has more of this paged in as I think he follows distutils sig | 22:38 |
SpamapS | Ugh.. I really.. really dislike pipenv's method. | 22:39 |
mordred | the thing is - pipenv is a tool for installing software into virtualenvs | 22:39 |
SpamapS | Basically "we give up trying to be correct, so lets be wrong like npm" | 22:39 |
mordred | so I don't really see how they would see themselves as a replacement for pip | 22:39 |
clarkb | mordred: well even pip will tell you to only ever use it in a virtualenv | 22:39 |
clarkb | I think they more disagree on details than see that as a wrong method :/ | 22:39 |
SpamapS | You can pass --system and not install into a venv. | 22:39 |
SpamapS | just like npm can do -g | 22:40 |
SpamapS | it's a feature for feature copy of npm. | 22:40 |
mordred | SpamapS: ah. nod | 22:40 |
SpamapS | Complete with emojis and colors. | 22:40 |
mordred | yeah. I'm unlikely to find it interesting to switch to that | 22:40 |
SpamapS | But honestly, for the scale of most apps, it's fine. | 22:40 |
mordred | in general, I think "pip install zuul" needs to be able to work and I don't want to adopt anything that will break that | 22:41 |
SpamapS | You just run into trouble if you want to keep two apps in sync. | 22:41 |
clarkb | mordred: ++ put another way I think that THE official pypa package install tool (today pip) needs to work | 22:41 |
mordred | clarkb: yes | 22:41 |
SpamapS | I'm sure one could fairly easily automate the conversion of requirements.txt+upper-constraints.txt to Pipfile+Pipfile.lock | 22:41 |
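A hedged sketch of the conversion SpamapS suggests, covering only simple pinned or ranged requirements (no environment markers, extras, or URLs):

```python
def requirements_to_pipfile_lines(requirements):
    """Turn requirements.txt-style lines into a Pipfile [packages] section."""
    ops = ("==", ">=", "<=", "~=", ">", "<")  # two-char operators first
    lines = ["[packages]"]
    for req in requirements:
        req = req.split("#", 1)[0].strip()  # drop comments and blank lines
        if not req:
            continue
        for op in ops:
            if op in req:
                name, ver = req.split(op, 1)
                lines.append('%s = "%s%s"' % (name.strip(), op, ver.strip()))
                break
        else:
            lines.append('%s = "*"' % req)  # unpinned requirement
    return lines
```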
clarkb | now we might be able to support both at the same time. I'm not sure | 22:41 |
SpamapS | That might even be something one would want to contribute to pipenv. | 22:42 |
mordred | SpamapS: you could - although pipenv only has 2 sets of things- prod and dev (again copied from npm) | 22:42 |
mordred | so things like doc/requirements.txt get left out in the cold | 22:42 |
SpamapS | docs? | 22:42 |
SpamapS | you | 22:42 |
SpamapS | who has docs? | 22:42 |
SpamapS | ;-) | 22:42 |
SpamapS | That's what README.md is for | 22:42 |
mordred | I keep looking at whether pipfile would improve life - and so far it's always been only 80% of the way there - having left out the important/hard bits | 22:44 |
fungi | i honestly don't recall the specifics now, but pipenv maintainers were wanting to break new ground on dependency management in ways which would be counter to what pip supports | 22:44 |
mordred | I'm a big fan of the Cargo / lock model - so I *want* it to be something that is usable | 22:44 |
fungi | i'll see if i can find the distutils-sig thread for background | 22:45 |
mordred | although otoh they all still drive me crazy with their adoption of toml for things - so I'm also not in a huge rush | 22:45 |
SpamapS | The lock file is fine. | 22:48 |
fungi | there was also a huge blowup on reddit (surprise!) when pypa added an entry on the packaging site for pipenv as one of the possible tools for installing python packages, and then the pipenv maintainers went around declaring it "the official tool for installing python packages" | 22:49 |
fungi | that created a lot of unnecessary friction in the community | 22:49 |
fungi | https://github.com/pypa/pipenv/commit/5853846 was the eventual result of that | 22:55 |
SpamapS | Yeah I recall that. | 22:57 |
SpamapS | anyway, I don't think it would hurt to have Pipfile/Pipfile.lock in the repo | 22:58 |
SpamapS | mordred: also aren't we not allowed to say TOML? | 22:58 |
mordred | SpamapS: I try to not say it as frequently as possible | 22:59 |
fungi | the distutils-sig thread i was thinking of was https://mail.python.org/archives/list/distutils-sig@python.org/thread/YFJITQB37MZOPOFJJF3OAQOY4TOAFXYM/ | 23:05 |
fungi | though the github pr/pep linked near the end then ended up having their conversations migrate into the python community discourse site at which point i lost track | 23:06 |
fungi | but i think the conclusion (if one was reached) is that pyproject.toml is for declaring dependencies of python packages and pipfile is for declaring the environments of applications which happen to depend on some python packages | 23:09 |
clarkb | I'm always slightly saddened when "understanding how a unix shell works" is seen as a barrier to entry. Yes it is a barrier of sorts, but not understanding this powerful tool really prevents you from taking advantage of whatever system you are running it on | 23:12 |
clarkb | (replace unix shell with windows powershell as appropriate) | 23:13 |
clarkb | python doesn't execute in a vacuum. | 23:13 |
clarkb | I guess it could | 23:13 |
fungi | you'd be surprised how many requests for help i see from people (mostly windows users) trying to enter `pip install something` in an interactive python interpreter session | 23:18 |
fungi | "i'm trying to install the package but all i keep getting is SyntaxError: invalid syntax" | 23:18 |
fungi | windows users in particular seem to not even realize they have a shell prompt available in their operating system, and just double-click the python icon | 23:19 |
*** rlandy|rover is now known as rlandy|rover|bbl | 23:24 | |
*** hashar has quit IRC | 23:41 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!