*** jesusaur has joined #zuul | 00:04 | |
*** caphrim007 has quit IRC | 00:36 | |
pabelanger | mordred: looking at the zuul 3.3.0 wheel on pypi, I don't see the compiled index.html bit for yarn, is that expected? | 01:29 |
pabelanger | eg: I think our release job for zuul is missing yarn | 01:30 |
pabelanger | which is causing those bits not to be added into the wheel | 01:30 |
clarkb | pabelanger: I think only the latest wheel is expected to have them | 01:30 |
clarkb | if you use the sdist, it works on older releases iirc | 01:30 |
pabelanger | clarkb: I unzipped it, but cannot see where they are stored | 01:31 |
pabelanger | I thought it was zuul/web/static | 01:31 |
clarkb | I don't recall, but there were numerous bugs that I thought got fixed | 01:32 |
pabelanger | yah, I think we fixed it for our docker images | 01:32 |
pabelanger | but, I cannot see it inside the pypi wheel | 01:32 |
pabelanger | I also think http://git.openstack.org/cgit/openstack-infra/puppet-zuul/tree/manifests/web.pp#n218 is protecting zuul.o.o from this issue too, since we are still fetching them from tarballs.o.o | 01:32 |
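A minimal sketch (not from the log) of the wheel check being described: inspect the downloaded wheel with Python's zipfile module for entries under zuul/web/static, the path mentioned above. The wheel filename is a hypothetical local download from PyPI.

```python
import zipfile

WHEEL = "zuul-3.3.0-py3-none-any.whl"  # hypothetical local download

with zipfile.ZipFile(WHEEL) as wheel:
    # zuul/web/static is where the yarn-built assets are expected to live
    static = [n for n in wheel.namelist() if n.startswith("zuul/web/static/")]

if static:
    print("found %d static files, e.g. %s" % (len(static), static[0]))
else:
    print("no zuul/web/static/ entries; the release job likely skipped yarn")
```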
pabelanger | checking out release jobs for zuul | 01:33 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul master: Use publish-zuul-python-branch-tarball job https://review.openstack.org/618634 | 01:49 |
pabelanger | clarkb: mordred: ^ should at least fix our branch-tarball job to now install the yarn bits, then we can confirm whether the static html is properly included in the wheel. I don't see it for the 3.3.0 release, but I may just be missing something | 01:50 |
pabelanger | note depends-on for project-config | 01:50 |
SpamapS | pabelanger: when you're saying zone, are you meaning the same thing as an openstack availability zone? | 02:56 |
*** jhesketh has quit IRC | 05:05 | |
*** jhesketh has joined #zuul | 05:07 | |
*** bjackman has joined #zuul | 06:43 | |
*** Guest99010 has quit IRC | 07:12 | |
*** bjackman has quit IRC | 07:35 | |
*** bjackman has joined #zuul | 08:48 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-base-jobs master: Add base openshift job https://review.openstack.org/570669 | 09:32 |
*** sshnaidm|off has quit IRC | 11:02 | |
*** sshnaidm|off has joined #zuul | 11:09 | |
*** bjackman has quit IRC | 11:53 | |
*** bjackman has joined #zuul | 12:01 | |
*** bjackman has quit IRC | 12:12 | |
*** bjackman has joined #zuul | 12:13 | |
*** bjackman has quit IRC | 12:33 | |
*** bjackman has joined #zuul | 12:37 | |
*** bjackman_ has joined #zuul | 12:39 | |
*** bjackman has quit IRC | 12:41 | |
*** bjackman_ has quit IRC | 13:09 | |
*** bjackman_ has joined #zuul | 13:11 | |
*** bjackman_ has quit IRC | 13:26 | |
*** sshnaidm|off is now known as sshnaidm | 15:31 | |
*** dkehn_ has joined #zuul | 15:52 | |
*** dkehn_ has quit IRC | 15:57 | |
*** dkehn has joined #zuul | 16:04 | |
pabelanger | SpamapS: maybe, in my use case, it is more about being able to say which executors can be used by a specific tenant. Or my cloud does FIPs, and adding an executor on the same private network means I could maybe stop using FIPs | 16:18 |
clarkb | pabelanger: in that case couldn't you map executors to nodepool providers and that would be enough? | 16:20 |
clarkb | a nodeset is always provided by one provider | 16:20 |
clarkb | nodepool does understand that much | 16:20 |
pabelanger | clarkb: yah, I think the issue today is, in that setup, an executor can still run jobs for other nodesets in other cloud providers | 16:24 |
pabelanger | If the executor is in a zone, then it only executes jobs for that zone, based on what the nodepool node is set up for | 16:24 |
clarkb | yes that's the new feature right? | 16:28 |
clarkb | basically zone == nodepool provider and that should be sufficient I think | 16:28 |
clarkb | maybe nodepool pool | 16:28 |
pabelanger | yah, in the use case we are discussing, a zuul executor may be behind a corp firewall, and doesn't have access to other cloud providers. So we'd basically want per-tenant zuul executors, since only those jobs could be run behind the firewall | 16:31 |
pabelanger | we == ansible-network | 16:32 |
clarkb | it would be less tenant specific and more provider specific right? | 16:33 |
clarkb | tenant doesn't imply firewall but provider might | 16:33 |
pabelanger | yup, and we are doing that today, we have ansible-network tenant with specific providers | 16:36 |
pabelanger | however, if we add a zuul-executor into our region, to be closer to the jobs and gain stability, other jobs outside of ansible-network will be scheduled onto that executor, and they may not want to be, since they are not in our tenant. | 16:38 |
pabelanger | and I don't know how changing a nodeset on the nodepool side will stop the executor from even getting the job | 16:38 |
pabelanger | reading the zone patch that rcarrillocruz started, it seemed like the way to limit which jobs run on an executor | 16:39 |
clarkb | iirc the code was supposed to be an update to gearman so that the gearman job was "zone" specific | 16:39 |
clarkb | the executors could be configured to only run jobs on nodes from specific providers | 16:40 |
pabelanger | yah, that is what https://review.openstack.org/549197/ looks to do, if I understand the original story | 16:40 |
pabelanger | but need some eyes / reviews on it to make sure I am on the right path | 16:40 |
clarkb | ya so nodepool needs to tell zuul which "zone" or "pool" the resources are from | 16:42 |
pabelanger | yah, and https://review.openstack.org/#/c/618622/ is nodepool change | 16:43 |
clarkb | then in zuul/executor/client.py you use that info to request the zone-specific job (rather than what is hardcoded now) | 16:43 |
pabelanger | however, because nodepool doesn't understand what a nodeset is, I don't know how best to do the lookup from zk on the zuul/executor/client side. | 16:43 |
pabelanger | I guess we just look at the first node in the nodeset, pull the zone, then assume all other nodes are in the same zone | 16:44 |
pabelanger | clarkb: yah | 16:44 |
clarkb | ya so this is what I was trying to explain before. drop zone as a new construct. the thing nodepool understands is "pool" | 16:44 |
clarkb | so just use that I think | 16:44 |
clarkb | you know all nodes in a node request are from the same pool | 16:45 |
clarkb | your zone is provider_name.pool_name | 16:45 |
pabelanger | yah, right. I see now, we are kinda duplicating zone / pool here | 16:45 |
pabelanger | okay, I can work that up today | 16:46 |
pabelanger | and play with it | 16:46 |
clarkb | adding zone would work too but I think it would be redundant | 16:47 |
pabelanger | I guess the bonus of a zone would be adding more than 1 provider into it. If we did provider_name.pool_name, we'd be limited to 1 | 16:48 |
clarkb | that's true, maybe we could address that by allowing an executor to talk to more than one pool | 16:50 |
pabelanger | Yah | 16:51 |
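A hypothetical sketch of the zone derivation just agreed on: since all nodes in a node request come from the same pool, look at the first node and build the zone as provider_name.pool_name. The node attribute names are illustrative, not the actual patch.

```python
def zone_for_nodes(nodes):
    """Return 'provider_name.pool_name' for a fulfilled node request."""
    if not nodes:
        return None
    # All nodes in a request are from the same pool, so the first is enough.
    first = nodes[0]
    return "%s.%s" % (first.provider, first.pool)  # assumed node attributes
```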
sshnaidm | how can I pass variable value from pre.yml to post.yml, will set_fact keep it? | 17:08 |
sshnaidm | without dumping on disk somewhere | 17:08 |
pabelanger | sshnaidm: no, you need to write it to disk | 19:27 |
pabelanger | I wonder how we could leverage the fact cache we provide on the zuul-executor | 19:28 |
pabelanger | oh, TIL | 19:29 |
pabelanger | set_fact: cacheable: true | 19:29 |
pabelanger | that should do it | 19:29 |
pabelanger | https://docs.ansible.com/ansible/2.5/modules/set_fact_module.html | 19:29 |
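A minimal sketch of how that could look in a job's playbooks, per the set_fact docs linked above; the variable name is illustrative. Note that the executor's fact cache is itself stored on disk.

```yaml
# pre.yml (illustrative): save a value into the fact cache
- hosts: all
  tasks:
    - name: Remember a value for later playbooks
      set_fact:
        build_marker: prepared
        cacheable: true

# post.yml (illustrative): the cached fact is available again
- hosts: all
  tasks:
    - name: Use the value saved in pre
      debug:
        msg: "marker was {{ build_marker }}"
```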
*** jimi|ansible has quit IRC | 19:39 | |
pabelanger | clarkb: thinking more about the provider_name.pool_name 'zone' on the nodepool side, how do we opt in to that in nodepool.yaml? We don't want to enable it by default; maybe some executor-zone bool? | 19:50 |
clarkb | pabelanger: I think nodepool can always provide that data, then zuul can do what it wants with it | 19:51 |
clarkb | pabelanger: from the zuul side I would check the registered gearman functions list and if there are registered executors for a pool then send all jobs for nodes on it that way | 19:52 |
pabelanger | okay, that would also work | 19:52 |
pabelanger | and if the function is not found, don't use the zones | 19:52 |
clarkb | yup | 19:52 |
pabelanger | ack | 19:52 |
clarkb | assume any executor can run it in that case | 19:52 |
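A hypothetical sketch of that dispatch decision on the zuul side: prefer the zone-specific gearman function when some executor has registered it, otherwise fall back to the unzoned function so any executor can run the job. The ":<zone>" suffix format is illustrative.

```python
def execute_function_name(zone, registered_functions):
    """Pick the gearman function to submit an execute job to."""
    if zone:
        zoned = "executor:execute:%s" % zone  # illustrative naming scheme
        if zoned in registered_functions:
            return zoned
    # No executor registered for this zone: any executor can run the job.
    return "executor:execute"
```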
*** dkehn has quit IRC | 20:31 | |
*** dkehn has joined #zuul | 20:36 | |
pabelanger | clarkb: reading more about gearman, having issues figuring out how to see which functions are registered. Best I can see is using 'status' but that is admin level I think | 20:36 |
clarkb | pabelanger: yes status lists all the functions. I don't think gearman does authorization checking. Any connection should be able to run status | 20:37 |
clarkb | it may be "admin" in that it isn't directly used for running jobs but is instead administrative | 20:38 |
pabelanger | ah, okay | 20:38 |
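For reference, a minimal sketch of listing registered functions via gearman's plain-text admin protocol, which (as noted above) needs no special authorization; host and port are assumptions, 4730 being gearman's default.

```python
import socket

def gearman_functions(host="localhost", port=4730):
    """Return {function_name: available_workers} from gearman's 'status'."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(b"status\n")
        data = b""
        # The response is "FUNCTION\tTOTAL\tRUNNING\tAVAILABLE_WORKERS"
        # lines, terminated by a lone "." line.
        while not data.endswith(b".\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            data += chunk
    functions = {}
    for line in data.decode().splitlines():
        if line == ".":
            break
        name, _total, _running, workers = line.split("\t")
        functions[name] = int(workers)
    return functions
```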
openstackgerrit | Paul Belanger proposed openstack-infra/zuul master: WIP Add support for zones in executors https://review.openstack.org/549197 | 21:56 |
pabelanger | clarkb: okay, if you don't mind looking over ^, that should be what we talked about today for zuul/executor/client.py | 21:56 |
pabelanger | I still need to write some tests and refactor the status gearman call | 21:57 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul master: WIP Add support for zones in executors https://review.openstack.org/549197 | 22:41 |
openstackgerrit | Paul Belanger proposed openstack-infra/zuul master: WIP Add support for zones in executors https://review.openstack.org/549197 | 22:44 |
sshnaidm | pabelanger, yeah, cache may help.. although it actually dumps to disk :) | 22:50 |
*** ianychoi has joined #zuul | 23:34 |