*** jkilpatr has joined #zuul | 00:11 | |
*** jkilpatr has quit IRC | 02:38 | |
*** jesusaur has quit IRC | 03:46 | |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Support ram limit per pool https://review.openstack.org/504284 | 04:08 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: WIP: Honor cloud quotas before launching nodes https://review.openstack.org/503838 | 04:14 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Make max-servers optional https://review.openstack.org/504282 | 04:14 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Support cores limit per pool https://review.openstack.org/504283 | 04:14 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Don't fail on quota exceeded https://review.openstack.org/503051 | 04:14 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Support ram limit per pool https://review.openstack.org/504284 | 04:14 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: WIP: Honor cloud quotas before launching nodes https://review.openstack.org/503838 | 04:21 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Make max-servers optional https://review.openstack.org/504282 | 04:21 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Support cores limit per pool https://review.openstack.org/504283 | 04:21 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Don't fail on quota exceeded https://review.openstack.org/503051 | 04:21 |
openstackgerrit | Tobias Henkel proposed openstack-infra/nodepool feature/zuulv3: Support ram limit per pool https://review.openstack.org/504284 | 04:21 |
openstackgerrit | Merged openstack-infra/zuul feature/zuulv3: change assert(Not)Equals to assert(Not)Equal https://review.openstack.org/503870 | 04:43 |
dmsimard | mordred: sweet, they fixed https://github.com/ansible/ansible/issues/24207 | 05:33 |
openstackgerrit | David Moreau Simard proposed openstack-infra/zuul feature/zuulv3: Delete IncludeRole object from result object for include_role tasks https://review.openstack.org/504238 | 06:04 |
*** pbrobinson has joined #zuul | 06:13 | |
*** pbrobinson has quit IRC | 06:52 | |
*** pbrobinson has joined #zuul | 06:54 | |
*** xinliang has quit IRC | 07:12 | |
*** xinliang has joined #zuul | 07:25 | |
*** xinliang has quit IRC | 07:25 | |
*** xinliang has joined #zuul | 07:25 | |
*** hashar has joined #zuul | 08:13 | |
*** electrofelix has joined #zuul | 08:47 | |
*** pbrobinson has quit IRC | 09:49 | |
*** pbrobinson has joined #zuul | 09:54 | |
*** pbrobinson has quit IRC | 10:03 | |
*** pbrobinson has joined #zuul | 10:04 | |
*** pbrobinson has quit IRC | 10:06 | |
*** pbrobinson has joined #zuul | 10:06 | |
*** pbrobinson has quit IRC | 10:07 | |
*** pbrobinson has joined #zuul | 10:07 | |
*** pbrobinson has quit IRC | 10:09 | |
*** pbrobinson has joined #zuul | 10:09 | |
*** yolanda has joined #zuul | 12:35 | |
*** yolanda_ has joined #zuul | 12:35 | |
*** yolanda_ has quit IRC | 12:35 | |
*** jkilpatr has joined #zuul | 12:48 | |
*** dkranz has joined #zuul | 13:26 | |
*** yolanda has quit IRC | 14:10 | |
*** yolanda has joined #zuul | 14:13 | |
*** hashar is now known as hasharAway | 14:44 | |
* tobiash sits at the gate two hours before boarding... | 15:12 | |
jeblair | we're thinking of having a post-transition zuul roadmap discussion at 19:30 utc. if remote folks would like to join in, let me know and we can set up an asterisk room | 15:37 |
tobiash | jeblair: I'll probably be on a plane at that time | 15:40 |
SpamapS | jeblair: I might be interested, if for nothing else to listen. | 15:40 |
SpamapS | but if it's a lot of trouble to set up a mic, I'm fine with reading the notes after :) | 15:43 |
*** EmilienM has quit IRC | 15:55 | |
*** EmilienM has joined #zuul | 15:57 | |
jeblair | Shrews: i'm tracking down a nodepool problem and would love your help if you're online today | 16:05 |
openstackgerrit | David Moreau Simard proposed openstack-infra/zuul feature/zuulv3: Delete IncludeRole object from result object for include_role tasks https://review.openstack.org/504238 | 16:17 |
SpamapS | speaking of nodepool | 16:20 |
SpamapS | is there a way to use it without uploading images? | 16:20 |
tobiash | SpamapS: yes (v3) | 16:21 |
tobiash | SpamapS: if you mean using pre-existing images | 16:22 |
*** dkranz has quit IRC | 16:22 | |
tobiash | SpamapS: have a look into the node-unmanaged-image.yaml test fixture (also documented in nodepoolv3 docs, but I'm not sure if they're hosted somewhere) | 16:23 |
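For context, the "pre-existing images" approach tobiash mentions means a label references an image already in the cloud rather than one nodepool builds and uploads. A minimal sketch of such a nodepool v3 config follows; the provider, flavor, and username values are hypothetical, and the exact keys may differ between nodepool versions:

```yaml
labels:
  - name: ubuntu-xenial
    min-ready: 1

providers:
  - name: mycloud            # hypothetical provider name
    cloud: mycloud           # entry from clouds.yaml
    cloud-images:
      - name: ubuntu-xenial  # image already present in the cloud
        username: ubuntu     # login user baked into that image
    pools:
      - name: main
        max-servers: 5
        labels:
          - name: ubuntu-xenial
            cloud-image: ubuntu-xenial   # a reference, not an upload
            flavor-name: m1.medium
```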
jeblair | mordred: http://paste.openstack.org/show/621205/ | 16:28 |
*** yolanda has quit IRC | 16:59 | |
*** olaph has quit IRC | 17:00 | |
SpamapS | tobiash: thanks! | 17:00 |
SpamapS | Because it was rather difficult to get image upload access to GD's clouds (locked down for the usual reasons).. and I think it may be a common pitfall for experimenting with zuul+nodepool | 17:01 |
*** electrofelix has quit IRC | 17:05 | |
openstackgerrit | Monty Taylor proposed openstack-infra/nodepool feature/zuulv3: Consume server.id from shade create exception https://review.openstack.org/504464 | 17:06 |
mordred | clarkb: https://review.openstack.org/#/c/504462/ and https://review.openstack.org/#/c/504464/ | 17:06 |
SpamapS | oh wow | 17:13 |
SpamapS | fun fact.. godaddy's openstack only allows nova servernames of _15_ characters. | 17:13 |
*** olaph has joined #zuul | 17:14 | |
SpamapS | so my image label is now 'ux' (ubuntu-xenial) and my cloud's name is 'p' (for phoenix) ;) | 17:14 |
SpamapS | ux-p-0000000000 | 17:14 |
*** olaph has quit IRC | 17:18 | |
*** olaph has joined #zuul | 17:20 | |
SpamapS | 2017-09-15 10:32:16,158 INFO nodepool.NodeLauncher-0000000115: Creating server with hostname ux-p-0000000115 in p from image ubuntu-xenial for node id: 0000000115 | 17:34 |
*** hasharAway has quit IRC | 17:35 | |
*** hashar has joined #zuul | 17:35 | |
jeblair | pabelanger: i think we're seeing erroneous behavior in nodepool because we're running the same providers on multiple launchers | 17:35 |
jeblair | pabelanger: i believe the current algorithm only supports one launcher per provider | 17:36 |
jeblair | pabelanger: so we need to split providers across nl01 and nl02 | 17:36 |
*** eventingmonkey_ is now known as eventingmonkey | 17:40 | |
jeblair | http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html#id1 (2nd to last paragraph) | 17:41 |
clarkb | jeblair: that was my understanding when we wrote the specs at least | 17:43 |
pabelanger | jeblair: clarkb: okay, do we have a preference on what that breakdown would look like? Being friday, should we just stop 1 launcher until the redesign happens? | 17:58 |
*** jkilpatr has quit IRC | 17:58 | |
jeblair | pabelanger: we haven't flipped the switch yet, i don't think we have to delay the split. | 17:59 |
jeblair | pabelanger, clarkb: i don't know -- maybe we should roughly split them up by region and try to keep the quota roughly even? maybe dfw in 01 and ord+iad in 02, etc.... | 18:00 |
clarkb | eventually I see us running one in each region only against that region | 18:02 |
clarkb | that way if you lose a cloud only that pool is affected | 18:02 |
clarkb | for short term that idea of balancing sounds good to me | 18:02 |
jeblair | that sounds good | 18:02 |
jeblair | if we do that, hopefully we can scale them down pretty small :) | 18:03 |
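The split discussed above would mean each launcher loads a config listing a disjoint set of providers, roughly balanced by quota. A sketch under that assumption, using the regions mentioned (provider names here are hypothetical):

```yaml
# nl01's nodepool.yaml — only this launcher manages dfw
providers:
  - name: rax-dfw

# nl02's nodepool.yaml — the remaining regions
providers:
  - name: rax-ord
  - name: rax-iad
```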
*** jesusaur has joined #zuul | 18:07 | |
openstackgerrit | David Moreau Simard proposed openstack-infra/zuul feature/zuulv3: Delete IncludeRole object from result object for include_role tasks https://review.openstack.org/504238 | 18:33 |
*** bhavik1 has joined #zuul | 18:37 | |
*** bhavik1 has quit IRC | 18:44 | |
mordred | SpamapS: wow. that's a ... ... wow | 19:01 |
mordred | jeblair, clarkb: https://review.openstack.org/#/c/504276 | 19:03 |
mordred | jeblair: http://logs.openstack.org/76/504276/4/check/zuul-migrate/9404af4/playbooks/legacy/congress-pe-replicated-mysql/post.yaml | 19:14 |
SpamapS | mordred: inorite? it's because some users of some vms register them with AD. | 19:15 |
SpamapS | which is also o_O | 19:16 |
SpamapS | mordred: I'm feeling lazy. How do I tell shade to not use floating ips even though it wants to, because my addresses only have non-internet-routable ips? | 19:17 |
jeblair | at least it's more than 8.3 ? | 19:17 |
SpamapS | jeblair: >:| | 19:17 |
SpamapS | don't tempt them | 19:17 |
mordred | SpamapS: two things ... | 19:18 |
mordred | SpamapS: you can set "floating_ip_source: None" in clouds.yaml | 19:19 |
mordred | SpamapS: which will tell shade to never bother with fip calls | 19:19 |
mordred | SpamapS: you can also set private: True on the cloud entry, which tells shade you want to use the private interface as "the interface" | 19:20 |
mordred | SpamapS: (which I recommend in this case for you, since that's what your situation actually is) | 19:21 |
mordred | SpamapS: finally, you can add a specific entry for network in clouds.yaml - https://docs.openstack.org/os-client-config/latest/user/network-config.html ... | 19:22 |
mordred | SpamapS: so you can say "networks: [{name: private, routes_externally: true, default_interface: true}]" - and that'll tell shade to boot vms on that network specifically (default_interface) and to treat that network as the "external" network | 19:23 |
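Putting the three clouds.yaml options mordred describes side by side (you would pick whichever matches your situation, not all three; the cloud name and network name here are placeholders):

```yaml
clouds:
  mycloud:
    # option 1: never attempt floating IP API calls
    floating_ip_source: None
    # option 2: treat the private interface as "the" interface
    private: true
    # option 3: describe the network explicitly
    networks:
      - name: private
        routes_externally: true
        default_interface: true
```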
SpamapS | yeeahhh... that last one is complicated here. | 19:24 |
mordred | SpamapS: which will cause the IPs that come from that network to show up in public_v4 | 19:24 |
SpamapS | there's this interesting AZ<->Network 1:1 mapping that would make that weird. | 19:24 |
SpamapS | I guess I could make 3 pools | 19:24 |
SpamapS | rather than listing all 3 az's in one | 19:25 |
mordred | SpamapS: oh - hrm. you may want to just do private: true then | 19:29 |
SpamapS | Yeah trying that right now | 19:32 |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul feature/zuulv3: Order post playbook content properly https://review.openstack.org/504502 | 19:35 |
jeblair | https://etherpad.openstack.org/p/zuulv3-roadmap | 19:35 |
*** yolanda has joined #zuul | 19:48 | |
jlk | o/ | 19:51 |
jlk | oh a roadmap | 19:51 |
*** jkilpatr has joined #zuul | 20:25 | |
jlk | jeblair: I assume you're all in a room hashing out this roadmap. Is jamielennox in there? I thought he had some thoughts for post-migration. | 20:32 |
jeblair | jlk: yes | 20:33 |
jeblair | jlk: anything you want to jog his memory about? | 20:34 |
jlk | I honestly don't remember. | 20:34 |
jlk | other than nodepool driver stuff. I'd like to dive into that soon | 20:34 |
jlk | on that front, it'd be nice to have an ability to install out of band drivers, if at all possible. Forking nodepool to create a driver for an inhouse system would kind of suck | 20:35 |
SpamapS | jlk: do you have specific ideas for a driver? | 20:35 |
jlk | For bonny, we're probably going to focus on Docker, since we can easily do bluemix VMs with docker daemons running on them | 20:36 |
SpamapS | is that really a driver for nodepool tho? | 20:36 |
SpamapS | docker seems like a single-host thing | 20:36 |
jlk | a driver for k8s is interesting as well, given the numerous k8s services. | 20:36 |
jlk | well, if Nodepool is to expose a container, then yes | 20:37 |
jlk | in a way that the consumer doesn't really notice/care that it's a container instead of a VM | 20:37 |
SpamapS | ultimately any nodepool driver is something that must 1) allocate resources, and 2) feed them into an ansible inventory. right? | 20:37 |
jlk | yes, including connection details, such as a driver to use | 20:38 |
SpamapS | so k8s would be like "allocate a k8s cluster" and then local to the executor, run playbooks that mostly just call kubectl? | 20:38 |
* SpamapS hasn't really thought this through | 20:38 | |
jlk | I'm not really sure on that front either | 20:39 |
SpamapS | seems like that will need to drive some zuul change too | 20:39 |
jlk | it could be that k8s driver is just a way to expose short lived containers to zuul. "Hey, go get me a container I can muck with, using this image" | 20:39 |
jlk | the consumer's playbooks just run against that container. | 20:39 |
SpamapS | ansible_connection=k8s ? Is that a thing? :) | 20:39 |
SpamapS | like really, I don't know | 20:40 |
jlk | much like "openstack" driver vs "gce" driver would largely be transparent to the end user, at the end they just get a VM they can ssh to | 20:40 |
jlk | SpamapS: I don't know. | 20:40 |
SpamapS | I have a k8s-aaS that I'm going to be working on as a primary responsibility, so if we can figure this out it will make nodepool more attractive for me to work on. :) | 20:40 |
jlk | if k8s is using docker, couldn't you use the docker connection method? | 20:40 |
SpamapS | I don't think so, kubelet owns the docker details | 20:40 |
jlk | oh good! I have a very minimal understanding of k8s. I ran minikube once or twice and it didn't work so well. | 20:41 |
jeblair | yeah "give me a k8s" and "run on k8s" are both interesting things | 20:41 |
jlk | both interesting, but different | 20:41 |
SpamapS | jlk: also I don't think the docker connection type runs over SSH. So if the docker is on a remote host... | 20:42 |
*** jkilpatr has quit IRC | 20:42 | |
SpamapS | we might even end up writing a connection driver for Ansible for this. | 20:42 |
jlk | I'm pretty sure you can specify the host to connect to with the docker connection method | 20:43 |
jlk | it supports more than just localhost | 20:43 |
SpamapS | since at the core of things, we just want to docker exec [ ansible python stuff ] | 20:43 |
jlk | http://docs.ansible.com/ansible/latest/intro_inventory.html#non-ssh-connection-types | 20:43 |
*** jkilpatr has joined #zuul | 20:43 | |
SpamapS | jlk: that was not the case 2 years ago. 2 years is a long time I know. :) | 20:43 |
jlk | oh hrm. | 20:44 |
jlk | shit, I may be wrong here | 20:44 |
SpamapS | it was extremely annoying then, because I almost never wanted to run ansible on docker containers in my local box. ;) | 20:44 |
jlk | ah yes | 20:44 |
jlk | ansible_docker_extra_args | 20:44 |
jlk | Could be a string with any additional arguments understood by Docker, which are not command specific. This parameter is mainly used to configure a remote Docker daemon to use. | 20:44 |
SpamapS | ah you can use docker's remote API | 20:44 |
SpamapS | apparently that's a thing docker added | 20:44 |
jlk | yeah | 20:44 |
SpamapS | ansible_docker_extra_args: "--tlsverify --tlscacert=/path/to/ca.pem --tlscert=/path/to/client-cert.pem --tlskey=/path/to/client-key.pem -H=tcp://myserver.net:4243" | 20:45 |
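As an inventory sketch of the remote-daemon idea, the docker connection plugin can be pointed at another host via those extra args; the container name, endpoint, and port below are placeholders, not values from this conversation:

```yaml
all:
  hosts:
    mycontainer:           # name of a container already running on the remote daemon
      ansible_connection: docker
      ansible_docker_extra_args: "--tlsverify -H=tcp://dockerhost.example.net:4243"
```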
jlk | which, may be how nodepool interacts with it too in order to launch the container | 20:45 |
SpamapS | so yeah, nodepool could get into the business of spinning up containers into a ready state on top of nodes that are in a ready state. | 20:46 |
SpamapS | feels like a second service.. containerpool | 20:46 |
*** yolanda has quit IRC | 20:46 | |
SpamapS | though | 20:46 |
SpamapS | the reasons for wanting ready/pooled containers are less than wanting ready/pooled vms | 20:47 |
jlk | well... | 20:47 |
jlk | first pass of it could simply be a static listing of daemons to reach out to | 20:47 |
jlk | kind of like a static listing of openstack clouds to reach out to | 20:47 |
SpamapS | most of the pain is done in the image build for containers | 20:47 |
SpamapS | yeah I like that | 20:47 |
SpamapS | just turn the pools: for that driver into an inventory, and let nodepool do docker image builds. | 20:48 |
SpamapS | though.. another thing I've been thinking about is that I kind of want my image build to be a job that my other jobs depend on. ;) | 20:48 |
jlk | sure | 20:48 |
jlk | so that's more of "give me a host that has docker running on it" | 20:49 |
SpamapS | yeah which nodepool can do now | 20:49 |
jlk | and you feed credentials in for a registry | 20:49 |
SpamapS | label: ubuntu-xenial-docker-1.7 | 20:49 |
jlk | it's an interesting use case, but separate. | 20:49 |
jlk | For things like "run tox" or "lint my stuffs" I'm guessing most consumers don't care if it happens in a container (like a good chunk of travis consumers) | 20:50 |
jlk | helps service providers with throughput and packing jobs onto resources | 20:50 |
SpamapS | Right the idea of having a nodepool driver that can give you containers that are shared on nodes is the fun one. | 20:50 |
*** jkilpatr has quit IRC | 20:51 | |
SpamapS | I personally think this makes most sense as 'containerpool'... which uses a nodepool for physical resources and then just talks the nodepool protocol via zk with zuul when zuul asks for containers to run its playbooks against. | 20:51 |
SpamapS | (not necessarily a separate code base, but a separate service) | 20:52 |
jlk | I honestly don't like that | 20:52 |
jlk | I'd love to run zuul without needing OpenStack | 20:52 |
SpamapS | you can. Where you gonna get them VMs? ;) | 20:52 |
jlk | who says they're VMs? | 20:52 |
SpamapS | M's | 20:52 |
SpamapS | V or not | 20:52 |
SpamapS | nodepool supports static servers. | 20:53 |
SpamapS | so you're good there. | 20:53 |
jlk | Same place we get "openstack clouds" | 20:53 |
jlk | correct | 20:53 |
jlk | I guess I don't see the value in a second service. | 20:53 |
jlk | unless it's too difficult to create the driver model | 20:53 |
SpamapS | Well the value is that containers run on servers. | 21:04 |
openstackgerrit | David Moreau Simard proposed openstack-infra/zuul feature/zuulv3: Add netaddr requirements for running ipv4|ipv6 filters https://review.openstack.org/504522 | 21:04 |
dmsimard | jeblair ^ this would be useful for multinode things but not a blocker | 21:05 |
jlk | SpamapS: I don't follow. Containers run on servers, yes, but how does it matter if a nodepool driver causes the container to appear or a wholly separate service is used? | 21:06 |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul feature/zuulv3: Add SELinux type enforcement https://review.openstack.org/504526 | 21:16 |
openstackgerrit | David Moreau Simard proposed openstack-infra/zuul-jobs master: DNM: Test v3 multinode things https://review.openstack.org/503806 | 21:23 |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul feature/zuulv3: Order post playbook content properly https://review.openstack.org/504502 | 21:33 |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul feature/zuulv3: Actually emit job content https://review.openstack.org/504276 | 21:33 |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul feature/zuulv3: Split output into jobs+project_templates and projects https://review.openstack.org/504123 | 21:33 |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul feature/zuulv3: Don't set nonvoting based on name suffix https://review.openstack.org/503769 | 21:33 |
openstackgerrit | Monty Taylor proposed openstack-infra/zuul feature/zuulv3: Emit job definitions https://review.openstack.org/503874 | 21:33 |
openstackgerrit | Clark Boylan proposed openstack-infra/zuul master: Case sensitive label matching https://review.openstack.org/482856 | 21:38 |
mordred | tristanC: http://paste.openstack.org/show/621233/ | 21:48 |
mordred | tobiash: just saw this in the scheduler log | 21:48 |
mordred | gah | 21:48 |
mordred | jeblair: /var/log/zuul/executor-debug.log:OSError: [Errno 28] No space left on device: '/var/lib/zuul/builds/227498fe8faf46d887b3330b81156022' | 21:58 |
SpamapS | jlk: so zuul is going to express to nodepool "I need X ubuntu-xenial-plusthings containers". | 22:00 |
SpamapS | jlk: and the label for ubuntu-xenial-plusthings will have some kind of resource allocation like "8G of RAM" | 21:58 |
SpamapS | jlk: if nodepool's docker driver only knows how to make containers.. somewhere somebody has to have spun up docker on a bunch of static nodes. | 22:00 |
jlk | yes. | 22:00 |
jlk | just like somebody has to have spun up openstack on a bunch of nodes | 22:00 |
SpamapS | jlk: but what I really want is a nice elastic pool of nodes. | 22:00 |
SpamapS | Right but the difference is openstack runs on baremetal and is inherently not elastic. | 22:00 |
jlk | docker runs on baremetal too | 22:00 |
jlk | and it's not really elastic | 22:01 |
SpamapS | It isn't, but nodepool _is_ elastic. | 22:01 |
SpamapS | And it uses openstack assuming it is elastic. | 22:01 |
SpamapS | My point is that containers can be run _on top of an elastic thing_, and nodepool _creates elastic things_. So having the two work together seems like a huge win. | 22:02 |
jlk | sure, if you already have access to that base level elastic thing | 22:02 |
jlk | which you may not have | 22:02 |
SpamapS | Indeed | 22:03 |
SpamapS | Seems like there's 2 pieces of work to do. | 22:04 |
jlk | just 2? :D | 22:04 |
SpamapS | 1) talk to COE's | 22:04 |
SpamapS | 2) make COE's appear from pools of nodes. | 22:05 |
jlk | one other thought, a docker swarm is a set of docker hosts, in a cluster. You can say "swarm, run this service" and not care so much about _where_ it runs. This may be a better thing to think about with nodepool | 22:05 |
SpamapS | I agree with that wholeheartedly | 22:05 |
jlk | collect a series of hosts into a swarm, however you see fit. Give nodepool the swarm contact info | 22:05 |
jlk | but, networking be hard. So.. dunno | 22:06 |
SpamapS | just let nodepool do whatever it takes to manage the image<->label relationship in said swarm. | 22:06 |
jlk | yup | 22:06 |
SpamapS | I don't think you have to worry so much about the networking if your goal is to run ansible playbooks via docker_connection | 22:06 |
SpamapS | The docker connection will be able to do whatever it can to execute stuff. | 22:07 |
SpamapS | And whatever it can't is a signal -> VM | 22:07 |
jlk | yeah, if you can get back info about WHERE the container is actually provisioned | 22:07 |
jlk | or we extend docker connection to handle swarm connections if it's vastly different | 22:07 |
SpamapS | also one thing about this... we have to think a lot about how you get code into the container. | 22:08 |
jlk | well, we do that via ansible right now... | 22:08 |
SpamapS | right, so it's going to have to know how to get it into a volume or build it into a new image. | 22:09 |
jlk | not necessarily | 22:09 |
jlk | it can do the same thing it does right now, blast it in | 22:09 |
SpamapS | except a container doesn't exist if it has no process. | 22:09 |
jlk | that's where the special "run forever" process comes from | 22:10 |
jlk | that gets put into the image | 22:10 |
*** hashar has quit IRC | 22:10 | |
SpamapS | I'm pretty sure the way travis, for instance, does this is they stick your code in a volume and then just 'docker run ubuntu-trusty {{ volume setup }} python setup.py test' | 22:10 |
jlk | nodepool -> make container, run process. zuul -> connect to container to do things. nodepool -> tear down container. | 22:10 |
jlk | sure, we could explore that path, but the ansible docker connection plugin doesn't work unless the container is already running | 22:11 |
SpamapS | the run forever thing is interesting. Not sure I love it, but could simplify things a bit. | 22:11 |
jlk | it's not "containery" | 22:12 |
SpamapS | yeah it's not :-/ | 22:12 |
jlk | but it does the job | 22:12 |
jlk | (it's how bonnyCI CI has been working) | 22:12 |
SpamapS | Maybe ansible just doesn't do things inside the containers in a container scenario. | 22:12 |
jlk | feels like a pretty far departure for zuul | 22:13 |
SpamapS | Sort of. | 22:13 |
SpamapS | Like, so is being containery vs. being servery | 22:13 |
SpamapS | But I could see zuul running playbooks just to setup volumes. | 22:14 |
SpamapS | and then after run, consuming whatever was left on the volume | 22:15 |
SpamapS | It feels limiting, but that's containers... limiting but cheap. | 22:15 |
jlk | by limiting, you mean "you get a single script to run" | 22:16 |
SpamapS | Yeah | 22:18 |
jlk | and all the post-plays get weird too, for collecting logs and output and such | 22:19 |
SpamapS | not if the workspace is a volume | 22:19 |
jlk | I haven't thought about the log streamer implications | 22:19 |
SpamapS | maybe the run playbook is just 'command: docker run ......' | 22:20 |
SpamapS | this makes less sense with k8s | 22:20 |
SpamapS | I mean, to be clear.. ansible also suffers from this "how do we ... do containery things?" question | 22:21 |
SpamapS | so it's not even really zuul's problem per se | 22:21 |
* SpamapS is struggling right now to see why the github driver wants to use http when he clearly has sshkey: set | 22:23 | |
jlk | for cloning? | 22:23 |
jlk | huh yeah, it should be that via getGitUrl() | 22:25 |
jlk | time to drop a: import rpdb; rpdb.set_trace() in that function, let it run and telnet in! | 22:25 |
SpamapS | gah no I found it | 22:45 |
SpamapS | the one thing not in git is my secrets.yaml | 22:45 |
SpamapS | and I constantly screw it up :-P | 22:45 |
jlk | ah | 22:45 |
SpamapS | doesn't help that in BonnyCI key/token and app/api are transposed in a few places | 22:45 |
SpamapS | hrm | 22:54 |
SpamapS | I'm getting NODE_FAILURE now and I don't know why | 22:54 |
jlk | that I can't help you with | 23:01 |
SpamapS | http://paste.openstack.org/show/621234/ | 23:02 |
SpamapS | 2017-09-15 15:55:59,510 INFO nodepool.PoolWorker.p-main: Assigning node request <NodeRequest {'requestor': 'zuul1.cloud.phx3.gdg', 'state_time': 1505516150.5774846, 'declined_by': [], 'reuse': True, 'state': 'requested', 'nodes': [], 'stat': ZnodeStat(czxid=283432, mzxid=283432, ctime=1505516150578, mtime=1505516150578, version=0, cversion=0, aversion=0, ephemeralOwner=98650230581887005, dataLength=161, | 23:04 |
SpamapS | numChildren=0, pzxid=283432), 'id': '100-0000000119', 'node_types': ['ubuntu-xenial']}> | 23:04 |
SpamapS | nodepool says it gave the node to the zuul | 23:04 |
SpamapS | but then zuul says "nah... failed" | 23:04 |
SpamapS | so.. hrm | 23:04 |
* SpamapS laments that the bulk of zuul capable debuggers is likely hammered or in the flying tubes | 23:09 | |
SpamapS | can't see any reason this is being marked as failed :-P | 23:09 |
* dmsimard waiting flight at his gate | 23:19 | |
SpamapS | 2017-09-15 16:20:31,285 DEBUG nodepool.driver.openstack.OpenStackNodeRequestHandler: Declining node request 100-0000000120 because node type(s) [ubuntu-xenial] not available | 23:22 |
SpamapS | oh | 23:22 |
* SpamapS thinks that should actually be an INFO | 23:22 | |
SpamapS | aaaaand bubblewrap fails because it's << 0.1.7 | 23:29 |
* SpamapS just LOVES centos 7 | 23:29 | |
SpamapS | Or rather << 0.1.8 | 23:29 |
openstackgerrit | David Moreau Simard proposed openstack-infra/zuul-jobs master: DNM: Test v3 multinode things https://review.openstack.org/503806 | 23:32 |
openstackgerrit | David Moreau Simard proposed openstack-infra/zuul-jobs master: DNM: Test v3 multinode things https://review.openstack.org/503806 | 23:35 |
SpamapS | jlk: is it possible the webhook code can't/doesn't update the PR status with the url to streaming? | 23:40 |
SpamapS | hrm and I got no logs | 23:41 |
openstackgerrit | David Moreau Simard proposed openstack-infra/zuul-jobs master: DNM: Test v3 multinode things https://review.openstack.org/503806 | 23:41 |
SpamapS | probably just not configured right | 23:42 |
SpamapS | or I need to get the bonnyci base jobs | 23:42 |
openstackgerrit | David Moreau Simard proposed openstack-infra/zuul-jobs master: DNM: Test v3 multinode things https://review.openstack.org/503806 | 23:48 |
dmsimard | jeblair, mordred: why would the private_ipv4 and the public_ipv4 be the same address ? http://logs.openstack.org/06/503806/29/check/multinode-test/7e35669/zuul-info/inventory.yaml | 23:55 |
SpamapS | jeblair: that happens sometimes :-P | 23:57 |
SpamapS | with some weird clouds | 23:57 |
SpamapS | (all clouds are weird) | 23:57 |
SpamapS | err | 23:57 |
dmsimard | SpamapS: was that meant for me ? :p | 23:57 |
SpamapS | dmsimard: ^ | 23:57 |
SpamapS | dmsimard: btw, after solving my python3.5 problems, I also ran into bubblewrap needing to be 0.1.8 :-P | 23:58 |
dmsimard | Awesome | 23:58 |
dmsimard | SpamapS: shouldn't nodepool be able to tell what's the private and public IPs though ? I guess some clouds won't have a private ip if there's no floating ip involved | 23:59 |
dmsimard | Oh well boarding my flight soon, I might get wifi if I'm bored | 23:59 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!