Friday, 2018-05-04

SpamapSCool I think that one works for me.00:03
SpamapSclarkb: added myself00:04
*** zhuli_ has joined #zuul00:15
*** ssbarnea_ has quit IRC00:28
*** gouthamr has quit IRC00:30
*** dmellado has quit IRC00:31
*** gouthamr has joined #zuul00:38
*** dmellado has joined #zuul00:38
*** gouthamr has quit IRC00:58
*** dmellado has quit IRC00:59
*** elyezer has quit IRC01:12
*** spsurya has joined #zuul01:14
*** zhuli_ is now known as zhuli01:21
*** elyezer has joined #zuul01:22
*** gouthamr has joined #zuul01:23
*** dmellado has joined #zuul01:26
*** gouthamr has quit IRC01:32
*** dmellado has quit IRC01:32
*** rlandy|bbl is now known as rlandy01:45
*** gouthamr has joined #zuul01:56
*** dmellado has joined #zuul01:56
*** dmellado has quit IRC02:09
*** dmellado has joined #zuul02:13
*** gouthamr has quit IRC02:22
*** gouthamr has joined #zuul02:23
*** gouthamr has quit IRC02:27
*** gouthamr has joined #zuul02:32
*** gouthamr has quit IRC02:37
*** dmellado has quit IRC02:38
*** gouthamr has joined #zuul03:38
*** dmellado has joined #zuul03:39
*** gouthamr has quit IRC03:52
*** dmellado has quit IRC04:04
*** dmellado has joined #zuul04:39
*** gouthamr has joined #zuul04:40
*** gouthamr has quit IRC04:49
*** gouthamr has joined #zuul04:50
*** gouthamr has quit IRC04:56
*** ianw has quit IRC05:06
*** patrickeast has quit IRC05:15
*** sdake has quit IRC05:15
*** TheJulia has quit IRC05:15
*** abelur has quit IRC05:15
*** hogepodge has quit IRC05:15
*** sdake has joined #zuul05:15
*** abelur has joined #zuul05:15
*** TheJulia has joined #zuul05:16
*** hogepodge has joined #zuul05:16
*** threestrands has quit IRC05:47
*** ianw has joined #zuul05:55
*** nguyenhai has joined #zuul06:12
*** nguyenhai_ has quit IRC06:16
*** hashar has joined #zuul06:28
*** dmsimard has quit IRC06:39
*** dmsimard has joined #zuul06:40
*** yolanda_ has joined #zuul06:42
*** yolanda has quit IRC06:45
*** dmellado has quit IRC07:01
*** dmellado_ has joined #zuul07:24
*** ssbarnea_ has joined #zuul07:25
*** dmellado_ has quit IRC07:34
*** jpena|off is now known as jpena07:48
*** gtema has joined #zuul07:59
*** ssbarnea_ has quit IRC07:59
*** dmellado_ has joined #zuul08:01
*** dmellado_ has quit IRC08:11
*** jimi|ansible has quit IRC08:36
*** dmellado_ has joined #zuul08:50
*** nguyenhai has quit IRC08:54
*** ssbarnea_ has joined #zuul09:18
gtemaduring zuul job execution I restarted the zuul-scheduler. This resulted in a "lost" job and a github branch/PR stuck in the pending state. Is it intended or have I configured something wrong?09:34
tobiashgtema: that's expected as zuul reports pending at start and reports success/failure at end10:16
tobiashbut you can just rerun your job and everything should be ok10:16
gtematobiash: it is more about usability. I.e. as an admin of a big installation I do not know which jobs are lost, and the user does not have a nice interface to restart a job without sending an "empty commit"10:18
gtematobiash: with a PR you can of course leave a "recheck" comment, but what about a regular branch check job?10:19
tobiashgtema: you can also add a comment trigger so the user can retrigger the job via a comment10:19
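A minimal sketch of such a comment trigger for a GitHub-driven check pipeline (the pipeline name, the "github" connection name, and the recheck regex are assumptions for illustration):

    # Sketch: a check pipeline that reports status on PRs and can be
    # retriggered by commenting "recheck" on the pull request.
    - pipeline:
        name: check
        manager: independent
        trigger:
          github:
            - event: pull_request
              action: opened
            - event: pull_request
              action: changed
            - event: pull_request
              action: comment
              comment: (?i)^\s*recheck\s*$
        start:
          github:
            status: pending
        success:
          github:
            status: success
        failure:
          github:
            status: failure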
tobiashhowever branch triggered pipelines are a problem in that regard10:19
gtematobiash: yes, exactly10:19
tobiashgtema: what's on the long term roadmap is that zuul will store its state in zookeeper in order to allow running multiple schedulers at the same time10:21
gtematobiash: it is nice in travis and co. that you can restart a job from the UI, but in zuul it is not possible. And even as an admin it is not that user-friendly10:21
gtematobiash: nice to hear10:21
gtematobiash: I basically faced a similar problem when a job fails due to config problems - no easy way to restart10:22
tristanCfwiw, "restart job from UI" is something mhu proposed a spec for: https://review.openstack.org/#/c/56232110:39
gtematristanC: thanks - fits10:39
tristanCgtema: also there is a zuul-changes.py script you can use to dump the current queue before restarting the scheduler10:42
gtematristanC: ok, this will however mean I need to rethink deployment. With pip (and ansible-role-zuul) the tools are not available on the server, which they actually should be, as even encrypt is not available this way10:45
*** spsurya has quit IRC10:56
*** jpena is now known as jpena|lunch12:03
*** jimi|ansible has joined #zuul12:14
*** ssbarnea_ has quit IRC12:16
*** electrofelix has quit IRC12:26
*** ssbarnea_ has joined #zuul12:33
*** rlandy has joined #zuul12:34
*** elyezer has quit IRC12:40
*** dkranz has joined #zuul12:43
*** ssbarnea_ has quit IRC12:49
*** elyezer has joined #zuul12:52
*** jpena|lunch is now known as jpena12:54
*** ssbarnea_ has joined #zuul12:57
*** pwhalen has quit IRC13:06
*** pwhalen has joined #zuul13:10
*** pcaruana has joined #zuul13:17
*** gouthamr has joined #zuul13:30
pabelangertobiash: clarkb: mordred: looks like there is another breakage with os-client-config: http://logs.openstack.org/58/566158/1/check/nodepool-functional-py35/45d52ed/controller/logs/screen-nodepool-launcher.txt.gz#_May_04_05_56_27_37844613:49
pabelangercloud_config.get_cache_expiration()13:49
pabelangerah, was reported in openstack-infra: https://bugs.launchpad.net/os-client-config/+bug/176881313:49
openstackLaunchpad bug 1768813 in os-client-config "config.get_cache_expiration_time() function missing in 1.31.0 release" [Undecided,New]13:49
pabelangerShrews: thanks!13:49
openstackgerritPaul Belanger proposed openstack-infra/nodepool master: WIP: Add fedora mirror to nodepool dsvm jobs  https://review.openstack.org/56610213:58
openstackgerritPaul Belanger proposed openstack-infra/nodepool master: Add fedora mirror to nodepool dsvm jobs  https://review.openstack.org/56610213:59
openstackgerritMerged openstack-infra/nodepool master: Fix test patching of clouds.yaml file locations  https://review.openstack.org/56613814:07
*** Guest46098 has quit IRC14:13
*** Guest46098 has joined #zuul14:14
openstackgerritPaul Belanger proposed openstack-infra/nodepool master: Remove debian-jessie from nodepool dsvm testing  https://review.openstack.org/56632714:26
openstackgerritPaul Belanger proposed openstack-infra/nodepool master: Add fedora mirror to nodepool dsvm jobs  https://review.openstack.org/56610214:28
openstackgerritPaul Belanger proposed openstack-infra/nodepool master: Add fedora-28 to nodepool dsvm  https://review.openstack.org/55921114:28
*** ssbarnea_ has quit IRC14:35
*** trishnag has quit IRC14:43
*** trishnag has joined #zuul14:47
openstackgerritPaul Belanger proposed openstack-infra/zuul master: Add tox-py36 jobs  https://review.openstack.org/56588114:48
pabelangerclarkb: okay, ^ should fix up bindep for bubblewrap now14:49
*** ssbarnea_ has joined #zuul14:51
*** dmellado_ is now known as dmellado14:57
*** pcaruana has quit IRC15:13
*** acozine has joined #zuul15:33
*** hashar is now known as hasharAway15:36
corvusfdegir: when you have a sec, can i ask you more about your static node situation?15:38
fdegircorvus: yes - I'm at the airport and will try to respond15:39
fdegircorvus: was typing an answer to your mail but got interrupted15:40
fdegircorvus: if you want, i can complete my response and send the mail which we can continue chatting here if you want15:40
corvusfdegir: that sounds great, and no rush :)15:41
corvusdon't miss your flight :)15:41
fdegircorvus: sent the mail15:55
openstackgerritPaul Belanger proposed openstack-infra/nodepool master: Add fedora-28 to nodepool dsvm  https://review.openstack.org/55921115:57
corvusfdegir: what's a POD?16:01
fdegircorvus: pool of devices16:02
fdegircorvus: in OPNFV, we have an official spec for the PODs we use16:03
fdegircorvus: 1 + 5 baremetal nodes16:03
clarkbpabelanger: +2 thanks. I like that simplification16:03
fdegircorvus: the 1 is the jumphost which serves as the entry point to a certain POD16:03
fdegircorvus: where we drive the deployment towards the remaining 5 nodes16:03
fdegircorvus: the 3 of 5 nodes act as controllers16:04
fdegircorvus: and the last 2 are compute nodes16:04
corvusfdegir: are those 1+5 always used together?16:04
fdegircorvus: yes16:04
fdegircorvus: I mean the jumphost gets connected to jenkins as a slave16:04
fdegircorvus: and the remaining 5 have nothing to do with jenkins16:04
corvusfdegir: so, using nodepool, you might only register the jumphost16:04
fdegircorvus: yes16:04
fdegircorvus: when you initiate the deployment, the installers pxe boot the remaining 5 nodes and provision them with whatever OS they support16:05
fdegirwhich could be ubuntu, centos, or opensuse16:05
fdegircorvus: after the 5 nodes are provisioned, installation procedure starts with installing packages (if needed), doing network configuration, etc. etc.16:05
fdegircorvus: after that, OpenStack gets deployed on the nodes and the services are installed on them depending on what role they have16:06
fdegircorvus: the important thing for nodepool is the jumphost - that's the OS I was talking about in my response16:06
fdegircorvus: for example, we have tripleo which only supports Centos16:06
fdegircorvus: we have juju, it works with ubuntu16:07
fdegircorvus: openstack ansible works with all 316:07
corvusfdegir: so you might register a "centos" jumphost, and an "ubuntu" jumphost.  a tripleo job would request a centos jumphost, and for ansible, you'd have two jobs: centos job and an ubuntu job, to make sure you tested both?16:08
fdegircorvus: yes16:08
fdegircorvus: for the distro part16:09
fdegircorvus: and then we have sriov for tripleo and joid for example16:09
*** EmilienM is now known as EvilienM16:09
fdegircorvus: in this case, tripleo needs a node with centos + sriov and joid needs ubuntu + sriov16:10
fdegirjoid=juju16:10
corvusfdegir: you said sriov on both of those nodes, is that a mistake?16:11
fdegircorvus: no, you could have 2 PODs supporting sriov16:11
fdegircorvus: but their jumphosts may have a different OS16:11
corvusfdegir: will you have any pods without sriov?16:11
fdegircorvus: we have that too16:11
fdegircorvus: i didn't even mention other networking configuration or traffic generators etc. etc.16:12
fdegirI mean I don't expect all the different use cases to be covered16:12
corvusfdegir: so you will have some jobs that will request "ubuntu+sriov", and other jobs which will request "ubuntu" and they don't care whether sriov is there or not?16:12
fdegircorvus: that is true16:12
fdegirbut16:12
fdegircorvus: we really don't want to waste resources with special hw/capabilities for basic deployments16:13
fdegirbecause they are very few in number16:13
openstackgerritPaul Belanger proposed openstack-infra/nodepool master: Fix typo with _logConsole function  https://review.openstack.org/56635616:13
fdegirso this is the first part - the deployment16:14
fdegirthe other thing i mentioned above - traffic generators as well16:14
fdegirthe testing we do also contribute to number of combinations we might end up16:14
pabelangerclarkb: Shrews: tristanC: ^fixes a typo with recent refactor of nodepool-launcher16:15
corvusfdegir: are the traffic generators part of the pod as well, or can they be a separate resource?16:15
fdegirlike we shouldn't really run the test job that tests specific hw on a deployment that doesn't use any of the specific stuff16:15
fdegircorvus: they are not visible from jenkins/nodepool16:15
fdegircorvus: we just know they are attached to pod network16:15
fdegircorvus: which might be included in pod descriptor file16:16
fdegiri don't remember this part exactly16:16
corvusfdegir: what's an example of a set of labels that you might want to give a single node registered in nodepool?16:16
fdegirit could be like ubuntu + sriov, centos + ovsdpdk16:16
openstackgerritPaul Belanger proposed openstack-infra/nodepool master: Remove debian-jessie from nodepool dsvm testing  https://review.openstack.org/56632716:17
openstackgerritPaul Belanger proposed openstack-infra/nodepool master: Add fedora mirror to nodepool dsvm jobs  https://review.openstack.org/56610216:17
openstackgerritPaul Belanger proposed openstack-infra/nodepool master: Add fedora-28 to nodepool dsvm  https://review.openstack.org/55921116:17
fdegirwhen it comes to testing, i can think of something like centos + sriov + trafficgen16:17
corvusfdegir: well, the same node wouldn't be both ubuntu and centos?16:17
fdegircorvus: not at the moment16:17
fdegircorvus: the jumphosts are static - we don't (re)provision them16:17
fdegirthis is in our roadmap - make everything dynamic so we remove the OS dimension and provision jumphosts on demand16:18
fdegiror16:18
fdegirisolate installers from jumphosts by moving into VMs or containers16:18
corvusfdegir: i think i understand a lot about what you're saying, but i still don't understand enough to construct a set of labels for a single node.  like, i know that you are saying you'll have a job that requires centos+sriov+trafficgen, so therefore, you probably want a node with a label "centos_sriov_trafficgen".  but what other labels would it be useful for the same node to have?16:19
fdegirbut since all the installers we have come from upstream openstack, that's not an easy thing to achieve16:19
fdegirthose 3 are separate labels - not combined16:20
fdegiranother thing we do is, not all the pods are ci pods16:20
corvusfdegir: zuul only knows how to request a single label for a given node.16:20
fdegirmeaning that some member companies donate pods but they are donated for development work16:21
fdegirwhich we can't run ci on, and we need to ensure jobs don't end up on them16:21
fdegirso these don't get ci-pod labels16:21
fdegircorvus: ok16:21
fdegircorvus: but then what was the nodepool multi-label feature about?16:21
corvusfdegir: so when writing a job in zuul, if you need centos+sriov+trafficgen, you'd have to have a label which means "this node has centos and sriov and trafficgen" which is why i speculated about the combined label centos_sriov_trafficgen16:21
fdegircorvus: ok - i thought having nodepool multi-label means zuul can request nodes using this feature as well16:22
corvusfdegir: it's about having a single node in nodepool with multiple labels, indicating it can satisfy different kinds of requests16:22
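A rough sketch of that idea with the static driver: one jumphost registered under a combined label and a broader one, and a job that asks for it by the combined label (all names and labels here are invented; check the nodepool static driver docs for the exact schema):

    # nodepool.yaml (sketch): the same jumphost carries two labels, so it
    # can satisfy different kinds of requests.
    labels:
      - name: centos-sriov-trafficgen
      - name: centos-sriov
    providers:
      - name: static-pods
        driver: static
        pools:
          - name: main
            nodes:
              - name: pod1-jumphost.example.com
                username: zuul
                labels:
                  - centos-sriov-trafficgen
                  - centos-sriov

    # zuul job (sketch): requests the jumphost by the combined label.
    - job:
        name: tripleo-deploy-sriov
        nodeset:
          nodes:
            - name: jumphost
              label: centos-sriov-trafficgen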
fdegircorvus: ok - now i think i realize the mistake I made16:23
corvusfdegir: unfortunately, part of the problem is that this feature snuck in without very much thought and doesn't really fit the architecture of the system.  that's part of why we're trying to remove it and/or redesign it to fit16:23
fdegircorvus: since we don't have dynamic pod assignment at the moment, we haven't really had the need to assign nodes using multiple labels16:23
fdegircorvus: i was talking about a future need but thinking a bit more about jenkins, i believe it doesn't support it either16:24
corvusfdegir: jenkins does let you give nodes multiple labels, then lets you specify things like "foo && bar", so you'd get a slave with both the foo and bar labels16:25
fdegircorvus: ok - i didn't know that because we haven't tried it yet16:25
fdegircorvus: so that's what we will need :)16:25
corvusfdegir: i think you would need that if you need to specify centos+sriov+trafficgen as well as centos+sriov16:25
corvusfdegir: but if you don't have any jobs which want centos+sriov and don't care about trafficgen, then it's not necessary16:26
fdegircorvus: we have those16:26
fdegircorvus: we have different test levels - functional and benchmarking for example16:26
fdegircorvus: these have test cases for sriov16:26
fdegircorvus: on top of that, we have long duration testing which makes use of traffic gen16:27
fdegircorvus: so the same pod can serve both functional+benchmarking and long duration testing at different times16:27
corvusfdegir: and you also have centos+sriov pods without trafficgen?16:28
corvusfdegir: (i ask because if all the centos+sriov pods do have trafficgen, then they're basically the same and trafficgen doesn't need to be a label)16:30
fdegircorvus: i think we do16:31
fdegirthe hw we have is not homogeneous16:32
corvusfdegir: okay.  basically, in nodepool, a label uniquely identifies a resource; so if a resource only needs to be addressed by one set of criteria (A+B+C) then it's okay for it to only have one label (A+B+C).  it really only needs more than one label if you need to address it by multiple criteria (A+B+C in some cases, just A in others, A+B in others, etc)16:33
clarkbjumping in with an idea maybe nodepool continues to uniquely label resources but jobs can specify either or16:34
clarkbso resource A has label X+Y+Z and resource B has X+Y label. Then in the job you say labels: - X+Y+Z - X+Y ?16:35
corvusfdegir: it sounds like most of the time, specifying a node by A+B+C is sufficient, but maybe sometimes you'll want to use A+B or just A;  in those cases, you could still request A+B+C, you would just be overspecifying.16:35
clarkbI don't know if that is easier to accommodate16:35
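Purely hypothetical syntax for the OR'd-label idea clarkb describes; nothing like this exists in zuul at this point, it is only meant to illustrate the proposal:

    # Hypothetical job syntax: either label would satisfy the request,
    # so any pool offering one of them could supply the node.
    - job:
        name: deploy-basic
        nodeset:
          nodes:
            - name: jumphost
              labels:                    # hypothetical OR-list
                - centos-sriov-trafficgen
                - centos-sriov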
fdegircorvus: right16:35
corvusclarkb: that may work; we should think about it.16:36
fdegirclarkb: but if you have a label to uniquely identify a resource16:36
fdegirclarkb: we already have that with node names that are unique16:37
mordredcorvus: a similar usecase - based on what clarkb said - are the openstack tox jobs. they're currently designed to use ubuntu-xenial - but there is nothing about them that would not work on fedora-28 - if nodepool can provide both ubuntu-xenial and fedora-28, we COULD (not saying we should) say that tox jobs can run on ubuntu-xenial or fedora-28 and who cares16:37
fdegirthat’s a case we have too16:38
fdegirgeneral pool of PODs16:38
mordredbut I don't know if that would make the system better or worse to use overall16:38
corvusfdegir: well, you could still have multiple A+B+C pods, so even though they have unique hostnames, specifying them by label lets you use all of them.16:38
fdegirand in the openstack ansible case, it doesn’t care about the OS of the jumphost16:38
fdegirjust to reiterate again - i just wanted to highlight something we will need but it doesn’t mean it has to be supported16:39
fdegirbut we might bot be the only ones with similar needs16:39
fdegirnot*16:39
corvusfdegir: this is good, i like working with actual use cases :)16:39
corvusfdegir: i want to explore a different line of thought for a moment though16:40
mordredyah. actual use cases are better than theoretical ones16:40
corvuswe've been thinking about this in jenkins terms16:40
fdegirif you want more, i can bring more16:40
fdegiryou know, telco...16:40
corvusnodepool actually behaves a little differently, and it's interesting that you use the term "pool"16:40
corvusbecause nodepool does have an idea of a pool, and it's not that different from your POD16:40
corvusinstead of only registering the jumphost in nodepool, you could consider registering all of the hosts/resources16:41
fdegiryes - when i say pool, it is a group of nodes that are pooled together with the use of labels on jenkins16:41
corvusthen instead of requesting a "centos+sriov+traficgen" node, which happens to come with 5 other nodes, you can request: 1 centos jumphost, 5 nodes, 1 traffic generator16:42
fdegirthat wouldn’t work...16:42
corvusnodepool will fulfill that request only if there is a pool that has at least 1 centos jumphost, 5 nodes, and 1 traffic generator16:43
corvuswhich means that if you have another pool with a centos jumphost and no traffic generator, it won't handle the request.16:43
fdegirthat might work - sorry for jumping in16:43
* Shrews catches up on scrollback from lunch16:43
fdegiri think this might work16:43
mordredcorvus: ooh. I like where you're going with that16:43
fdegircorvus: would it be possible to ensure zuul runs things on jumphost still?16:44
fdegirnot on any others randomly16:45
corvusfdegir: yes, all the nodes will be added to the ansible inventory, but you write playbooks which specify which nodes things run on, so you can just say "host: jumphost" in a playbook and it will only run there16:45
fdegirright16:46
corvusfdegir: (this is actually how some of our new v3 multinode jobs work in openstack)16:46
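A sketch of what registering the whole POD could look like: a multi-node nodeset plus a playbook that only runs on the jumphost (the nodeset name, labels, and the deploy command are invented for illustration):

    # Nodeset sketch: the jumphost plus the five POD members.
    - nodeset:
        name: opnfv-pod
        nodes:
          - name: jumphost
            label: centos-jumphost
          - name: controller-1
            label: baremetal-node
          - name: controller-2
            label: baremetal-node
          - name: controller-3
            label: baremetal-node
          - name: compute-1
            label: baremetal-node
          - name: compute-2
            label: baremetal-node

    # Playbook sketch: only the jumphost is touched by ansible; the
    # installer reaches the other nodes itself over the POD network.
    - hosts: jumphost
      tasks:
        - name: kick off the deployment from the jumphost
          command: ./deploy.sh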
clarkband if you want to be extra careful you can have an early task that breaks ssh from executor to those nodes and only allow it from the jumphost16:46
fdegircorvus: so the example you’ve given16:46
clarkb(not necessary though)16:46
fdegircorvus: does the nodepool need to connect to all of those?16:46
clarkbfdegir: as currently implemented it needs to be able to do an ssh keyscan but I think pabelanger has added in an option to not do that?16:47
fdegircorvus: or just some kind of pool template and only try to access the one we specify as host: jumphost16:47
corvusfdegir: hrm, currently i think it does; and also the zuul executor.16:47
fdegirclarkb: corvus: the thing is, we don’t really deal with or guarantee that the target nodes will have an os on them16:48
fdegironly the jumphost is guaranteed to have an os and be configured16:48
mordredif we set the node connection type to something other than ssh16:48
fdegirthe rest of them are simply handled by installers16:48
mordredthen neither the executor nor nodepool will try to ssh to them16:48
fdegirthat works16:49
corvusmordred: i think there's a whitelist, so we'd have to add "null" or something16:49
pabelangerhost-key-checking is the setting in nodepool16:49
mordredyah16:49
pabelangerhttps://zuul-ci.org/docs/nodepool/configuration.html#pools16:49
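For reference, that option lives at the pool level; a sketch with the openstack driver is below (the provider and label details are made up, and whether other drivers honour the same option is worth checking in the linked docs):

    # Sketch: skip the ssh keyscan for nodes launched from this pool.
    providers:
      - name: example-cloud
        driver: openstack
        cloud: example
        pools:
          - name: main
            host-key-checking: false
            labels:
              - name: centos-jumphost
                diskimage: centos-7
                min-ram: 8192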
corvuswe could do that, or, it's possible the "only register the jumphost" is the better model for some circumstances16:49
mordredcorvus: which would be pretty easy to do ... and incidentally would prevent accidental use of the nodes from the executor16:49
mordredcorvus: because it would put ansible_connection_plugin: null in the inventory - and if you tried to use that, given that such a plugin doesn't exist, you'd get boom16:50
corvusmordred: well, we need to not break the setup task16:50
mordredyah. that's where we'd need the whitelist I think - because we already prevent it from running setup if the connection type is not supported16:51
corvusmordred: heh, the setup task has its own inventory16:51
mordredthat too16:51
corvusmordred: so we're good, actually.  we just add 'null' to the blacklist16:51
mordredbecause ansible-network needs us to not run setup16:51
mordredcorvus: ++16:51
corvusBLACKLISTED_ANSIBLE_CONNECTION_TYPES = ['network_cli']16:51
corvusthat's the current value16:51
mordredyup16:51
clarkbblacklist or whitelist?16:51
mordredI like 'null'16:52
mordredclarkb: it's a blacklist - because ansible -msetup works for most things ... but not for network_cli16:52
clarkbah16:52
clarkbwe also have to separately whitelist it as an allowed connection type right?16:52
mordredso adding null to that would allow registering nodes in nodepool that you don't want ansible to try to talk to16:52
mordredclarkb: we prevent people from setting connection type16:52
mordredin playbooks16:53
clarkbI think we whitelist them as part of tobiash security updates16:53
corvusright, but in nodepool, we allow the admin to specify the connection type, and i think it is a whitelist16:53
mordredah16:53
fdegirsorry, boarding16:54
fdegirbefore i lose the connection16:54
corvusfdegir: thanks!16:54
fdegirwe have something else which i don’t remember if i talked about in dublin during the ptg16:55
fdegiri think i mentioned this to rcarrillocruz16:55
gtemabtw, about nodepool + shade. What is the expectation from interface_ip in a public cloud without attached floating_ip? In my case it is empty and nodepool-launcher fails16:55
*** jpena is now known as jpena|off16:55
fdegirso, not all the nodes are accessible via ssh16:55
fdegiron jenkins, we use jnlp so the nodes call jenkins16:56
fdegiri don’t know if you heard about this16:56
corvusin summary: i think there are some interesting new features in nodepool that may help with fdegir's case, but also, we should look seriously at supporting either multiple labels on the nodepool side, or multiple label requests on the zuul side.  or both.16:56
fdegirthat was all from me for the timebeing so thanks for listening16:57
fdegiri’ll be back and look at the backlog later but just to mention jnlp-like thingy is something i want to have a discussion about later16:58
corvusfdegir: the zuul executor needs to be able to contact the nodes (or at least one node in the case of a jumphost).  however, if the nodes aren't reachable, you may be able to put the executor near the nodes, and connect it to zuul via a vpn or similar.16:58
fdegircorvus: yes, i remember this conversation from ptg16:58
fdegirnow they ask me to turn off my phone so talk to you later16:59
corvusfdegir: bon voyage!16:59
fdegirthx16:59
corvusShrews, clarkb, mordred: what i'm getting is that because of the way nodepool works, we have some flexibility and some things that otherwise would have needed multiple label support may not because of the ability to request individual resources.  however, it does seem like there are some likely use cases for multiple label support.  fdegir's is one example, mordred's is another.17:01
corvusShrews, clarkb, mordred: so i'm inclined to think we should either handle multiple labels on nodes, or multiple label requests.17:02
corvuseither way -- the hard stuff is in nodepool.  if we do multiple label requests, the nodepool pool manager has to exhaustively search the combinations to figure out if it can satisfy a request.17:03
mordredcorvus: I agree with all of those words17:04
corvusif we handle multiple labels on nodes, we probably have to change how we report things17:04
corvuswe could also just bite the bullet and do both17:04
mordredgtema: when you say "without attached floating_ip" - does the node have a fixed ip that's 'public' or does it just have a private fixed ip and your nodepool can talk on that private subnet?17:05
Shrewscorvus: i'm not quite clear how a "multiple label request" would function17:05
mordredcorvus: I can see value in both things - the hard logic in nodepool of matching multiple requests could likely be used to satisfy both UI mechanisms17:06
corvusShrews: clarkb suggested a list of labels which should be considered as OR'd together.17:06
corvusShrews: so a zuul job says "ubuntu OR fedora" and each nodepool manager says "do i have an ubuntu node? yes!  i can handle this"17:06
corvusShrews: or another pool manager says "do i have an ubuntu node?  no.  do i have a fedora node?  yes!"17:07
clarkband if it doesn't have either of them but has capacity could boot a random one of the two17:07
Shrewscorvus: ah, gotcha17:07
gtemamordred: private fixed ip in the same subnet17:07
Shrewseither way, requires a nodepool change then17:07
corvusi'm being loose with terminology, i should really say "a pool manager says do i have an ubuntu label configured"17:07
corvusShrews: yep.  zuul gets off easy :|17:08
gtemamordred: the issue is that nodepool fails (raises an exception) if interface_ip is empty17:08
ShrewsAll I wanted to do was register static nodes  :(   *sad trombone*17:08
corvusShrews: it just has to worry about how the yaml is formatted.  maybe that's not so easy after all. :)17:08
corvusShrews, clarkb, mordred: how about we let our subconscious minds mull this over the weekend and regroup for a decision next week?17:09
Shrews++17:09
mordredgtema: cool. so - there is an option you can put into your clouds.yaml for that cloud in question17:09
mordredgtema: "private: true"17:09
mordredgtema: that will tell shade that it should use the private address for interface_ip17:09
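That is, something along these lines in clouds.yaml (the cloud name, region, and auth details are placeholders):

    # clouds.yaml sketch: "private: true" tells shade/os-client-config to
    # use the fixed (private) address as interface_ip for this cloud.
    clouds:
      otc:
        private: true
        region_name: eu-de
        auth:
          auth_url: https://iam.example.com:443/v3
          username: nodepool
          password: secret
          project_name: nodepool-project
          user_domain_name: Default
          project_domain_name: Default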
clarkbShrews: corvus wfm17:10
gtemamordred: yes, I have found this part in the sdk, but it is somehow weird. The cloud is public in reality. It is your beloved OTC17:11
mordred\o/17:11
mordredgtema: yah - that's ... maybe not the *best* named config option - but what it's saying is not that the cloud is private or public, but that shade should consider the private interface to be how you want to connect to the nodes from the location you're using shade17:12
mordred"connect_using_private_interface" would probably be a better option name17:12
gtemamordred: :-/ ok17:12
gtemamordred: should I make a patch for renaming?17:13
*** myoung|ruck is now known as myoung|ruck|biab17:13
mordredno - not just yet - I'm not crazy about that long name either - let's see if we can come up with a better name - or possibly what we might want to do is make this a flag in the networks: config17:14
mordredgtema: but this is a thing that comes up from time to time - so private: true is clearly not working for people17:14
gtemamordred: ok. got it.17:15
gtemamordred: story on storyboard for discussion?17:15
mordredgtema: that's a great idea17:17
gtemamordred: pl17:17
gtemaok17:17
*** acozine has quit IRC17:19
gtemamordred: done at https://storyboard.openstack.org/#!/story/200196617:22
openstackgerritMerged openstack-infra/nodepool master: Fix typo with _logConsole function  https://review.openstack.org/56635617:31
mordredgtema: thanks!17:45
gtemawelcome17:45
*** pcaruana has joined #zuul17:54
*** acozine has joined #zuul17:57
*** dtruong2 has joined #zuul18:22
*** dtruong2 has quit IRC18:22
*** dtruong2 has joined #zuul18:22
openstackgerritMonty Taylor proposed openstack-infra/nodepool master: Add openstacksdk to nodepool-functional-py35-src  https://review.openstack.org/56638718:50
*** dtruong2 has quit IRC18:59
*** gtema has quit IRC19:06
*** pcaruana has quit IRC19:10
*** hasharAway is now known as hashar20:36
openstackgerritMerged openstack-infra/zuul master: Add zuul systemd drop-in files for CentOS 7  https://review.openstack.org/56507421:13
openstackgerritMerged openstack-infra/zuul master: Skip attempting python3-devel installation on CentOS 7  https://review.openstack.org/56508021:13
openstackgerritMerged openstack-infra/zuul master: Add start and end timestamp to task and play result in zuul_json callback  https://review.openstack.org/56388821:17
*** myoung|ruck|biab is now known as myoung|ruck21:18
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Add logo to docs  https://review.openstack.org/56640621:22
pabelangerif anybody is up for some nodepool-dsvm testing reviews, removes debian-jessie and adds fedora with mirrors: https://review.openstack.org/#/q/topic:nodepool-dsvm+status:open21:26
corvuspabelanger, clarkb: i think it's time to look at moving that stuff out of nodepool.  when i went to make the release, the nodepool changes were basically just dib testing changes.21:39
corvusit was hard to find out if anything in nodepool itself had actually changed21:40
corvusi think inside of nodepool itself, we should have enough testing to build a single OS image with dib and an ssh key21:40
corvusthe rest we should either move to dib, or to project-config21:41
*** hashar has quit IRC21:42
clarkbwfm, dib is probably preferable so that changes are self testing21:45
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Add logo to docs  https://review.openstack.org/56640621:47
openstackgerritJames E. Blair proposed openstack-infra/nodepool master: Remove initial stub release note  https://review.openstack.org/56640921:48
openstackgerritJames E. Blair proposed openstack-infra/nodepool master: Add logo to docs  https://review.openstack.org/56641021:48
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Add logo to docs  https://review.openstack.org/56641121:50
openstackgerritJames E. Blair proposed openstack-infra/zuul-base-jobs master: Add logo to docs  https://review.openstack.org/56641221:51
openstackgerritJames E. Blair proposed openstack-infra/zuul-sphinx master: Add logo to docs  https://review.openstack.org/56641321:52
corvusfungi: where did you find that mailman was trying to run list_lists?21:55
corvusfungi: er, that the debian scripts were trying to run that, rather21:55
corvusfungi: ah i think i found it21:58
corvusalso, sorry, wrong channel :)22:01
*** acozine has quit IRC22:22
*** rlandy has quit IRC22:30
*** pwhalen has quit IRC22:34
*** pwhalen has joined #zuul22:36
fungino problem, just found my way back from the beach bar22:43
*** ssbarnea_ has quit IRC22:43
*** trishnag has quit IRC23:24

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!