Monday, 2020-06-08

*** swest has quit IRC01:19
*** swest has joined #zuul01:34
*** bhavikdbavishi has joined #zuul03:13
*** bhavikdbavishi1 has joined #zuul03:16
*** bhavikdbavishi has quit IRC03:17
*** bhavikdbavishi1 is now known as bhavikdbavishi03:17
*** saneax has joined #zuul05:26
*** hashar has joined #zuul06:22
*** yolanda has joined #zuul06:43
*** bhavikdbavishi has quit IRC06:49
*** ysandeep is now known as ysandeep|afk06:57
*** iurygregory has joined #zuul07:10
*** jcapitao has joined #zuul07:13
*** rpittau|afk is now known as rpittau07:14
*** bhavikdbavishi has joined #zuul07:27
*** tosky has joined #zuul07:39
*** dennis_effa has joined #zuul07:44
*** hashar has quit IRC07:47
*** jpena|off is now known as jpena07:56
*** nils has joined #zuul08:07
*** wuchunyang has joined #zuul08:08
*** ysandeep|afk is now known as ysandeep08:08
*** ttx has quit IRC08:19
*** ttx has joined #zuul08:19
openstackgerritTobias Henkel proposed zuul/zuul master: Fix test race in test_client_dequeue_ref  https://review.opendev.org/73403208:27
tobiashzuul-maint: I just got hit by this test race in a local run ^08:30
*** sshnaidm|afk is now known as sshnaidm08:44
openstackgerritSorin Sbarnea (zbr) proposed zuul/zuul master: Make task errors expandable  https://review.opendev.org/72353408:47
*** wuchunyang has quit IRC09:08
*** sgw has quit IRC09:54
*** dennis_effa has quit IRC10:04
*** bhavikdbavishi has quit IRC10:12
*** rpittau is now known as rpittau|bbl10:15
*** avass has quit IRC10:18
*** bhavikdbavishi has joined #zuul10:23
zbrcorvus: mordred tristanC: https://review.opendev.org/#/c/723534/ about more/less on errors seems ready.10:24
*** masterpe has joined #zuul10:42
masterpeI want to set zuul_log_cloud_config but where do I define the secret?10:42
*** wuchunyang has joined #zuul11:05
*** jcapitao is now known as jcapitao_lunch11:08
*** fbo|off is now known as fbo11:18
*** wuchunyang has quit IRC11:26
*** jpena is now known as jpena|lunch11:32
*** rfolco has joined #zuul11:37
*** bhavikdbavishi has quit IRC11:45
*** dennis_effa has joined #zuul11:53
*** rfolco is now known as rfolco|rover11:54
*** jcapitao_lunch is now known as jcapitao12:01
*** rlandy has joined #zuul12:06
*** rpittau|bbl is now known as rpittau12:13
*** avass has joined #zuul12:43
*** saneax is now known as saneax_AFK12:54
openstackgerritFabien Boucher proposed zuul/zuul master: gitlab - add driver documentation  https://review.opendev.org/73388012:57
*** jpena|lunch is now known as jpena13:03
*** bhavikdbavishi has joined #zuul13:07
*** rlandy is now known as rlandy|mtg13:20
*** iurygregory is now known as iurygregory|afk13:20
openstackgerritSagi Shnaidman proposed zuul/zuul-jobs master: Add ansible collection roles  https://review.opendev.org/73036013:31
openstackgerritMatthieu Huin proposed zuul/zuul master: [WIP] web UI: user login with OpenID Connect  https://review.opendev.org/73408213:32
*** sgw has joined #zuul13:45
tobiashmasterpe: typically next to the job using it13:58
masterpeI figured out how the secrets work. When I try to upload the logs to swift I get a failure on: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed14:00
masterpeThe swift server has a private CA14:00
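A hedged sketch of what tobiash describes: the secret defined in the same repo as the job that consumes it, attached to the base job whose post playbook does the upload. All names here (swift_log_cloud, site_logs, the playbook path) are illustrative, not from this discussion.

```yaml
# Illustrative only: a secret defined next to the job that uses it.
- secret:
    name: swift_log_cloud
    data:
      cloud:
        auth:
          auth_url: https://keystone.example.com/v3
          username: zuul-logs
          # in a real config this would be an !encrypted/pkcs1-oaep blob
          password: example
          project_name: zuul-logs
          user_domain_name: Default
          project_domain_name: Default
        region_name: RegionOne

- job:
    name: base
    parent: null
    post-run: playbooks/base/post-logs.yaml
    secrets:
      - name: site_logs
        secret: swift_log_cloud
```

The post playbook would then hand the secret to the swift upload role, e.g. by setting zuul_log_cloud_config to "{{ site_logs.cloud }}".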
tobiashthen you need to add this to the ca certs on the executor14:01
masterpethe ca is in /usr/local/share/ca-certificates/14:01
masterpeon the executor14:01
tobiash(or disable ca cert verification in clouds.yaml)14:01
tobiashmordred: openstacksdk uses requests for accessing swift right?14:03
tobiashmasterpe: maybe this helps: https://stackoverflow.com/questions/42982143/python-requests-how-to-use-system-ca-certificates-debian-ubuntu14:03
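For reference, the Debian/Ubuntu recipe from that link boils down to something like this (paths are the stock Debian ones; adjust for the executor image):

```sh
# Add the private CA to the system bundle and point requests at it;
# requests does not read the system trust store by default.
sudo cp my-private-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
```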
openstackgerritSagi Shnaidman proposed zuul/zuul-jobs master: Add ansible collection roles  https://review.opendev.org/73036014:05
mordredtobiash: yes it does. looking14:07
mordredmasterpe: yes - what tobiash said about telling requests to use system bundle. you can also put the path to the ca into clouds.yaml for that cloud. one sec, getting a link with an example14:09
*** bhavikdbavishi has quit IRC14:09
mordredmasterpe: https://opendev.org/opendev/system-config/src/branch/master/playbooks/templates/clouds/bridge_all_clouds.yaml.j2#L154-L17414:10
mordredwe use the cacert option there for the limestone cloud14:11
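A minimal clouds.yaml sketch of that cacert option (cloud name, credentials and path are made up for illustration):

```yaml
clouds:
  mylogcloud:
    region_name: RegionOne
    auth:
      auth_url: https://keystone.example.com/v3
      username: zuul-logs
      password: example
      project_name: zuul-logs
      user_domain_name: Default
      project_domain_name: Default
    cacert: /etc/openstack/private-ca.crt
    # or, as tobiash mentioned, disable verification instead:
    # verify: false
```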
*** rlandy|mtg is now known as rlandy14:15
tobiashcorvus, mordred: should we provide tagged docker images? Thinking about the release plan for the scale-out scheduler, I think we should consider this because it's not really possible to stick with a release when running a dockerized zuul with the official images.14:26
fungitobiash: we do provide tagged docker images: https://hub.docker.com/r/zuul/zuul-scheduler/tags14:27
fungior do you mean specifically versioned tags?14:27
tobiashfungi: we have only the latest tag. I mean versioned tags14:28
mordredtobiash: yeah - it's a thing we've talked about before - there's an unsolved problem to solve14:30
fungii want to say we've talked through this before and the main logistical challenge to solve is mapping a git tag back to a gerrit change since our docker uploads are triggered by change-merged events... but then the version number itself isn't properly baked into the image either14:30
mordredyes - that14:30
fungiso we likely need to actually build new images from the git tags14:30
mordredit's a problem we should solve, because it's an important use case14:30
fungibut... how do we test those?14:30
mordredexactly14:30
mordredI feel like corvus had an idea recently about how we could find the right image sha to tag14:31
tobiashoh hrm14:31
fungii suppose we could do a build->test->upload cycle in a tag-triggered release pipeline14:31
avasscouldn't you do the reverse, trigger a release and tag + upload if the tests pass?14:31
fungithe risk is that if a tag fails the test portion, we have a git tag with no corresponding docker image tag14:32
mordredavass: but releases are triggered by git tags14:32
mordredyeah14:32
mordredgit tags don't have a speculative aspect currently14:32
mordredand once a tag is pushed it should be considered permanent, because tag deletes don't propagate to people's clones14:32
avassmordred: yeah currently, but they don't have to do they?14:32
fungiright, what we're really lacking is some way for zuul to test a git tag which isn't applied to the repository, and then apply that tag14:32
mordredyah14:33
tobiashmordred: openstack is doing that differently right?14:33
tobiashvia a release repo?14:33
mordredtobiash: yeah - openstack has a release repo14:33
fungiright, they propose releases into yaml files which list the desired tag name and commit14:33
*** bhavikdbavishi has joined #zuul14:33
mordredbut I think it would be a really good experience for users of the zuul-jobs image build jobs if they could handle mapping a git tag to the appropriate already-tested docker image14:34
tobiashwith that workflow we could request a release, build and push docker images, and tag at the end14:34
avassyeah we do the same with some projects14:34
fungiand then release jobs can make preliminary tags and exercise those before having automation create the final git tags14:34
fungibut it basically means a human can no longer sign and push git tags14:35
mordredthere's another thing - which is that docker tags aren't tags like git tags are - they're really more like branches14:36
*** iurygregory|afk is now known as iurygregory14:36
fungiahh, so you'd have one tag per release series and recreate that tag onto the latest point release image for that series14:37
fungiand then people would specify that tag and get the latest image for that particular release series?14:37
mordredyeah. like - frequently if you have like a 3.0.0 - you might have a docker image with :3.0.0 and :3.0 and :314:38
corvusit's an option; you don't have to do it that way14:38
*** bhavikdbavishi has quit IRC14:38
mordredand when you do 3.0.1 - :3.0 and :3 update but there might be a new tag called :3.0.114:38
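Roughly, that convention is just plain docker tag/push (image name illustrative):

```sh
# Publishing 3.0.1 also moves the "branch-like" :3.0 and :3 tags forward.
docker tag zuul/zuul-scheduler:3.0.1 zuul/zuul-scheduler:3.0
docker tag zuul/zuul-scheduler:3.0.1 zuul/zuul-scheduler:3
docker push zuul/zuul-scheduler:3.0.1
docker push zuul/zuul-scheduler:3.0
docker push zuul/zuul-scheduler:3
```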
avasshaving changes for tags in gerrit would have been nice to be honest.14:38
mordredavass: +100014:38
mordredavass: I've wanted that for so long14:39
corvusthe practical thing we can do today is build new docker images from tags just like we do release artifacts14:39
mordredyeah14:39
corvussomeone just needs to set up the jobs for that :)14:39
corvusthe harder thing is what mordred was talking about wrt retagging14:39
mordredcorvus: do we worry that image rebuilds are more dangerous than artifact rebuilds due to dependency shift?14:40
tobiashif the image build fails we could re-trigger the release pipeline (if it's not broken due to a package update at least)14:40
corvusmordred: isn't it roughly the same risk as the release artifacts?14:40
corvusthose are currently rebuilt as well.  the additional risk is only the opendev base images and underlying debian images14:41
mordredyeah - and I guess since we're pinning to specific python image versions that risk is also pretty low14:41
mordredyeah - maybe just doing rebuilds for now is fine and we can improve that later if we figure out retagging14:42
corvusi think so.  at least, i think i'm comfortable with a rebuild model until we get around to implementing retagging14:42
corvusi think for retagging, we'd have to find the change for the latest non-merge commit in the git tag, then query the api to find the gate build for it, then re-upload the image from the intermediate registry; so i guess not so much retagging as re-uploading.14:46
fungithough that image won't have the right zuul version number internally since its creation predates the git tag, right?14:47
mordredthat is correct14:47
mordredthe git sha portion of its version number will be accurate - but it will not know about the tag14:48
openstackgerritFabien Boucher proposed zuul/zuul master: gitlab - add driver documentation  https://review.opendev.org/73388014:48
fungias for generating new images at tag push, that seems like a reasonable compromise, just means (counterintuitively) that our non-release images are better tested than our release images, so we may want to explain that somewhere14:49
corvusfungi: we don't explain that for our tarballs14:51
corvusthe version number issue may be enough to say that just rebuilding is better14:52
fungigood point. as you said earlier, our tarballs have the same exact situation14:53
corvustobiash, fungi, mordred: if we don't implement rebuild tags by the time we want to cut 4.0, we could probably manually make a one-time 3.x tag in docker just to keep the release going14:55
mordredcorvus: we should maybe do that anyway - we're not planning another 3.x git tag are we?14:57
fungiwild idea... what if we had some magic whereby a job could build release artifacts from a prerelease tag but manipulated the version to bake in the final version number. we could upload those images tagged with prerelease tags and then later the release tag could trigger a job which re-tags the image with the release tag14:58
mordredfungi: ooh. I actually like that ...14:58
corvusmordred: yeah, that's probably a good idea, because re-enqueuing it may get messy?  not sure what would happen with the tarball releases in that case14:58
fungiso, e.g., 4.0.0rc1 tag tells pbr the version is actually 4.0.0 when building the image but we tag it on dockerhub as 4.0.0rc1, and then later if/when we tag the same commit as 4.0.0 we replace or add to the existing rc tag on the image with 4.0.014:59
corvusfungi, mordred: version munging would be pretty language specific right?  like, to do that, we'd need to do some kind of python setuptools thing, but that isn't going to translate to something in rust, etc?15:00
fungiyeah, this would probably be pbr-specific even15:00
fungiso not a general solution15:00
mordredI thnik it could _just_ be local git-tag based15:00
mordredbut we'd want 2 versions of the job15:00
mordredone that did not munge tags (so you could have an image with the pre-release tag as tagged)15:01
fungioh, that's a fair point, we could stick a fabricated git tag in the local copy of the git repo on the worker15:01
mordredand one that was built with a fabricated additional git tag15:01
mordredfungi: exactly15:01
fungiso you have the rc1 image and the rc1-for-release image or whatever15:01
mordredyeah15:01
corvusin order to use this, you would need to know that you are going to tag a certain commit ahead of time?15:01
mordredno - I think you do the image-rebuild-on-tag approach15:02
fungiwell, you'd need to know what you plan to tag it as, if it's viable15:02
mordredto rebuild images on tags - but on tags matching a pre-release regex15:02
fungihence tying it to rc tags15:02
mordredyeah15:02
corvusi don't understand what problem this solves15:02
mordredcorvus: it solves having the wrong version reported by zuul in a retagged workflow15:03
fungithe "existing image predates official release tag" problem15:03
mordredif you build an image from the -rc tag as-if it was retagged without the -rc - then the retag job could grab that image and it would be correct for the target final tag15:04
fungiso we can build and publish an image which internally considers itself to really be 4.0.0 but is generated from an rc tag and tagged as rc on dockerhub15:04
fungiand then later all we have to do is update the tag on dockerhub *if* we decide to retag rc1 as the actual release in git15:04
corvuswow i am so completely lost15:04
clarkbthe benefit to that is being able to separately verify the rc1 before calling it the release?15:04
corvusi think i need one of you to explain step by step how an image gets created15:05
mordredcorvus: ok. let me do that15:05
clarkbcorvus: ya I'm with you on not understanding the problem this solves15:05
avasswouldn't you need to store a lot of images or only be able to tag the latest commit then?15:05
mordredbefore I explain step by step, let me explain the problem again just so we're on the same page15:05
fungiclarkb: mainly that we can test the image before upload but not end up burning release tags if that image isn't viable (due to changes in dependencies or whatever)15:06
mordredthis is specifically talking about retagging in docker previously built images when we push a git tag15:06
mordredand the issue is that we have code in our software that derives the version reported to the user from the git repository15:06
mordredso if you re-tag a docker image that was built from a git repo that was not tagged, you will get a version reported that is based on that git repo state - and that will continue being the case even if the docker image gets retagged to match the git tag15:07
mordreddoes that part make sense as an issue?15:07
mordredso like zuul.opendev.org currently reports Zuul version: 3.18.1.dev166 319bbacf - which is fine for us, but if we tagged 319bbacf as 3.19.0 and then did a docker re-tag of the image that corresponded to 319bbacf and published to dockerhub as 3.19.0 - their zuul would still report 3.18.1.dev166 and that might be confusing15:10
clarkbright but building on tag in the release pipeline solves that?15:12
mordredright. I am not talking about that15:12
mordredI am talking about the retagging workflow15:12
clarkbok reading scrollback what I read was "don't worry about retagging"15:12
clarkbbecause it requires speculative tagging which we don't have, but maybe I misread that15:13
mordredI believe that's "don't worry about retagging for now - we'll rebuild on tag"15:13
mordredI think ultimately if we can have a retagging solution it jibes much better with the story of "the image you publish is the image that was gated", which we otherwise go to great pains to implement - but I agree, due to the lack of speculative git tags it's not a quick implementation. I think, however, fungi's suggestion can greatly simplify some parts of the equation15:14
mordredso let me describe stepwise how this could work15:15
clarkbI'm guessing the way most deal with this is to always use the speculative version on "release"15:15
clarkbso even for change 123456 we'd tag that as 3.next15:16
clarkband drop the .dev15:16
mordredthe way most people deal with this is garbage15:16
clarkbya I'm not assigning any value to it, merely thinking about how others must be addressing this in the wild15:16
clarkband I'm guessing as soon as development begins on version 3.3.2 that is the version used for all published images until 3.3.2 is cut and 3.3.3 starts15:17
mordredno - they usually do a git commit dance15:17
openstackgerritSagi Shnaidman proposed zuul/zuul-jobs master: Add ansible collection roles  https://review.opendev.org/73036015:17
mordredso there is a version of 3.3.2-dev in the git repo, then they make a commit changing the version to 3.3.2 with a message "prepare for 3.3.2 release" then they hopefully remember to tag that commit, then they make another commit changing the version to 3.3.3-dev15:18
mordredlike - that's basically 98% of why we wrote pbr :)15:18
clarkbright, that was all before dockerhub though?15:19
clarkbon dockerhub what you've described is that others are retagging as things move15:19
clarkband if you do that having the .dev versions is ugly15:19
mordredwell - what I'm describing is us retagging15:19
mordredother people rebuild15:19
clarkbgotcha15:19
mordredso - the process to do fungi's suggestion ... we tag 319bbacf as 3.19.0-rc115:22
mordredtwo build/publish jobs are triggered15:22
mordredone builds, tests and publishes the image as-is as 3.19.0-rc1. we can test here because if we have a git rc tag with no published artifact it's not really that big of a real, we can always make an rc2 - that's fine15:22
fungi3.19.0.0rc1 but who's counting ;)15:23
fungi(or is that 3.19.0rc1? i never can remember)15:23
mordredthe other job has a pre-playbook that looks at the tag, strips the rc and locally re-tags the git repo without the rc suffix - then runs the build/test/publish sequence15:23
mordredit publishes it as, let's say 3.19.0-rc1-retagged (or something like that)15:24
mordredand the image contents of 3.19.0-rc1-retagged would be zuul software with a 3.19.0 tag applied15:24
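A minimal sketch of that pre-playbook idea, not an existing zuul-jobs playbook; it assumes the usual zuul.tag and zuul.project.src_dir variables and a 3.19.0-rc1 style tag:

```yaml
- hosts: all
  tasks:
    - name: Derive the final version from the rc tag (3.19.0-rc1 -> 3.19.0)
      set_fact:
        final_tag: "{{ zuul.tag | regex_replace('-?rc\\d+$', '') }}"

    # Re-tag only the local clone so pbr sees the final version when the
    # image is built; nothing is pushed back to the git repository here.
    - name: Replace the rc tag with the final tag locally
      shell: |
        git tag -d {{ zuul.tag }}
        git tag {{ final_tag }}
      args:
        chdir: "{{ zuul.project.src_dir }}"
```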
mordredthen - when we decide 3.19.0-rc1 was good, we tag that same commit as 3.19.015:25
funginote that we can also run tests in the release pipeline if we want, maybe create the image and stick it in a build registry, run our integration tests on it, then upload15:25
mordredfungi: yes - absolutely15:25
fungiif tests fail due to a problem with the image, we may not upload some rcs15:26
fungiwhich doesn't seem like a problem to me at least15:26
mordredthe 3.19.0 tag triggers a retagging job. that job looks at the tag, and now it doesn't have to look for latest non-merge commit - it can just look for the most recent rc commit matching the same tag15:26
fungijust fix and move on to rc215:26
mordredor - really - scratch that last part15:26
corvusmordred: i understand now, thanks.15:27
mordredit can just look for the docker image matching the most recent rc with the -retagged suffix - and it can do the retag on dockerhub15:27
mordredcorvus: cool.15:27
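So the eventual 3.19.0-triggered job could be little more than a pull/tag/push promotion, something like the following (image name and tags illustrative, registry credentials omitted):

```sh
docker pull zuul/zuul-scheduler:3.19.0-rc1-retagged
docker tag  zuul/zuul-scheduler:3.19.0-rc1-retagged zuul/zuul-scheduler:3.19.0
docker push zuul/zuul-scheduler:3.19.0
```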
corvusthe part i was missing was that we would tag the same commit twice, once with -rc, then promote an edited version of the rc15:28
corvusand both of those tag events would be retroactive15:28
*** iurygregory has quit IRC15:29
fungithat pattern is borrowed from openstack's rc model... you keep tagging a new rc and when you're happy with one, you re-tag the same commit as the release15:29
*** iurygregory has joined #zuul15:29
*** bhavikdbavishi has joined #zuul15:29
fungibut building the rc artifact as if it's the release artifact is the novel bit15:29
fungianyway, i did call it a "wild idea" and it may be more complexity than it's worth15:30
mordredyeah - I could see it working, and also see it being too much :)15:30
corvusmordred: why bother with the "one builds, tests and publishes the image as-is as 3.19.0-rc1." part?15:31
fungithe rc bit does sort of mirror how we've been logistically releasing zuul though... we do come up with a candidate commit of sorts, decide it's been sufficiently exercised, and then tag it as the release. tagging an rc there first when we identify a potential release point is just a slightly more formal declaration15:31
mordredcorvus: good point15:31
mordredcorvus: that is probably wasted15:31
corvusk.  could be useful if we really wanted an rc for people to test, but that's not how we roll :)15:31
fungii didn't originally suggest parallel jobs for rc-built-as-rc and rc-built-as-release (only the latter)15:32
mordredcorvus: yeah - in fact we could just publish the images as 3.19.0-rc1 but with the git retag done15:32
corvus(so maybe a component of a more general-purpose complete system, but not something we'd use for the zuul project itself)15:32
mordredyeah15:32
*** bhavikdbavishi has quit IRC15:42
avasswell, it still seems to me that the tags shouldn't be what triggers the release (unless you want to save every build from a merged change). but the tags should be created after a release build succeeds15:51
avassbut then you'd need a manual trigger that can start testing any commit15:52
mordredavass: yeah - the manual trigger process you describe is what openstack uses a releases repo for - someone adds a commit id there and jobs then run stuff15:54
*** ysandeep is now known as ysandeep|afk15:54
mordredavass: for instance: https://review.opendev.org/#/c/730663/2/deliverables/victoria/osc-lib.yaml15:55
mordredbut that's a bit harder to generalize15:56
avassmordred: yeah, we do the same. but I'm not sure I like that solution entirely15:56
mordredI think one of the things I like about the above rc-based system is that you only have to save the rc builds15:56
mordredand - you have the build system build those rc's as if they actually were the release - which makes them even closer to being true release candidates :)15:57
avassyep15:57
mordredthey're still not 100% speculative - in that you couldn't propose a tag, and then have a depends on somewhere else in a patch that bumps a minimum... but we don't have that anywhere15:58
avassI guess what I don't like about it is having more tags in the repo, but that's a bit nitty :)15:58
mordred(I wish we did - I actually have need for that right now)15:58
avassyou could technically create a new stub branch that tests against HEAD^15:58
openstackgerritMatthieu Huin proposed zuul/zuul master: zuul-web: support OPTIONS for protected endpoints  https://review.opendev.org/73413415:59
avassand tag that if it succeeds, but that would leave a lot of branches. Well, unless you delete them15:59
mordredyeah - and then that's kind of similar to an out-of-band file - it might not be clear what's happening, and possibly hard to implement in a reproducible manner16:00
avassyeah16:00
mordredwe could pretty easily make some jobs that implement the rc idea and just say "if you tag things using this strategy, the following will happen"16:00
mordredat the cost of some extra tags :)16:01
avassexcept that the jobs could have changed between the tagged commit and the head of the branch, and if you really want to release that commit you would need to branch out anyway16:02
fungithat's true, however the image isn't being rebuilt16:04
avassspeaking more generally :)16:04
fungioh, well sure. i think the idea with this model is that you build artifacts from the rc tag as if they're the release (by locally retagging in the jobs) and then don't rebuild again when the actual release is tagged from the same commit16:05
fungijust update external references to the rc-as-release artifacts16:06
avassyup16:06
avassI guess having changes for tags would solve all of this really.16:07
*** rpittau is now known as rpittau|afk16:08
corvusmordred: what's your need for more tags in the repo right now?16:08
avassI believe it's building + testing without using the release tag (if I got that right)16:09
openstackgerritSagi Shnaidman proposed zuul/zuul-jobs master: Add ansible collection roles  https://review.opendev.org/73036016:09
corvusmordred, avass: ah, i misread that, thanks :)16:11
*** ysandeep|afk is now known as ysandeep16:15
openstackgerritMatthieu Huin proposed zuul/zuul master: zuul-web: refactor auth token handling code  https://review.opendev.org/73413916:23
*** dennis_effa has quit IRC16:24
openstackgerritMatthieu Huin proposed zuul/zuul master: zuul-web: support OPTIONS for protected endpoints  https://review.opendev.org/73413416:26
openstackgerritMatthieu Huin proposed zuul/zuul master: zuul-web: refactor auth token handling code  https://review.opendev.org/73413916:29
*** ysandeep is now known as ysandeep|away16:30
*** sshnaidm is now known as sshnaidm|afk16:34
zbrcorvus: avass : https://review.opendev.org/#/c/723534/ ? (more)16:36
sshnaidm|afkcores, please take a look: https://review.opendev.org/#/c/730360/16:37
avasszbr: oh, nice16:38
avasssshnaidm|afk: looking :)16:41
zbravass: i wonder if you could create another linter rule that fails if someone adds a new role that does not have testing jobs.16:41
avasszbr: you don't need a linter rule for that, just something like "find roles -mindepth 1 -maxdepth 1 | xargs -I {} bash -c 'git grep -q {} -- test-playbooks'"16:44
zbrit's not so easy, we still have a few w/o tests16:45
zbri think it's a bit tricky because you could have a broken trigger rule and still fail to test the change16:46
zbrnot sure how to enforce something like this, we may even have jobs that test a group of roles16:46
zbrthere is no 1-1 match, afaik16:47
avasszbr: replace find with: git diff-tree --name-only -r --diff-filter=A HEAD | grep roles | awk '{split($0,a,"/"); print a[2]}' | uniq16:51
avass:)16:52
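Stitched together, the two one-liners above would look roughly like this (an untested sketch, assuming it runs from the zuul-jobs repo root):

```sh
# List roles added in HEAD that no test playbook mentions.
git diff-tree --name-only -r --diff-filter=A HEAD \
  | awk -F/ '/^roles\//{print $2}' | sort -u \
  | while read -r role; do
      git grep -q "$role" -- test-playbooks \
        || echo "no test playbook mentions: $role"
    done
```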
corvuszbr, avass: i think it's probably better that we ask zuul-jobs reviewers to think about the test coverage when adding roles.  determining whether a test is adequate should be part of any change.16:52
avasscorvus: yeah I agree :)16:52
zbrindeed, not everything needs to be automated16:52
avassbut automating everything is the goal isn't it? ;)16:54
corvushow about a compromise: we automate the test tests after we've automated writing the code to start with?  :)16:54
*** jcapitao has quit IRC16:55
avassor we write the tests first and let an ai write the code to fulfill the tests!17:00
fungi"automate all the things it makes sense to automate" is a mouthful, and not as catchy ;)17:01
openstackgerritFabien Boucher proposed zuul/zuul master: WIP gitlab: implement git push support  https://review.opendev.org/73415917:05
*** fbo is now known as fbo|off17:08
*** bhavikdbavishi has joined #zuul17:17
*** jpena is now known as jpena|off17:20
*** nils has quit IRC17:25
*** dennis_effa has joined #zuul17:37
openstackgerritMerged zuul/zuul master: Make task errors expandable  https://review.opendev.org/72353417:49
openstackgerritMatthieu Huin proposed zuul/zuul master: zuul-web: support OPTIONS for protected endpoints  https://review.opendev.org/73413417:51
openstackgerritMatthieu Huin proposed zuul/zuul master: zuul-web: refactor auth token handling code  https://review.opendev.org/73413917:52
openstackgerritMatthieu Huin proposed zuul/zuul master: zuul-web: refactor auth token handling code  https://review.opendev.org/73413917:58
openstackgerritMerged zuul/zuul master: Fix quickstart gating, Add git name and email to executor  https://review.opendev.org/72409618:03
*** saneax_AFK has quit IRC18:03
*** dustinc has joined #zuul18:16
*** rlandy is now known as rlandy|mtg18:20
avasscorvus: would it make sense to extend nodepool-builder for dib with ec2? I found out how nice that tool actually is and might set up some roles to import images to aws :)18:23
avasscorvus: and do we want to support multiple builders, like amazon image builder as well?18:24
Shrewsavass: there has been talk in the past about having a "build API" similar to how nodepool has a "driver API", and i think that would be a welcome change. It's not something I had in mind when designing the initial code, so there will be some significant (I'm guessing) reworking of that to allow such an API. It would take someone familiar with more than one or two different build processes, I would think, to make it a good, extensible API.18:29
avassShrews: I'm looking into it a bit at a time, we're not using it right now since it only supports dib and building something in AWS is a bit easier than importing it through s3.18:30
mordredavass: for context- the very original version of nodepool made images by booting vms in cloud and then saving them (the approach things like packer use) and we abandoned it due to its fragility at scale - and also so that we could make images that were actually consistent no matter who the cloud provider was18:32
avassmordred: yeah it makes sense18:32
mordredavass: so I'd still advocate for building images and uploading them to ec2 rather than snapshotting an image based on a base ami18:32
mordredBUT18:32
mordredthat's what I'd suggest- I can't see a reason to make a hard rejection of a driver that did it the other way18:33
mordredit just seems much much harder18:33
avassnot needing credentials and extra resources to build images is a plus as well :)18:33
mordredyah18:33
mordredalso - corvus has been working on code for dib to be able to use arbitrary docker image as the source of an initial root filesystem18:34
mordredend-goal there being that you should be able to just express what you want in your cloud images in a dockerfile18:34
avassthat would be cool18:34
mordredand then have dib transform that into a bootable cloud image18:34
mordredall that said - we should totally grow the support for nodepool-builder to interact with aws :)18:35
avassyeah, I'm planning on 'whenever I get time' create some roles for s3 log uploads and things like that. but there's always so much to do :)18:36
fungithe biggest reason for the current builder design is that we wanted to be able to upload consistent images to a dozen or more regions of independent service providers18:37
fungiand the "easiest" way to do that was build once upload everywhere, rather than trying to force builds in multiple locations to produce identical results18:37
mordredavass: "whenever I get time" is my favorite schedule18:38
openstackgerritMatthieu Huin proposed zuul/zuul master: zuul-web: support OPTIONS for protected endpoints  https://review.opendev.org/73413418:43
openstackgerritMatthieu Huin proposed zuul/zuul master: zuul-web: refactor auth token handling code  https://review.opendev.org/73413918:44
*** rlandy|mtg is now known as rlandy18:56
*** hashar has joined #zuul18:59
avassoh and getting support for other builders might be necessary unless dib wants to support windows :)19:04
fungisure... i mean, basically nodepool already has support for using whatever process you like to create images and then just referencing those in your configuration19:05
fungiyou're on the hook for making sure they exist in your provider(s)19:05
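For the AWS driver that "just referencing those in your configuration" looks roughly like the sketch below; the IDs are placeholders and the exact field names should be checked against the nodepool driver documentation for your version:

```yaml
providers:
  - name: ec2-us-west-2
    driver: aws
    region-name: us-west-2
    cloud-images:
      # A pre-built AMI, produced by whatever process you like.
      - name: debian-buster
        image-id: ami-0123456789abcdef0
        username: admin
    pools:
      - name: main
        max-servers: 5
        subnet-id: subnet-0123456789abcdef0
        security-group-id: sg-0123456789abcdef0
        labels:
          - name: debian-buster
            cloud-image: debian-buster
            instance-type: t3.medium
            key-name: zuul
```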
clarkbya I think the key bit is what shrews pointed out, we'll need someone with enough familiarity with more than one tool/process to produce something that makes sense. But the basic design is already set up to support various builders19:06
fungibut i do think extending the builder could make sense19:06
clarkbI expect that we could get pretty far by making the command to run configurable (and maybe removing elements as top level config). Then rely on the command configuration as well as env vars to produce the expected result19:07
clarkbbut having never used amazon image builder I couldn't say for sure. I think that would work for packer though19:07
avassyeah amazon image builder might have been a bad example19:08
clarkbthen we end up with the tricky bit being upload if the builder operates like dib (because right now we can only upload to openstack clouds)19:08
fungithe only non-openstack cloud i've ever touched was a vmware-based one i used to run for a hosting company ~10 years ago, so i'm really no help19:09
avassI haven't actually used amazon image builder myself, so we'll probably stick with packer for now19:15
avassand since we'll probably end up using both azure and aws we might contribute something from that :)19:15
*** bhavikdbavishi has quit IRC19:20
mhuhey there, could I have some reviews on these: https://review.opendev.org/#/q/status:open+project:zuul/zuul+branch:master+topic:CORS_OPTIONS please?19:43
AJaegermhu: could you review https://review.opendev.org/#/c/731606/, please?19:46
AJaegermhu: that's a policy update that you might be interested in.19:46
*** dennis_effa has quit IRC19:52
*** hashar has quit IRC20:11
*** harrymichal has joined #zuul20:23
*** y2kenny has joined #zuul21:04
openstackgerritJames E. Blair proposed zuul/zuul master: Contain pipeline exceptions  https://review.opendev.org/73391721:09
openstackgerritJames E. Blair proposed zuul/zuul master: Detect Gerrit gate pipelines with the wrong connection  https://review.opendev.org/73392921:09
corvustristanC: ^ that should be ready to go now21:09
y2kennyCan someone tell me how to specify a tox test environment to test?  (for example, testenv:functional_kubernetes)  I just started doing driver dev and I am totally new to tox21:13
y2kenny(nodepool driver)21:14
fungithe generic tox job from zuul-jobs takes a tox env name as a parameter i think... pulling up docs21:15
fungioh, wait, you mean locally?21:15
fungitox -e functional_kubernetes21:15
fungisorry i thought you were asking about jobs there for a moment, my bad ;)21:16
y2kennyyes locally... Ah ok thanks.  For some reason I thought only py3/pep8, etc. applies to the -e21:17
y2kennyand looks like I also had a typo before... I can continue now, thanks!21:20
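For the CI side fungi alluded to earlier, the generic tox job from zuul-jobs takes the environment via its tox_envlist variable; a hedged sketch (job name illustrative):

```yaml
- job:
    name: nodepool-functional-kubernetes-tox
    parent: tox
    vars:
      tox_envlist: functional_kubernetes
```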
y2kennyum... I am getting a yaml load error (load() without Loader is deprecated, as the default Loader is unsafe)... am I missing some dependencies?21:33
y2kennyand for some reason that error comes from kube_config.py...21:34
corvusy2kenny: that should just be a warning which you can probably ignore21:40
y2kennyfor some reason it shows up under stderr21:40
corvusy2kenny: yes, python warnings get written to stderr, but they generally don't affect execution of the program21:41
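For context, that warning is PyYAML's generic deprecation notice, not nodepool-specific; passing an explicit Loader (or using safe_load) is the standard way to silence it:

```python
import yaml

# Emits "load() without Loader is deprecated" on PyYAML >= 5.1:
# data = yaml.load("a: 1")

data = yaml.load("a: 1", Loader=yaml.SafeLoader)  # no warning
same = yaml.safe_load("a: 1")                     # equivalent shortcut
```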
y2kennyanother error I see... load() takes 3 positional arguments but 4 were given...21:41
y2kennya lot of those...21:41
corvusnow that's a problem :)21:41
y2kennyOh...21:41
y2kennyI think it's my driver21:41
y2kennyok...21:41
y2kennyboth the k8s and openshift functional tests require minikube, right?21:42
corvusthe openshift functional test job in the gate uses an openshift installation rather than minikube21:44
corvusy2kenny: you can check the playbooks/nodepool-functional-openshift/pre.yaml playbook to see how it's set up21:44
y2kennyok.21:44
y2kennythanks21:44
*** harrymichal has quit IRC22:04
*** threestrands has joined #zuul22:15
*** threestrands has quit IRC22:16
*** threestrands has joined #zuul22:17
*** threestrands has quit IRC22:18
*** threestrands has joined #zuul22:18
*** threestrands has quit IRC22:19
*** threestrands has joined #zuul22:20
*** threestrands has quit IRC22:21
*** threestrands has joined #zuul22:21
openstackgerritMerged zuul/zuul master: Contain pipeline exceptions  https://review.opendev.org/73391722:22
*** threestrands has quit IRC22:22
*** threestrands has joined #zuul22:23
*** threestrands has quit IRC22:24
*** threestrands has joined #zuul22:24
*** threestrands has quit IRC22:25
*** threestrands has joined #zuul22:26
*** threestrands has quit IRC22:27
*** threestrands has joined #zuul22:27
*** threestrands has quit IRC22:28
*** threestrands has joined #zuul22:29
*** threestrands has quit IRC22:30
*** threestrands has joined #zuul22:30
*** dmellado_ has joined #zuul23:10
*** dmellado has quit IRC23:11
*** dmellado_ is now known as dmellado23:11
*** tosky has quit IRC23:13
