Wednesday, 2017-08-16

mordredpabelanger: it's like we're all grown up :)00:12
pabelangerIKR00:12
fungimordred: belated "yeseroo" on the moving zuul into the now correct group question00:26
fungialso, yay!00:27
*** smyers has quit IRC05:22
*** smyers has joined #zuul05:23
*** isaacb has joined #zuul07:19
*** isaacb has quit IRC08:14
*** amoralej|off is now known as amoralej08:25
*** isaacb has joined #zuul08:27
*** electrofelix has joined #zuul09:09
*** jkilpatr has quit IRC10:54
*** isaacb has quit IRC11:09
*** isaacb has joined #zuul11:10
*** jkilpatr has joined #zuul11:29
electrofelixFor zuul 2.5.2, noticed that the filter for zuul is split on a ',' however GitHub changes are reported back as '<PR>,<SHA>' but this can match gerrit patchset numbers for new github projects or gerrit change ids for older github projects12:19
electrofelixmeaning it's difficult to create a status_url to take you to only one change12:20
electrofelixand when I looked to add the similar status filter for Gerrit changes I can only seem to find '<Change>,<Patchset>', in both cases this can result in matching multiple items because there doesn't appear to be a way to perform 'AND' matching, it defaults to 'OR'12:20
electrofelixIs this something that will be changed for zuulv3? what direction is being considered?12:32
dmsimard|offmordred: fyi https://review.openstack.org/#/c/487853/12:33
mordredelectrofelix: I believe it all works for v3 already - check out https://github.com/gtest-org/ansible/pull/112:38
mordredelectrofelix: (that's openstack's zuulv3 working with that gh repo)12:39
mordreddmsimard|off: woot - I'll dust that off12:39
fungizuulv3.o.o has broken...12:41
fungidb socket issues12:42
fungimaybe it's timing out the connection due to inactivity?12:42
fungihttp://paste.openstack.org/show/61852212:43
electrofelixmordred: the code in the filter still performs a split on ',' so you can't populate the filter string with something like 1,2 to pick up the second patchset of the first change12:45
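(An illustrative Python rendering of the matching behaviour electrofelix describes; the real filter is the status page's jquery code, so the function and values here are made up:)

    # The filter splits the query on ',' and OR-matches the pieces, so a
    # query of "1,2" (change 1, patchset 2) also matches anything whose id
    # contains a bare '1' or '2' -- including an unrelated change.
    def status_filter_matches(query, item_id):
        terms = query.split(',')
        return any(term in item_id for term in terms)  # OR, never AND

    status_filter_matches('1,2', '1,2')   # True: the intended hit
    status_filter_matches('1,2', '2,1')   # True as well: wrong change matches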
fungii'm manually installing mysql-client on zuulv3.o.o for now to test connectivity to the trove instance12:46
mordredfungi: kk \o/12:46
fungithe dburi from the 'connection "mysql"' section in /etc/zuul/zuul.conf gets me working access to the trove instance from zuulv3.o.o fwiw12:47
mordredelectrofelix: ah - then that sounds like a new issue12:47
fungii'll check the db configuration there and see if it's missing our sane timeout default overrides12:48
mordredfungi: oh goodie. my expectation is that the dbapi connection would reconnect12:48
mordredfungi: oh - it probably is12:48
fungiconfirmed, still has the default configuration12:51
fungiand we don't seem to have any alternative configurations built in dfw yet applicable to mysql 5.7 instances12:51
fungii'll put one together based on our mysql 5.6 "sanity" config12:52
electrofelixmordred: What I have noticed is I can search on 1_2 instead, so I might tweak what we are running locally to change the status_url that is posted back from being 1,2 for gerrit or 1,<sha> for github to being 1_2 and 1_<sha> respectively12:53
mordredfungi: thanks! and sorry about that12:54
*** dkranz has joined #zuul12:56
fungiconfiguration created, applied, and instance restarting now12:56
mordredelectrofelix: oh - you mean the /status/{change} zuul page12:56
electrofelixmordred: yeap, where the jquery code allows filtering of what is shown12:56
mordredelectrofelix: I had misunderstood the nature of your question12:57
mordredelectrofelix: cool12:58
electrofelixmordred: however even using the panel_change which uses 'id', a gerrit change being matched on 1_2 will also match a github PR 1 if the sha1 starts with a '2', so it's not without some limitations13:01
electrofelixso using the status page to filter for a single change is a bit of a hack at the moment13:02
SpamapSshould probably add source name13:03
mordredelectrofelix: indeed. I think we should probably make it a richer api somewhere - like change=1,patchset=2 or something (or /status/{change} and /status/{change}/patchset/{patchset} )13:03
*** isaacb has quit IRC13:04
SpamapSif it were github:1_2 or openstackgerrit:1_2 it would be unambiguous13:04
SpamapSor / or whatever13:04
*** isaacb has joined #zuul13:04
electrofelixI'll need to test the /status/{change} page in zuulv3 just to see if it behaves the same as the filtering in zuulv2, but it did look like it might work the same from the jquery code13:05
fungihas anybody looked into the frequent job restarts for the zuulv3 server yet? i just saw a doc-build job in the gate pipeline complete, then restart13:06
fungiand saw some changes repeatedly failing yesterday on retry limit13:06
electrofelixmordred: Should there be a way to filter on a combination of {project} & {change}? as that should be unique, while {change} & {patchset} might not be if you are using multiple sources13:07
mordredfungi: each time I've looked it's been an issue fetching from a mirror13:09
mordredfungi: I wonder if we're missing a retry loop in one of our base jobs - I'll look in to that right now13:10
mordredelectrofelix: yes, I think so. also, there's work started on a richer dashboard too - we've been deferring it a bit as we get the other bits out the door - but in short, yes, all such things should be possible13:11
fungii'm poking around in documentation trying to see what we should be doing with sqla to self-heal on broken pipe exceptions for pymysql, though i'm a bit out of my depth there13:11
SpamapSbroken pipe?13:12
fungiSpamapS: suspect this is a closed socket for an inactivity timeout (the default server-side wait_timeout in rackspace's trove instances is something absurdly short, like on the order of minutes)13:13
fungiSpamapS: tracebacks at http://paste.openstack.org/show/618522/13:14
fungiwe override it back to the mysql upstream value normally, but that's a manual process to apply a nonstandard configuration to the instance, and it had been overlooked for the one created to serve as the target for the zuulv3.o.o mysql reporter13:15
fungii see some recommendations of using http://docs.sqlalchemy.org/en/rel_0_9/core/engines.html#sqlalchemy.create_engine.params.pool_recycle to avoid that13:17
fungithough it seems like a bit of a workaround (isn't it possible to automatically reestablish the query socket and retry on a failure of that sort?)13:17
openstackgerritMonty Taylor proposed openstack-infra/zuul-jobs master: Retry updating apt-cache  https://review.openstack.org/49420413:21
mordredfungi: ^^13:21
mordredfungi: it is - but I actually think we should use that setting and set it to 113:22
mordredfungi: mysql connections are not heavyweight like postgres or oracle connections - so most of the idea of a "connection pool" when it's used to keep an established connection around is an anti-pattern for mysql13:23
fungiright, that's what i thought too13:23
mordredfungi: connection pools to limit the number of active concurrent connections are good of course :)13:23
fungiahhhh... got it13:24
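(A minimal sketch of the pool_recycle setting under discussion, using SQLAlchemy's documented create_engine parameter; the dburi is illustrative, not the real trove credentials:)

    import sqlalchemy

    # pool_recycle=1 treats any connection that has sat idle for more than
    # a second as stale, so sqlalchemy reconnects instead of reusing a
    # socket the server's wait_timeout may already have closed.  The
    # default of -1 never recycles.
    engine = sqlalchemy.create_engine(
        'mysql+pymysql://zuul:secret@trove-host/zuul',
        pool_recycle=1)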
SpamapSfungi: I'd personally go with a longer wait_timeout. MySQL connections are pretty cheap to keep idle.13:35
SpamapSEspecially in this instance.13:35
mordredheh. we have the exact opposite viewpoint on this13:35
mordredmy recommended best practice is wait_timeout=013:36
mordredand never keeping an idle connection13:36
fungiSpamapS: yeah, i mean we do the longer wait_timeout normally. i'm more concerned about downstream users of zuul who may have shorter timeouts thrust upon them and making sure the server is robust in the face of such situations13:36
fungialso i suspect there are other sorts of network disruptions which could cause similar symptoms, not just timeouts to idle sockets13:37
mordredyup. this is amongst the reasons I recommend always reconnecting - the mysql protocol doesn't have any sort of heartbeat or keepalive - so with long-lived idle connections you're always at risk, at the start of an action, that your connection doesn't work anymore - sometimes due to switches or routers dropping connections in between13:38
mordredto protect against that, it's common to issue a statement like select 1; before doing any real work13:38
mordredof course, as soon as you do that, you just did a network round trip and completely negated literally every amount of value in a long-lived connection13:38
mordredthat being that a long-lived connection saves exactly one network roundtrip which is the cost of establishing a connection13:39
fungii thought one of the libs (maybe it wasn't pymysqlclient?) had a built-in query ping which would transparently reconnect the socket on failure13:39
mordredright. that's what I'm saying -that's crazypants13:39
mordredit's a total waste of energy - establishing a pool of long-lived connections and then issuing a query ping to trigger a reconnect is complexity that provides no value13:40
fungidoes seem a little hacky, i have to agree on that point13:40
SpamapSmordred: In high scale I'm with you. This is not that.13:40
mordredit makes TOTAL sense if you're using oracle, postgres or mssql where the connection is heavy weight13:41
mordredSpamapS: I'm saying disconnecting is EASIER13:41
mordredit's less work for small scale, and it works more efficiently at large scale13:41
SpamapSand most libraries do this automatically. Surprised to even see it's a problem.13:41
mordredif you just set pool_recycle=1 in sqlalchemy, there is no more tuning or work that is needed13:41
SpamapSAh so there's just magic sauce that is turned off. Why is that turned off???13:42
SpamapS:-P13:42
mordredsqlalchemy defaults to -1 which is "never recycle" - because sqlalchemy defaults to a pg-centric worldview13:42
mordredand you have to tell sqlalchemy that you'd prefer it to behave appropriately for mysql13:42
SpamapSahhhhhhhhhh13:42
fungiso, easy fix sounds like13:43
SpamapSI figured this was deeper. :-P13:43
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Recycle stale SQL connections  https://review.openstack.org/49421013:43
mordredthereyago13:43
fungithanks!13:44
fungii would not have found my way there on my own13:44
mordredfungi, SpamapS: we could also go the other route and do pool_pre_ping=True13:51
fungimordred: ahh, right, that's the thing i was thinking of13:52
fungithat's what i was looking into in an attempt to fix similar issues we were encountering (and sometimes still do) with lodgeit/paste.o.o13:52
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Recycle stale SQL connections  https://review.openstack.org/49421013:55
mordredthe docs seem to indicate that pool defaults to None13:56
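(The alternative mordred floats, sketched; pool_pre_ping is a newer SQLAlchemy create_engine flag, so verify the installed release supports it:)

    import sqlalchemy

    # pool_pre_ping=True issues a lightweight liveness check before each
    # connection checkout and transparently reconnects on failure -- the
    # extra round trip mordred argues negates the value of pooling.
    engine = sqlalchemy.create_engine(
        'mysql+pymysql://zuul:secret@trove-host/zuul',  # illustrative dburi
        pool_pre_ping=True)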
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Recycle stale SQL connections  https://review.openstack.org/49421014:44
jeblairfungi: (moving -infra conversation here)14:46
jeblairit looks like the jobdir playbook directory only contains links to repos which are in other directories, and any secrets.yaml files needed for that playbook14:47
jeblair(filesystem symbolic links i mean)14:47
jeblairso from a space usage pov, it's entirely possible for that to be a tmpfs14:48
openstackgerritMonty Taylor proposed openstack-infra/zuul-jobs master: Add two roles for publishing artifacts over ssh  https://review.openstack.org/49423014:49
jeblairwe may be able to use the --tmpfs, --symlink, and --file bwrap options to create that setup.14:50
mordredjeblair: neat14:52
jeblairfungi: and apparently ssh-add will read a key on stdin (that doesn't appear to be documented in the man page, at least, but i have experimentally verified it).  using that, we may be able to avoid having the key touch the disk in the job as well.14:52
fungijeblair: that sounds even better still, agreed!14:54
mordredjeblair, pabelanger: so - question in my mind from those other two patches - if a secret could be requested for a job by an alternate name - then in theory someone could do "post: - publish-openstack-artifacts: secrets: - my-local-project-secret: name: fileserver" - and publish artifacts to their own fileserver using our trusted job14:54
jeblair(that last bit depends on some hand-wavey ansible)14:54
mordredjeblair: oh awesome - that was my first thought re: needing to write the file to disk14:55
fungishred is an imperfect belt-and-braces effort to work around those scenarios, so definitely not preferable if there are more elegant alternatives like cache-only filesystems or fifos14:55
jeblairmordred: yes! i think you may have found a really good reason *not* to bundle the hostname with the secret (unless that's the intention).14:55
fungigranted, those still expose a similar risk if the server doesn't employ encrypted swap14:56
fungioh, though tmpfs normally shouldn't get swapped out i suppose14:56
fungibut it's of course possible for the copies in application memory allocations to be paged out still14:56
fungiso, you know, no security solution is perfect, defense in depth, et cetera14:57
fungithe tmpfs and/or pipe solutions seem much better than relying on shred at least14:57
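(A Python sketch of the stdin trick jeblair mentions; whether ssh-add accepts '-' to mean standard input varies by OpenSSH version, so treat that argument as an assumption to verify against the target build:)

    import subprocess

    def add_key_without_touching_disk(private_key):
        # Feed the key to the agent over a pipe so it never lands on the
        # filesystem; '-' asks ssh-add to read standard input.
        subprocess.run(['ssh-add', '-'],
                       input=private_key.encode('utf-8'),
                       check=True)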
pabelangermordred: jeblair: roles lgtm14:59
jeblairmordred: oh -- we can also set 'final: true' to prohibit that.14:59
mordredI'm saying the opposite thing14:59
mordredwhat i'm saying is that the roles are general and describe an operation that's potentially safely re-usable14:59
jeblairmordred: to summarise -- if we want a tarball upload job to be reusable, then putting the connection information in secrets or job variables is good.14:59
mordredyes!15:00
jeblairmordred: if we don't want it to be reusable, then hard-coding the connection info and/or setting final:true is good.15:00
mordredyes15:00
jeblairmordred: yeah, so if we want this to be *fully* generalizable, then we need to make an "api" for the job so that folks know whether they should supply the connection information in secrets or job variables.  either will work.15:01
mordredthe missing link with the secret is that the playbook has to refer to the secret by the secret's name - we can't hand secret a to job b to show up as variable name c15:01
jeblairmordred: ah, i missed that first time through, but yes.15:01
mordredsince the user would need to re-use the job and its playbook itself15:01
jeblairmordred: i think we can do the usual scalar-or-dict thing with secrets to add an optional name field.15:02
mordredjeblair: I think this works fine for us for now - mostly raising that if we could request a secret and set the name by which it's exposed in a job, that might be a nice feature add15:02
mordredjeblair: ++15:02
pabelangermordred: I'm pretty sure we could just make https://review.openstack.org/#/c/494230/1/roles/publish-artifacts-to-fileserver/tasks/main.yaml its own role too, we'd basically need to do the same for logs.o.o at some point15:03
mordredI mean, I'm not sure who exactly wants openstack zuul to build an artifact that they rsync over ssh to an external server - but there's remarkably little to prevent that :)15:03
pabelangerwrite private key, ssh-add, shred private key, add known hosts15:03
pabelangergoing to be pretty common15:03
mordredpabelanger: gah - I got those two backwards in that patch15:04
mordredpabelanger: one sec ...15:04
pabelangerOh wait15:04
pabelangerit already is a role15:04
pabelangersorry, just noticed15:04
openstackgerritMonty Taylor proposed openstack-infra/zuul-jobs master: Add two roles for publishing artifacts over ssh  https://review.openstack.org/49423015:05
mordredpabelanger: yup - I just did the copy/paste/delete backwards - your thought was my thought too :)15:05
fungi489691 got me thinking... what is the current behavior if someone does attempt to (either accidentally or intentionally) define a job under a name which is already in use by another repo? does zuul refuse to test that change? fail? undefined behavior still?15:13
mordredfungi: fails with a syntax error, iirc15:14
fungiokay, good15:16
fungii assumed someone had already thought of that as a potential avenue for confusion or abuse15:17
mordredfungi: yah- we still need to tell folks to be friendly and prefix their local jobs with their project name or something like that15:28
mordredfungi: and then see how far 'be nice to your neighbors' gets us15:28
*** yolanda has quit IRC15:44
*** yolanda has joined #zuul15:44
*** isaacb has quit IRC15:52
pabelangerFirst job using zuulv3 secrets: http://logs.openstack.org/89/489689/15/check/publish-openstack-python-branch-tarball/45a2698/15:54
pabelanger\o/15:54
mordredpabelanger: that's super awesome!16:06
pabelangerI'm going to move to testpypi.python.org credentials now16:06
mordredpabelanger: I verified that no secrets were emitted into any files on the log server16:07
mordreddmsimard|off: in ara - what do you do about information that is otherwise protected by things like no-log - do you also skip writing that data to your database?16:07
dmsimard|offmordred: it's handled by Ansible's callback hook thing, _dump_results I think ?16:08
mordreddmsimard|off: ok - so you're taking advantage of that, which means the data you're storing is pre-stripped16:09
dmsimard|offmordred: the other perhaps relevant thing is related to the ansible-playbook arguments (the "parameters" tab in the UI) which, for example, doesn't save extra-vars by default but can also be tweaked to ignore (or not) other arguments16:09
dmsimard|offmordred: right, the data in the database is pre-stripped16:09
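(A sketch of the hook dmsimard|off names; CallbackBase._dump_results is a real Ansible callback method that censors no_log results, while the subclass here is illustrative, not ara's actual code:)

    from ansible.plugins.callback import CallbackBase

    class CallbackModule(CallbackBase):
        def _dump_results(self, result, *args, **kwargs):
            # The base implementation replaces no_log output with a
            # censored placeholder, so anything persisted from here --
            # as ara does into its database -- is already pre-stripped.
            dumped = super(CallbackModule, self)._dump_results(
                result, *args, **kwargs)
            return dumped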
mordreddmsimard|off: ok - cool. the things we pass in as extra_vars are, in fact, things we would like to be ignored - so that behavior is good for us16:10
dmsimard|offso, back to arguments, say you do something like -e password=somethingsecret that won't be saved by default16:10
dmsimard|offbut you can remove that config to allow extra vars to be saved16:10
mordreddmsimard|off: -e@secrets.yaml will work the same way16:10
mordreddmsimard|off: oh golly no - we want those to not be stored, so that's perfect :)16:11
dmsimard|offblog post about 1.0 highlights will be up today btw, will let you know16:12
mordredpabelanger, jeblair, SpamapS: https://review.openstack.org/#/c/487853 is good to go now (as is https://review.openstack.org/#/c/488214)16:14
mordreddmsimard|off: oh - I meant to ask you a question about your db design when you tweeted it - can you re-share that link?16:18
dmsimard|offmordred: that graph is no longer relevant, I took a sledgehammer to the model already :)16:25
mordreddmsimard|off: awesome. and I was just looking at your models.py file anyway16:25
dmsimard|offmordred: still happy to get input about the current state though16:26
dmsimard|offmordred: make sure you look at feature/1.0 branch16:26
mordreddmsimard|off: the concern I had from your graph was a key-value table off on the side that I didn't understand - but the one you have in there now makes more sense to me16:26
mordredah- lemme look at that16:26
dmsimard|offmordred: I can also get an updated relationship graph up if need be :)16:27
*** isaacb has joined #zuul16:27
mordredk. same thing - Record was confusing on the graph - but your comment makes it make sense16:27
dmsimard|offmordred: there's two less tables, some fields that have been taken out, some new relations, etc.16:27
mordredara_record allows someone to record arbitrary information - so I agree a k/v table seems fine16:27
mordred(I always get worried when I see a k/v table in a rdbms so wanted to double-check)16:28
dmsimard|offmordred: yeah, it's just a generic way to attach arbitrary data to a playbook report -- see the record tab in the UI here for examples: http://logs.openstack.org/70/494070/2/check/gate-ara-integration-py27-latest-centos-7/2849bbb/logs/build-playbook/16:29
dmsimard|offPeople asked for ARA to save many things (git versions, or whatever) I didn't want to include by default so I gave them the means to save whatever they want :)16:30
mordreddmsimard|off: ++16:38
mordredjeblair: my patch needed for https://review.openstack.org/#/c/492557 merged and upstream cut a new release with it16:41
mordredjeblair: I just hit recheck on it so we can verify, but it's ready for review now - pabelanger, SpamapS ^^16:41
pabelangermordred: one day for the depends-on in patch :)16:43
jeblairze02 is online and running jobs16:46
pabelangerWoah, nice16:47
SpamapSneat!17:02
*** isaacb has quit IRC17:07
mordredpabelanger: right?17:13
mordredjeblair: yay!17:13
dmsimard|offmordred: there ya go http://rdo.dmsimard.com:1313/2017/08/16/ara-1.0-whats-coming/17:23
dmsimard|offer17:23
dmsimard|offwrong paste :) https://dmsimard.com/2017/08/16/whats-coming-in-ara-1.0/17:23
*** bhavik1 has joined #zuul17:38
mordredjeblair: in our zuul-jobs doc building, is there any way to cross-reference a zuul glossary concept?17:43
*** electrofelix has quit IRC18:00
SpamapSlooks like jobs are a bit stuck18:02
SpamapShttp://zuulv3.openstack.org/static/stream.html?uuid=4485757750cb48d1b7b4c0acf2d60d41&logfile=console.log18:02
SpamapS47 minutes at that point18:02
SpamapSwe could maybe drop the tox-py35 timeout for zuul since it's usually done in < 10min18:03
SpamapSso...18:07
SpamapSis there anything i can help with?18:07
* SpamapS is just poking at BonnyCI's zuulv3 migration but otherwise not really zuulv3-ing much18:08
pabelangerwe're having networking issues in infracloud, so could be related to long runtimes18:09
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Add sphinx-autodoc-typehits sphinx extension  https://review.openstack.org/49255718:09
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Collect logging information into ara callback  https://review.openstack.org/48785318:09
pabelangeror maybe just slow compute node18:09
jeblairmordred: yes!  https://docs.openstack.org/infra/zuul/feature/zuulv3/developer/docs.html#terminology18:10
jeblair(i get a point every time i can answer a question with a docs link, right?)18:11
SpamapSpabelanger: former seems more likely.18:11
jeblairpabelanger: internap is working now, right?18:14
jeblairpabelanger: (the mirror hostname fixes are in place everywhere?)18:14
pabelangerjeblair: not yet, we need to restart nl01 to add cloud info into zk18:14
jeblairpabelanger: would you mind doing that?  when you do, we can stop using infracloud.18:15
pabelangerwaiting for https://review.openstack.org/493088/18:15
pabelangerbut yes, I can restart18:16
mordredjeblair: yes - but can I use those in a README in a role in zuul-jobs and have them point to the zuul docs?18:16
jeblairmordred: oh, i'm sorry i missed 'zuul-jobs' in your question.  no point for me.  :(18:16
mordredjeblair: :)18:17
jeblairmordred: not at the moment; it'd just have to be a regular RST hyperlink18:17
pabelangerokay, nl01.o.o restarted18:17
mordredjeblair: if we can't do that yet, we can add it to the todo-list... I mostly wanted to be able to link to secrets18:17
jeblairmordred: part of me wants to add stuff like that to zuul-sphinx, but i kind of think it may be a little bit of a waste of time to do it before we have all-job-docs-auto-built-from-zuul-job-api.18:17
mordredjeblair: k. I'm going to leave it alone for now since the v3 docs are published to a feature branch location anyway and I don't want to add too much cruft we have to remember to replace18:17
jeblairmordred: indeed18:18
mordredjeblair: ++18:18
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Create nodepool.cloud inventory variable  https://review.openstack.org/49308818:18
mordredjeblair: I mean, we COULD register an intersphinx mapping to zuul's docs18:18
jeblairmordred: yeah, though does that still hit the feature branch problem?18:18
mordredjeblair: when we do the job doc builds - but yeah, feature branch problem18:19
mordredjeblair: so let's wait18:19
jeblairkk18:19
jeblairSpamapS: do you want to tackle the add-build-sshkey item on https://etherpad.openstack.org/p/AIFz4wRKQm  (currently lines 42-44) ?18:20
mordredjeblair: ah - there's not a secret glossary anyway18:20
jeblair(probably should be :)18:20
SpamapSmmm ssh keys :)18:20
SpamapSThe Zuulonomicon should definitely have an entry for that.18:28
*** bhavik1 has quit IRC18:28
openstackgerritMerged openstack-infra/zuul-jobs master: Retry updating apt-cache  https://review.openstack.org/49420418:30
pabelangermordred: do you want to rebase 49423118:30
openstackgerritMerged openstack-infra/zuul-jobs master: Add two roles for publishing artifacts over ssh  https://review.openstack.org/49423018:31
mordredoh. hrm18:34
mordredone sec18:34
openstackgerritMonty Taylor proposed openstack-infra/zuul-jobs master: Document and update fileserver roles  https://review.openstack.org/49429118:36
mordredpabelanger: I had been working on an update to https://review.openstack.org/494230 :)18:36
mordredpabelanger: and  yes- I have to afk for a bit - but I will rebase 494231 a soon as I return18:37
*** amoralej is now known as amoralej|off18:47
jeblairpabelanger: do you understand what went wrong here? http://logs.openstack.org/91/494291/1/check/openstack-doc-build/f60753a/job-output.txt.gz#_2017-08-16_18_41_46_43453518:49
openstackgerritPaul Belanger proposed openstack-infra/zuul feature/zuulv3: Add publish-openstack-python-branch-tarball to post pipeline  https://review.openstack.org/49429618:50
pabelangerjeblair: Hmm, I think that might be related to 494204. it tried 3 times to update cache and failed18:51
pabelangermaybe condition is not correct18:52
jeblairpabelanger: does it only display the output on the final 'retry'?18:52
jeblair(cause i only see it happening once in that log)18:53
pabelangerjeblair: I am not sure, I haven't used retry option much on the apt task18:53
pabelangerlooking at the logs, I think our until condition is not correct18:54
pabelangerso, we likely did run apt-get update 3 times18:54
jeblairpabelanger: more questions -- why is it saying to use the apt module if we *are* using the apt module?  also, why are we using the apt module?  i thought we stopped that because python-apt wasn't installed?18:55
SpamapShm18:55
jeblairaha, we *are* using shell18:55
jeblairthat change's parent is old so the diff is wrong18:56
jeblairhttp://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/configure-mirrors/tasks/mirror.yaml18:56
pabelangerOh, ya, so until condition is not correct now18:56
pabelangerso, we should revert 49420418:57
jeblairpabelanger: we're going to have to force-merge it18:57
pabelangeryes :(18:57
jeblairpabelanger: can we fix it?  so we don't have to merge 2 changes?18:57
pabelangerI think we'll need to register the exit code now, and retry if != 018:58
pabelangerchecking how we do it today in project-config18:58
jeblairpabelanger: you think you can work up a probably-correct version of that fix and we can force-merge it?  or do you want to force-merge the revert, and then write up the fix as a new role we use from base-test to exercise it first?18:59
jeblair(i'm okay with doing the first to save time; we're still mostly pre-production here)19:00
pabelangerjeblair: yes, I think we should force revert and do base-test. This is something we need to take care with in the future too19:00
jeblairokay, i'll do the revert19:00
pabelangerI didn't do a good job remembering this role was used by a trusted playbook19:00
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Revert "Retry updating apt-cache"  https://review.openstack.org/49429819:02
openstackgerritMerged openstack-infra/zuul-jobs master: Revert "Retry updating apt-cache"  https://review.openstack.org/49429819:05
openstackgerritClint 'SpamapS' Byrum proposed openstack-infra/zuul-jobs master: Install build private key too  https://review.openstack.org/49430219:24
pabelangermordred: jeblair: so, I think we can start writing some test playbooks in zuul-jobs (maybe a tests folder) to flex some of our trusted roles. basically, the zuul-executor runs playbooks to set up and run ansible-playbook with connection=local on the node. The test playbook could try to use the role, ensure it works19:38
pabelangerhopefully that makes sense19:39
jeblairpabelanger: that makes sense, however, i question how many roles that approach would be effective with.  many of them would need a nearly complete simulation of the local executor environment and a remote node.19:41
pabelangerjeblair: Ya, it would be complex fast. I guess we could limit it just to roles used by trusted playbooks, untrusted could be tested via depends-on19:43
jeblairi don't think graceful works for zuul executor19:48
jeblairi'm going to hard-stop ze0119:48
SpamapSthere's a lot of wonky process stuff going on19:48
SpamapSwouldn't be surprised if we missed something if it has a TERM handler19:49
jeblairSpamapS: it's simpler than that19:49
jeblair    def graceful(self):19:49
jeblair        # TODOv3: implement19:49
jeblair        pass19:49
SpamapScurious... has anybody worked out a quick way to test playbooks locally?19:49
SpamapSI'm thinking of writing an entry point thing similar to zuul-bwrap but just like, zuul-job-execute that sets up the modules and stuff and runs them in bwrap19:50
SpamapSjeblair: :-|19:50
jeblairSpamapS: i'm unaware of work having been done on that (though i am aware of the desire)19:51
SpamapSlike what I'm thinking is just zuul-job-execute -i {inventory of whatever targets you choose}19:51
pabelangerSpamapS: I've tried using openstack/dox for testing some ansible things. Basically, put tox into docker and did something. protected my host system19:51
SpamapSand it can pass the inventory in19:51
pabelangerzuul-bwrap would be nice too19:51
SpamapSpabelanger: yeah I've got a docker thing going.. but having to hand-code an ansible.cfg that works and thinking we have all that code already19:51
SpamapSpabelanger: zuul-bwrap already exists :)19:52
SpamapSand is half of this19:52
SpamapSjust need the ansible part19:52
SpamapSor.. maybe I'm missing it and zuul-bwrap already works actually19:52
SpamapSjust have to put an inventory in the work_dir and call ansible-playbook explicitly.19:53
pabelangermordred: since you've signed up to do (pre-)python-tarball work on etherpad. Any interest adding https://review.openstack.org/494296/ to start testing current version of publish-openstack-python-branch-tarball19:53
SpamapSI think.. dunno..playing with it now19:54
SpamapSthat should help us iterate on roles and playbooks faster anyway19:54
pabelangerpushing untrusted playbooks up to zuulv3.o.o has been my process. trusted playbooks are a little harder, usually requiring me to use local resources for that19:55
jeblairi'm going to prime git clones of all the repos on all the executors to prepare for the startup test (i'm not interested in the clone time as we can do that beforehand)20:02
openstackgerritDavid Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Fix documentation nits  https://review.openstack.org/49431020:03
openstackgerritDavid Shrewsbury proposed openstack-infra/zuul feature/zuulv3: Fix documentation nits  https://review.openstack.org/49431020:04
clarkbit's also a one time cost, so even if your first startup is slower due to cloning, subsequent ones shouldn't be20:05
jeblairya20:06
mordredSpamapS: I was going to get around to doing that, so thanks! I currently just have an ansible.cfg sitting in the root of my zuul checkout that points all of the plugin dirs to the right plugin places and then I run ansible-playbook with it20:12
mordredSpamapS: but as you point out, that's a bunch of by-hand fiddling20:13
mordredSpamapS: and then I turn on and off zuul action plugin blocking / etc by just commenting/uncommenting lines in that file20:14
mordredSpamapS: of course, what I really want is something that will make me a temp dir in the right structure with repos symlinked in and make an inventory inside of that :)20:15
mordredSpamapS, pabelanger: but once we have such a thing, doing a two-node job that runs ansible on one node with the other node in its inventory to test some of the things from trusted contexts would be nice - even if we need to write a specific job for each base-job we want to test and have a zuul-job-execute command with 10 command line options to get all the things set properly ... I think it would be a win (like,20:17
mordredit doesn't have to be able to actually read the whole zuul.yaml file or anything to be an improvement)20:17
* mordred shuts up and goes back to accomplishing things20:18
pabelangermordred: ya, that would be interesting to do also20:19
mordredpabelanger: should we remove the tarball job from the v2 jobs before landing that change above? otherwise v2 and v3 will be fighting to upload to the same location, yeah?20:29
mordredpabelanger: (unless we already did that and I missed it)20:30
SpamapSYeah the next level up, of having all the zuul* vars set would be cool20:31
SpamapSbut right now I just want to test the playbook with the zuul_ modules and actions20:32
SpamapSit is rather tempting to just make something that shoves a job into the executor btw.20:32
SpamapSAs that would get this done in a rather complete fashion.20:33
pabelangermordred: ya, I thought we did. Let me confirm20:35
pabelangermordred: ya, we just have pre-release and release jobs for zuul still on zuul.o.o20:35
openstackgerritClint 'SpamapS' Byrum proposed openstack-infra/zuul-jobs master: Install build private key too  https://review.openstack.org/49430220:39
SpamapSpabelanger: ^^ I misunderstood the original push for this. I think it makes more sense to have those go into the ansible_ssh_user's .ssh20:39
mordredSpamapS: fwiw, this is what I have locally: http://paste.openstack.org/show/618575/20:40
mordredpabelanger: sweet20:40
SpamapSmordred: yeah, so, Zuul will happily write that for you if we add the right entry point.20:41
SpamapSI don't even think it will be much code20:41
mordredSpamapS: oh - totally - that's just my "I'll just make a file" cop out - I'd love a tool20:41
SpamapSJust mostly looking at what level to enter now.20:42
SpamapSIf we enter too high, we have to have sources configured somehow.20:42
SpamapSIf we go too low, we have to add a bunch of details on the cmdline or we lose assumptions.20:42
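(A rough sketch of the zuul-job-execute entry point SpamapS is proposing — every name and path here is hypothetical, shaped after mordred's hand-written ansible.cfg rather than any existing zuul console script:)

    import argparse
    import os
    import subprocess
    import tempfile

    CFG = ("[defaults]\n"
           "inventory = {inventory}\n"
           "library = {root}/zuul/ansible/library\n"
           "action_plugins = {root}/zuul/ansible/action\n"
           "callback_plugins = {root}/zuul/ansible/callback\n")

    def main():
        parser = argparse.ArgumentParser('zuul-job-execute')
        parser.add_argument('-i', dest='inventory', required=True)
        parser.add_argument('--zuul-root', default='.')
        parser.add_argument('playbook')
        args = parser.parse_args()
        # Write a jobdir-style ansible.cfg pointing at zuul's plugin dirs,
        # then hand off to ansible-playbook.
        cfg_path = os.path.join(tempfile.mkdtemp(), 'ansible.cfg')
        with open(cfg_path, 'w') as f:
            f.write(CFG.format(inventory=args.inventory,
                               root=args.zuul_root))
        subprocess.run(['ansible-playbook', args.playbook],
                       env=dict(os.environ, ANSIBLE_CONFIG=cfg_path),
                       check=True)

    if __name__ == '__main__':
        main()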
pabelangerSpamapS: Hmm, why not just ~/.ssh? We should be connected as ansible_ssh_user right?20:44
pabelangeralso left a few suggestions about ansible syntax20:44
SpamapSoh right duh20:46
*** dkranz has quit IRC20:46
openstackgerritClint 'SpamapS' Byrum proposed openstack-infra/zuul-jobs master: Install build private key too  https://review.openstack.org/49430220:54
SpamapSpabelanger: I forgot that yaml has native octals :-P20:54
jeblair1621 repos 9GB21:00
jeblairokay, i'd like to take zuulv3.o.o offline now to perform the startup tests21:02
jeblairany objection?21:03
pabelangerwfm21:03
jeblairoh, ha, the first startup is going to take a while as we generate 1600 rsa keypairs.21:04
jeblairat the current rate, almost half an hour.21:05
pabelangerha, nice21:06
jeblairokay, trying that again now that puppet is disabled21:18
SpamapSjeblair: haveged running?21:22
SpamapS:)21:22
SpamapSso... known hosts propagation21:23
SpamapSeasiest thing would probably be to call ssh-keyscan21:26
pabelangerya, JJB does this today21:32
pabelangerhttp://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/macros.yaml#n84521:32
SpamapSoh that's not handling known_hosts though21:34
SpamapSnothing seems to be actually21:34
pabelangerOh, hmm. I linked the wrong thing21:36
openstackgerritClint 'SpamapS' Byrum proposed openstack-infra/zuul-jobs master: Install build private key too  https://review.openstack.org/49430221:36
SpamapSoops forgot a fix derp21:36
SpamapSpabelanger: ^^ all your comments addressed now I think21:36
pabelangerhttp://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/ansible-role-jobs.yaml#n3721:36
pabelangerthat is how I deal with it for testing ansible roles today21:36
pabelangerso, should be easy to make that into a role21:37
jeblairpabelanger, SpamapS: are you talking about the add-ssh-key-to-root role?21:37
pabelangeroriginally ya, I commented about that in the review21:38
mordredI mean - we know the known_hosts info for each node, because we get it from nodepool - so we really just need to copy .ssh/known_hosts from the executor to each node - don't need keyscan21:38
pabelangerYa, that too. They should all exist in inventory now21:39
mordredpabelanger, SpamapS, if you have a sec, feel like +Aing https://review.openstack.org/#/c/493250 ?21:40
jeblairhttp://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate.sh#n181  is the current devstack-gate equivalent using keyscan (though i agree, we should just use local inventory).  however, public vs private ip addresses may come into play here.  we'll want both in the known_hosts file.21:40
jeblairoh the executors are running cat jobs!21:41
pabelangerYa, we don't actually have them in inventory directly, but we could just use known_hosts from zuul-executor job dir, that is where we write them21:42
SpamapSjeblair: I may have misunderstood the task entirely. :)21:43
SpamapSI'm installing the build SSH key in the ansible ssh user's ~/.ssh21:44
SpamapSit already gets installed in ~/.ssh/authorized_keys21:44
* SpamapS reads the jjb task again21:44
jeblairSpamapS: oh okay, you mentioned known_hosts so i assumed you were moving on to the next task21:44
SpamapSallow-local-ssh-root is something else right?21:44
jeblairi have no idea what that is :(21:45
SpamapSjeblair: I figured known_hosts was a third thing21:45
mordredSpamapS: allowing each node in the nodeset to ssh to each other21:45
SpamapSjeblair: http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/macros.yaml#n845 <-- allow-local-ssh-root seems to allow you to ssh from whatever job you're running to root21:45
jeblairSpamapS: yeah, but what's the relevance of that to devstack-gate?21:45
SpamapSjeblair: dunno, pabelanger pointed it out21:45
jeblairSpamapS: that looks like a macro that's used by our infra puppet module tests21:46
jeblairso i don't think it has anything to do with the tasks at hand21:46
SpamapSI'll ignore it :)21:46
mordred:)21:46
SpamapSso https://review.openstack.org/494302 <--- just installs the build private key in ~21:46
SpamapSwhich is the first half of "let them ssh to eachother"21:47
mordredyah. oh - we may have left out something21:47
jeblairSpamapS: what's the second half?21:47
SpamapSand now I'm poking at the second half, add known_hosts files for all nodes to all nodes.21:47
mordrednevermind me- the thing I was about to say is wrong21:48
jeblairSpamapS: ah, on the etherpad that was part of the add-ssh-key-to-root role21:48
jeblairSpamapS: it's possible that it should, in fact, be part of the role you're working on instead21:48
jeblairbut that is why i thought you were starting on the root ssh task21:48
SpamapSfor adding ssh key to root, that seems like another thing entirely .. no?21:49
SpamapSalso now that I'm playing with known_hosts .. I am truly curious how we're doing that on all jobs.21:50
jeblairSpamapS: indeed it is (lines 45-49 on etherpad)21:50
SpamapSlike, how does the executor handle it? add w/o prompt?21:50
jeblairSpamapS: they come from nodepool21:50
SpamapSoh nodepool gives 'em to us?21:50
jeblairyep21:50
SpamapSCOOL... can haz in zuul vars?21:50
jeblairwhy?21:50
SpamapSso I don't have to keyscan for them.21:51
SpamapSor scrape from /etc/ssh21:51
jeblairbut they're already in ~/.ssh/known_hosts21:51
SpamapSaderpaderpaderp, so they are21:51
* SpamapS steps back from the 4 tasks he just wrote, cracks knuckles, replaces with copy...21:52
SpamapSwait no.. copy won't work.21:52
* SpamapS will get it21:52
jeblairand it started21:52
jeblairi'll go get timing info from the logs21:52
jeblairokay, one thing i've learned is that i think we need to cache the branch list for every project -- as every configuration update (even speculative ones) needs to know all the project branches, so it's doing git-upload-packs for all projects every time that happens.21:55
mordredjeblair: makes sense21:56
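(Illustrative only — a cache shaped like the one jeblair wants; the connection method name is assumed, not zuul's actual internals:)

    # Ask the remote for a project's branches once, then serve every
    # later (even speculative) configuration update from memory instead
    # of issuing another git-upload-pack.
    _branch_cache = {}

    def get_branches(connection, project):
        if project not in _branch_cache:
            # method name assumed for the sketch
            _branch_cache[project] = connection.getProjectBranches(project)
        return _branch_cache[project]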
SpamapSknown_hosts is tricky.. I wonder if node-to-node comms in the multinode jobs ssh to hostnames or ips.21:59
SpamapSguessing short hostnames21:59
SpamapSbut wondering how those are looked up21:59
* SpamapS would wager /etc/hosts21:59
jeblairSpamapS: by "private" ip address, actually; which is something we may need to plumb through to the known_hosts file; and we may also need to figure out how to expose that to the job22:01
jeblairit took 1.5 minutes for zuul to submit all the cat jobs, and a further 7 minutes for them to complete (with 4 executors), for a total startup time of 8.5 minutes.22:02
pabelangerjeblair: SpamapS: couldn't we use the fact cache for 'private' IPs? They should be listed first time we run a play22:03
jeblairif we have 8 mergers and 8 executors, it should start up in less than 2.5 minutes.22:03
pabelangerjeblair: each on their own server?22:04
jeblairthat seems entirely tractable for a system that rarely needs a full restart22:04
jeblairpabelanger: that's how we do it now22:04
jeblairi'm going to restart zuulv3 with the normal configuration now22:06
pabelangerjeblair: sorry, just confirming. we'd have 8 ze01-ze08 and zm01-zm08? Today ze01 does both operations right?22:07
SpamapSpabelanger: yeah I was just looking at that.22:08
jeblairpabelanger: i think so.  at least, that's what i'd like to have for the ptg.  we can wind some mergers down if we don't need them.22:08
SpamapSfor that we might even want to just ssh-keyscan all the ips we know about for every host.22:08
SpamapSfrom every host..22:08
SpamapSthen we'd have all the reachable ones.22:09
SpamapSbut that could get messy22:09
pabelangerjeblair: okay, thanks. Also means we might need to update puppet-zuul for zuulv3 mergers22:09
*** jkilpatr has quit IRC22:09
SpamapSalso ready from nodepool doesn't necessarily mean they're all 100% network plumbed.22:09
SpamapSjust means the public ips can be ssh'd to22:09
SpamapSbut I will look closer at what multinode is already doing for these problems22:10
mordredSpamapS: we do not assume private network connectivity between nodes22:11
mordredSpamapS: the fun bits (the network-overlay role) will use public or private as makes sense for the set of nodes22:12
mordredSpamapS: because we can't count on the clouds to provide us with an environment in which multi-node openstack can actually run - thus the manual overlay \o/22:13
pabelangerjust thinking, it wouldn't be too hard to add nodepool.ssh_known_hosts variables into the inventory for each host.  Maybe that is something users want to access22:13
pabelangeransible facts would gather them too: ansible_ssh_host_key_dsa_public for example: http://logs.openstack.org/96/494296/1/check/tox-pep8/2ec42f9/zuul-info/host-info.ubuntu-xenial.yaml22:15
SpamapSif they're already in hostvars that is actually a lot easier22:23
SpamapSbut what's more complicated still is making sure the right _host_ argument is there22:23
SpamapSif it's the IPs of the manual overlay.........22:24
SpamapSthat seems like a "deep within the bowels of d-g" problem22:24
mordredSpamapS: yah - I think you can ignore that case for now22:24
SpamapSmaybe I shouldn't be doing these things in base jobs22:24
mordredbecaues this is a base-job thing so there is no network-overlay yet22:24
mordredalso, I don't think we expect ssh traffic over that overlay - people can use the normal hostname/public-ip for that22:25
SpamapSyeah, adding the SSH key is easy22:25
mordredthe overlay is just for neutron to have a network to manage22:25
SpamapSand we'll have the host keys in facts... so one can generate known_hosts after overlay setup22:25
SpamapSoh I thought there was host-to-host SSH to enable?22:25
mordredif people want to ssh over the overlay they can solve that problem themselves22:25
mordredyes. but not over the overlay22:25
mordredthe overlay is just for neutron in d-g22:25
mordredit's not otherwise useful22:25
SpamapSAh ok, so it's just via the already-existing public IP or hostname?22:26
mordredyes22:26
SpamapSok so I can just dig through hostvars for ansible_ethX's22:26
mordredso that if people want to, for instance, run a zuul command on node one that ansibles to node 2 as part of a functional test of a zuul playbook22:26
SpamapSand maybe also resolve the public hostname to an IP for good measure22:26
mordredI seriously think you can literally just copy .ssh/known_hosts22:26
SpamapSthat's what I have now :)22:26
mordred\o/22:26
mordredI think maybe writing an /etc/hosts is maybe a thing that's good too?22:27
jeblairthat's a good start, but i think we'll need to add private ip to known_hosts.22:27
mordredI can't remember if we do that currently22:27
clarkbmordred: we ssh over the overlay, but you are correct that we don't care about those hostkeys. The sshing is from tempest to the instances and instance to instance to test networking and they are responsible for sorting that out themselves22:28
openstackgerritClint 'SpamapS' Byrum proposed openstack-infra/zuul-jobs master: Add known_hosts from executor to all nodes  https://review.openstack.org/49433322:28
SpamapS^ the dumb version that just makes sure all the local lines are in the remote file.22:28
mordredclarkb: right- we don't ssh between nodes over the overlay for normal node-to-node ssh traffic22:28
SpamapSand then yeah, we can also add all the interfaces ipv4 and ipv6's there too22:29
mordredclarkb, SpamapS, jeblair: that said - the thing SpamapS was getting at earlier - each host does have the hostkey in the ansible hostvars - potentially a role that's "add-hostkey" so that if someone does want to ssh over a different network it's easy for them to add that in a playbook or something?22:30
mordredclarkb: do we create an /etc/hosts today?22:30
clarkbmordred: on multinode jobs we do yes22:31
clarkbmordred: because libvirt live migration uses the hostnames that nova provides so they have to resolve22:31
SpamapSI mean, whatever adds those host records should probably also add the known_hosts entries.22:31
SpamapSthat would maybe be a nice generic role22:32
SpamapSssh-safely-to-from-all-nodes22:32
SpamapSor something shorter ;)22:32
SpamapSbut it's hard to close the loop on what hostname will be used22:33
clarkbthe hostname is set before ansible ever shows up22:34
clarkbso just use that?22:34
mordredyah. each node has hostname set22:34
mordredin fact, that's how we do it today22:34
clarkbya22:34
mordredSpamapS: http://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate.sh#n18422:35
mordredSpamapS: we obviously don't need the keyscan cause we have keys now22:35
mordredoh - we should probably work out something for /etc/nodepool contents22:36
mordredclarkb: so we do known_hosts by ip and also put that ip into /etc/hosts with the hostname from each host22:39
clarkbya that should be for the normal non overlay IPs though22:40
mordredyah22:40
clarkbwe do however use the cloud internal non NATed network if present though22:40
clarkbbecause running some of this through NAT causes failures and debugging that has been low priority because the other networks work22:40
mordredok- I think we have another thing to add to the list22:42
mordredwe'll need a role to write out an /etc/nodepool dir with primary and secondary info for the existing multinode jobs - and probably want a nodeset that has two nodes, one called "primary" and one called "secondary" :)22:44
mordredbut I think we're going to also need to expose the public/private info from nodepool into the inventory - since the logic of "node.private_ip == private_ip if private_ip else public_ip" is nodepool logic22:45
*** jkilpatr has joined #zuul22:48
jeblairmordred: we'll want /etc/nodepool for non-devstack jobs which auto-convert, but we shouldn't need it for our devstack job.22:49
jeblairmordred: (ie, i think we can put it in the category of 'legacy role', like zuul vars)22:50
mordredjeblair: I think we need it for devstack jobs that auto-convert to the legacy ...22:50
mordredyah22:50
mordredit's just it's an interface we've told people about, so god only knows how it's being used in jobs22:50
jeblairright.  just saying we don't need it for the devstack job we're building22:50
mordredoh - totally22:50
jeblairwhereas the private ip thing we will need22:50
mordredI'm more making a note that we need an answer for it before ptg22:51
mordredjeblair: yup22:51
mordredjeblair: also - v3 should be restarted/running normal, yeah?22:51
jeblairmordred: yep22:52
mordredjeblair: I hit +A again on the zuul-jobs docs patch and it doesn't seem to be running22:52
jeblairmordred: number?22:52
mordredjeblair: https://review.openstack.org/#/c/493250/22:52
jeblair2017-08-16 22:23:01,027 DEBUG zuul.DependentPipelineManager: Change <Change 0x7fe4c8051e10 493250,1> does not match pipeline requirement <GerritRefFilter connection_name: gerrit open: True current-patchset: True22:52
jeblairrequired-approvals: [{'username': re.compile('zuul'), 'Verified': [1, 2]}, {'Workflow': 1}]>22:52
jeblairmordred: got caught in the no-mans-land between check and gate. needs a recheck.22:53
mordrednod22:53
jeblairdone22:53
mordredtwice :)22:53
openstackgerritMonty Taylor proposed openstack-infra/zuul-jobs master: Document and update fileserver roles  https://review.openstack.org/49429123:01
mordredjeblair: that ^^ gets your review comments I think23:05
mordredjeblair: can I nudge you for a +A on https://review.openstack.org/#/c/494296 ?23:09
SpamapShrm23:16
jeblairmordred: +2 and done23:17
mordredjeblair: thanks!23:21
openstackgerritMerged openstack-infra/zuul feature/zuulv3: Add publish-openstack-python-branch-tarball to post pipeline  https://review.openstack.org/49429623:33
pabelangerneat! post job running right away23:34
pabelangerha23:38
pabelanger2017-08-16 23:37:16.873474 | ubuntu-xenial | mv: cannot move 'zuul-2.5.2.dev1640.tar.gz' to 'zuul-feature/zuulv3.tar.gz': No such file or directory23:38
pabelangerwill fix that tomorrow23:38
openstackgerritMerged openstack-infra/zuul-jobs master: Use new sphinx roles in docs  https://review.openstack.org/49325023:40
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Allow requesting secrets by a different name  https://review.openstack.org/49434323:43
mordredjeblair: ^^ that turned out to not be very hard23:43
mordredpabelanger: whoops! but nice that it ran!23:44
jeblairmordred: of course because of the duplication due to decryption.  neat.  :)23:45
mordredjeblair: exactly! turned out to work rather nicely :)23:45
SpamapSoy.. nested loops digging around hostvars is quite a weird thing to get done in ansible23:48
mordredSpamapS: indeed. fwiw - don't be afraid of tossing in a python module in the role23:49
mordredSpamapS: no need to break your head too much on the jinja23:49
SpamapSYeah I draw the line at 2 filters23:50
mordredSpamapS: I've got a simple one in zuul-jobs ./roles/validate-host/library/zuul_debug_info.py if you want to cargo-cult the setup bits :)23:51
SpamapSand I think that's what I'll do, since this is a pretty dumb-easy bit for python23:51
pabelangermordred: ^should be our fix23:51
pabelangererr23:51
mordredyah. python is pretty amazing for that23:51
pabelangerremote:   https://review.openstack.org/494344 Replace slash for tarball rename23:51
SpamapSSince it's like "for each ip of each interface of each ssh key"23:51
mordredpabelanger: +A (single-core/bugfix)23:52
mordredSpamapS: yah. I'm sure you can do that in jinja, but you won't be sane when you're done23:52
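(A sketch of the kind of role-local module mordred suggests cargo-culting from zuul_debug_info.py; the module name and its parameters are hypothetical:)

    #!/usr/bin/python
    # library/build_known_hosts.py: do the "each ip of each interface of
    # each ssh key" loop in plain Python instead of nested jinja filters.
    from ansible.module_utils.basic import AnsibleModule

    def main():
        module = AnsibleModule(argument_spec=dict(
            # Each entry: {'ips': [...], 'keys': ['ssh-rsa AAAA...', ...]}
            hosts=dict(type='list', required=True),
        ))
        lines = []
        for host in module.params['hosts']:
            for ip in host['ips']:
                for key in host['keys']:
                    lines.append('%s %s' % (ip, key))
        module.exit_json(changed=False, known_hosts='\n'.join(lines))

    if __name__ == '__main__':
        main()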
openstackgerritMonty Taylor proposed openstack-infra/zuul-jobs master: Document and update fileserver roles  https://review.openstack.org/49429123:53
openstackgerritMonty Taylor proposed openstack-infra/zuul feature/zuulv3: Allow requesting secrets by a different name  https://review.openstack.org/49434323:55
