Thursday, 2015-09-03

*** rromans has quit IRC00:14
*** charz has quit IRC00:14
*** rromans has joined #openstack-ansible00:15
*** tobasco_ has quit IRC00:15
*** tobasco has joined #openstack-ansible00:16
*** charz has joined #openstack-ansible00:16
*** galstrom_zzz has quit IRC00:18
*** xar- has quit IRC00:19
*** galstrom_zzz has joined #openstack-ansible00:19
*** xar- has joined #openstack-ansible00:26
*** richoid has quit IRC00:28
*** shoutm_ has joined #openstack-ansible00:56
*** shoutm has quit IRC00:56
*** jwagner is now known as jwagner_away01:21
*** sdake has quit IRC01:36
*** mcarden has quit IRC01:40
*** mcarden has joined #openstack-ansible01:40
*** ybabenko has joined #openstack-ansible01:57
*** ybabenko has quit IRC02:01
*** sdake has joined #openstack-ansible02:28
*** sdake_ has joined #openstack-ansible02:31
*** k_stev has joined #openstack-ansible02:34
*** k_stev has quit IRC02:34
*** sdake has quit IRC02:34
*** shoutm_ has quit IRC03:20
*** eglute has quit IRC03:27
*** palendae has quit IRC03:27
*** bgmccollum has quit IRC03:27
*** dolphm has quit IRC03:29
*** sigmavirus24 has quit IRC03:30
*** d34dh0r53 has quit IRC03:30
*** jroll has quit IRC03:30
*** d34dh0r53 has joined #openstack-ansible03:31
*** dolphm has joined #openstack-ansible03:32
*** eglute has joined #openstack-ansible03:32
*** bgmccollum has joined #openstack-ansible03:32
*** sigmavirus24 has joined #openstack-ansible03:33
*** jroll has joined #openstack-ansible03:33
*** palendae has joined #openstack-ansible03:33
*** sigmavirus24 is now known as sigmavirus24_awa03:34
*** shoutm has joined #openstack-ansible03:41
*** sdake_ is now known as sdake03:42
*** stevemar has joined #openstack-ansible04:23
*** tlian has quit IRC04:24
*** shoutm has quit IRC04:38
*** stevemar has quit IRC04:43
*** shoutm has joined #openstack-ansible04:47
*** metral is now known as metral_zzz05:11
*** metral_zzz is now known as metral05:11
*** sdake_ has joined #openstack-ansible05:13
*** shausy has joined #openstack-ansible05:16
*** sdake has quit IRC05:17
*** shausy has quit IRC05:27
*** shoutm has quit IRC05:36
*** shoutm has joined #openstack-ansible05:37
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Update how neutron migrations are handled  https://review.openstack.org/21558405:45
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Ensure rsync restarts fully during swift setup  https://review.openstack.org/21734105:45
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Allow cinder-backup to use ceph  https://review.openstack.org/20953705:46
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Adds the ability to provide user certificates to HAProxy  https://review.openstack.org/21552505:46
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Add sorting_method to swift proxy config as needed  https://review.openstack.org/20881705:46
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Add profiling for Ansible tasks  https://review.openstack.org/21684905:47
*** sdake has joined #openstack-ansible05:48
*** sdake_ has quit IRC05:52
*** sdake_ has joined #openstack-ansible05:54
*** sdake has quit IRC05:57
*** shoutm has quit IRC05:58
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: [WIP] Set the Ubuntu mirror used based on the environment  https://review.openstack.org/21861105:58
*** shoutm has joined #openstack-ansible05:58
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Allow Horizon setup with external SSL termination  https://review.openstack.org/21464706:18
*** shoutm_ has joined #openstack-ansible06:25
*** shoutm has quit IRC06:26
*** javeriak has joined #openstack-ansible06:36
*** javeriak has quit IRC06:41
*** javeriak has joined #openstack-ansible06:42
*** ybabenko has joined #openstack-ansible06:51
*** ybabenko has quit IRC06:56
*** ybabenko has joined #openstack-ansible06:56
openstackgerritMerged stackforge/os-ansible-deployment: Removed rpc-repo upstream pip deps  https://review.openstack.org/21618107:06
openstackgerritMerged stackforge/os-ansible-deployment: Update tempest configuration  https://review.openstack.org/21010707:14
evrardjpmgariepy: Please tell me if you have issues with keepalived07:20
*** gparaskevas has joined #openstack-ansible07:23
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Allow cinder-backup to use ceph  https://review.openstack.org/20953707:37
*** javeriak has quit IRC07:57
*** Ti-mo has joined #openstack-ansible08:22
*** javeriak has joined #openstack-ansible08:25
openstackgerritMerged stackforge/os-ansible-deployment: Enable tempest testing of ceilometer  https://review.openstack.org/21760008:53
*** d0ugal has left #openstack-ansible09:01
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: [WIP] Set client versions deployed to use global requirements  https://review.openstack.org/22004909:28
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: [WIP] Set client versions deployed to use global requirements  https://review.openstack.org/22005009:34
*** javeriak_ has joined #openstack-ansible09:38
*** javeriak has quit IRC09:39
odyssey4meevrardjp FYI, considering you've been looking into this lately: https://bugs.launchpad.net/openstack-ansible/trunk/+bug/142479709:56
openstackLaunchpad bug 1424797 in openstack-ansible trunk "Implement SSL for spice consoles" [Wishlist,Confirmed]09:56
odyssey4meyou may wish to assign it to yourself09:56
evrardjpthis isn't what I've done myself09:57
*** mhayden has quit IRC09:58
odyssey4meoh ok - I thought that you were :p09:58
evrardjpI've used a loadbalancer with SSL termination (here, haproxy)09:58
evrardjpand changed upstream code09:58
evrardjpof novnc09:58
evrardjpso there isn't something to do09:58
evrardjpin OSAD09:58
evrardjpso I'm not really sure if this is required or working better than my workaround09:59
evrardjpin any case, I don't think we should fix upstream code in osad10:00
evrardjpor implement workarounds10:00
evrardjpbut I could share10:00
odyssey4meperhaps share your workaround in the bug for any that wish to use it10:00
odyssey4mebut yes, if upstream doesn't support it then we can't do it until upstream is fixed10:01
evrardjpodyssey4me: you've found a solution for hpcloud?10:03
evrardjplet me rephrase that10:05
odyssey4meevrardjp it's an ongoing process - I'm just testing an optimisation right now which I hope helps10:05
evrardjpare you sure this is complete? https://review.openstack.org/#/c/218611/10:05
odyssey4mehpcloud-b4 is still broken for us permanently10:05
odyssey4meevrardjp not sure just yet - the jury's still out on whether using the hpcloud az based ubuntu repositories is better than using ubuntu archive10:06
evrardjpI have variables in my user_variables: lxc_container_template_(main|security)_apt_repo10:06
odyssey4meubuntu archive worked quite well for the previous set of tests10:06
evrardjpit could be useful to you10:06
odyssey4meyeah, the bootstrap-aio script already adds those variables into it10:06
evrardjpok10:07
evrardjpI missed that part sorry10:07
evrardjpjust trying to help ;)10:07
odyssey4mehttps://github.com/stackforge/os-ansible-deployment/blob/master/scripts/bootstrap-aio.sh#L357-L35910:07
odyssey4me:) no worries - it's always good to have extra eyes10:08
evrardjpthat's my opinion10:08
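For reference, a minimal sketch of the user_variables override evrardjp mentions above; the mirror URL is an example only, and bootstrap-aio.sh sets equivalent values automatically for AIO builds (see the bootstrap-aio.sh link above):

    # /etc/openstack_deploy/user_variables.yml -- example mirror, not a real endpoint
    lxc_container_template_main_apt_repo: "http://mirror.example.com/ubuntu"
    lxc_container_template_security_apt_repo: "http://mirror.example.com/ubuntu"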
*** ybabenko has quit IRC10:23
openstackgerritMerged stackforge/os-ansible-deployment: Allow cinder-backup to use ceph  https://review.openstack.org/20953710:24
openstackgerritMatt Thompson proposed stackforge/os-ansible-deployment: Allow cinder-backup to use ceph  https://review.openstack.org/22007410:25
*** mhayden has joined #openstack-ansible10:27
*** javeriak_ has quit IRC10:34
odyssey4meevrardjp perhaps now's a good time to backport https://review.openstack.org/215579 ?10:38
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Switch Nova/Tempest to use/test Cinder API v2  https://review.openstack.org/22008110:38
*** javeriak has joined #openstack-ansible10:39
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Implement tox.ini config for bashate and pep8 tests  https://review.openstack.org/22008210:39
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Disable python buffering for gate checks  https://review.openstack.org/22008310:39
*** javeriak has quit IRC10:51
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: [WIP] Set client versions deployed to use global requirements  https://review.openstack.org/22005010:52
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: [WIP] Set client versions deployed to use global requirements  https://review.openstack.org/22004910:53
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Set the Ubuntu mirror used based on the environment  https://review.openstack.org/21861110:54
*** gparaskevas has quit IRC11:08
evrardjpodyssey4me: cool, merged (:11:13
openstackgerritJean-Philippe Evrard proposed stackforge/os-ansible-deployment: Fixing haproxy-playbook fails when installing on multiple hosts  https://review.openstack.org/22008911:14
openstackgerritMerged stackforge/os-ansible-deployment: Updated juno to include fix for CVE-2015-3241 - 26 Aug 2015  https://review.openstack.org/21709811:15
*** shoutm has joined #openstack-ansible11:17
*** shoutm_ has quit IRC11:18
*** gparaskevas has joined #openstack-ansible11:21
openstackgerritMerged stackforge/os-ansible-deployment: Remove hardcoded config drive enforcement  https://review.openstack.org/21848011:23
*** ybabenko has joined #openstack-ansible11:32
*** ybabenko has quit IRC11:32
mgariepygood morning !12:02
mgariepyevrardjp, I don't have issues with keepalived, other than I think it needs different priorities when deploying on 3 hosts.12:03
evrardjpwhat do you mean mgariepy?12:05
evrardjp2 backups with different priorities?12:05
mgariepyyep12:05
evrardjpit's doable12:05
mgariepyi override with hosts_var12:05
evrardjpyou can define it in conf.d/keepalived.yml12:06
mgariepyhttp://paste.ubuntu.com/12262483/12:06
evrardjpyup, it works the same way12:06
evrardjpand it doesn't take the change into account?12:06
mgariepyyeah it works.12:07
evrardjpoh ok12:07
evrardjpthen it's fine :)12:07
evrardjpI'll remove the WIP tag on it then12:07
openstackgerritJean-Philippe Evrard proposed stackforge/os-ansible-deployment: Implementation of keepalived for haproxy  https://review.openstack.org/21881812:08
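For reference, a minimal sketch of the per-host priority override mgariepy describes; the variable name and file paths are illustrative only, since the real variable is defined by the keepalived implementation in the review above and the original paste is no longer available:

    # host_vars/infra01.yml  (hypothetical variable name -- check the keepalived role defaults)
    keepalived_priority: 100   # elected MASTER
    # host_vars/infra02.yml
    keepalived_priority: 90    # first BACKUP
    # host_vars/infra03.yml
    keepalived_priority: 80    # second BACKUP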
mhaydenmorning12:20
*** woodard has joined #openstack-ansible12:27
odyssey4meo/ mhayden i see you had some fun in the sewers of linux/lxc networking yesterday :p12:28
*** cfarquhar has joined #openstack-ansible12:28
*** cfarquhar has quit IRC12:28
*** cfarquhar has joined #openstack-ansible12:28
odyssey4mehughsaunders how do you feel about https://review.openstack.org/220074 ?12:29
hughsaundersodyssey4me: warm and fuzzy12:29
odyssey4meandymccr I think it may be time to review https://review.openstack.org/21734112:29
mhaydenodyssey4me: yes, had to take a long shower last night12:29
andymccrodyssey4me: you already have 2+2's ?12:30
mhaydenodyssey4me: i have a lxc patch that seems to be working so far, but needs more testing12:30
mhaydenmight propose it as a PR to the upstream shortly to get them to gander12:30
odyssey4meandymccr doh, wrong one: https://review.openstack.org/20881712:30
odyssey4memhayden nice :)12:31
*** woodard has quit IRC12:39
*** woodard has joined #openstack-ansible12:43
*** pradk has quit IRC12:45
*** shoutm_ has joined #openstack-ansible12:47
*** shoutm has quit IRC12:48
evrardjpnice indeed :)13:05
*** tlian has joined #openstack-ansible13:06
evrardjpmgariepy: just a quick question about your usage of keepalived: you're using it on bare metal, right? because I've seen lots of issues in the past with running this kind of software virtualized/containerized13:07
mgariepyevrardjp, yep indeed on metal.13:13
evrardjpok perfect13:13
cloudnullmorning13:20
evrardjpo/ cloudnull13:28
mhaydenodyssey4me: the PR is in --> https://github.com/lxc/lxc/pull/64913:30
odyssey4memhayden sweet, subscribed :)13:30
cloudnullnice mhayden!13:31
mhaydenforgot semicolons -- had to rebase a quick fix before i really embarrassed myself13:33
mhaydeni still wish i could figure out where the actual veth teardown was going wrong13:35
mhaydenthat would take some kernel digging13:35
mhaydenwould need additional coffee for that13:35
cloudnulltypie typie ;)13:35
mhaydenthere has to be some kind of kernel queue for netlink where devices are removed13:36
matttmhayden: didn't you review a linux kernel hacking book recently?  :P13:36
mhaydenmattt: already checked that book -- no netlink mentions :)13:36
odyssey4mecloudnull and others - please don't +w any patches until https://review.openstack.org/220074 is merged :)13:38
odyssey4meif any patch ahead of it fails, it restarts the build process - so things take awfully long13:38
cloudnullonly for kilo it seems13:39
odyssey4mecloudnull yup, but unfortunately the zuul chain doesn't seem to differentiate13:40
odyssey4meit queues all patches equally, then resets the queue if one fails in the gate pipeline13:40
cloudnullso you're saying a failure in master is causing kilo to re-gate?13:40
odyssey4meyup13:40
cloudnullis infra fixing that ?13:41
cloudnullbecause that seems wrong13:41
odyssey4meI haven't spoken to anyone about it yet - but yes, it seems to me like a bug13:41
odyssey4meI only noticed it yesterday when our gate pipeline was very, very long - and got reset many, many times.13:42
odyssey4mewe had some patches in there for over 4 hours when I stopped watching.13:42
*** Bjoern_ has joined #openstack-ansible13:42
svgSeems Packt Publishing found me again for a round. No thanks :)13:52
*** k_stev has joined #openstack-ansible13:53
*** pradk has joined #openstack-ansible13:54
cloudnullsvg the author !13:54
*** jroll has quit IRC13:54
*** jroll has joined #openstack-ansible13:54
*** woodard has quit IRC13:55
svgcloudnull: alas no, only reviewer credits! in the book! on the website!13:55
*** KLevenstein has joined #openstack-ansible13:57
*** woodard has joined #openstack-ansible13:57
*** galstrom_zzz is now known as galstrom13:58
Bjoern_what's going on with Packt? They bother me too13:59
*** sigmavirus24_awa is now known as sigmavirus2413:59
*** Bjoern_ is now known as BjoernT13:59
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Updated juno for new dev work - 3 Sep 2015  https://review.openstack.org/22015114:02
*** Mudpuppy has joined #openstack-ansible14:03
*** gparaskevas has quit IRC14:03
*** spotz_zzz is now known as spotz14:05
palendaeBjoernT: I think they're nagging svg about an ansible book review14:11
*** woodard has quit IRC14:12
svgyeah, I already told them last year to not bother me with that. They're not very professional.14:12
palendaeNope14:12
palendaeThat's kind of their MO14:12
*** shoutm_ has quit IRC14:18
*** woodard has joined #openstack-ansible14:21
*** galstrom is now known as galstrom_zzz14:22
*** cloudtrainme has joined #openstack-ansible14:23
*** jmckind has joined #openstack-ansible14:27
arbrandes1Hey guys.  So, I'm trying to use Ceph with Cinder.  On openstack_user_config.yml, how can I define "cinder_backends" only once for all "storage_hosts"?14:31
openstackgerritMerged stackforge/os-ansible-deployment: Add sorting_method to swift proxy config as needed  https://review.openstack.org/20881714:32
openstackgerritMerged stackforge/os-ansible-deployment: Allow cinder-backup to use ceph  https://review.openstack.org/22007414:32
arbrandes1Yay, I was waiting for that one! ^^^14:33
*** devlaps has quit IRC14:33
*** arbrandes1 is now known as arbrandes14:33
odyssey4meright, time to tag 11.2.0 :)14:34
arbrandesWoohooo! \o/ \o/ \o/14:34
odyssey4mearbrandes I might be wrong, but if you're using ceph only, then you should only have one cinder-volume and one cinder-scheduler container (change the cinder-volume to is_metal: false)14:39
arbrandesodyssey4me: no redundancy for cinder-volume?14:40
*** k_stev1 has joined #openstack-ansible14:42
*** k_stev has quit IRC14:42
arbrandes * [new tag]         11.2.0     -> 11.2.014:43
arbrandes\o/ \o/ \o/ \o/ \o/ \o/ \o/14:43
odyssey4mearbrandes well, technically cinder-volume is not redundant in cinder anyway... but with ceph, in most cases cinder-volume only facilitates the initial connection to the storage14:44
odyssey4mefrom then on it's direct14:44
odyssey4meagain, I could be wrong - but that's my understanding14:44
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: [WIP] Compartmentalizing RabbitMQ  https://review.openstack.org/20282214:44
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Updated MariaDB to the new release version (10.0)  https://review.openstack.org/17825914:44
arbrandesodyssey4me: that's true - I'm just concerned about redundancy, in the sense that if the storage_host running cinder-volume goes down, I'd like to have another option running.14:45
odyssey4mearbrandes if you're only using ceph then I think your storage hosts should all be the same hosts as your controllers14:46
odyssey4meandymccr hughsaunders git-harry is that correct?14:46
arbrandesodyssey4me: yup, that's what I have configured.  I may be misunderstanding exactly HOW to configure this, though.14:46
andymccrarbrandes: if you are using cinder-volumes with only a ceph backend (and not an lvm backend) then you can put it in containers and then run it on the infra hosts14:47
arbrandesandymccr: yeah, that's my intention.14:48
arbrandesandymccr: my doubt is regarding how to best configure this in openstack_user_config.yml.14:48
andymccrthe only change to be aware of is in your /etc/openstack_deploy/env.d/cinder.yml you will need to change the cinder-volumes container's "is_metal" setting to false14:49
matttarbrandes: can you specify that in user_variables.yml ?14:49
matttarbrandes: but if you do ever want to add another backend then you'll be SOL14:49
arbrandesandymccr: I have three hosts (alice, bob, and charlie) that will handle infra, including cinder-volume.  I set "storage-infra_hosts" to those three.  Next, from the examples, it seems I have to set "storage_hosts" as well, where I'm configuring the cinder_backends property.14:49
arbrandesmattt: no other backend intended.14:50
arbrandesandymccr: ok, thanks, I'll take care with is_metal = false. :)14:50
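For reference, a sketch of the env.d override andymccr describes; the key names follow the kilo-era container skeleton, so check your release's env.d/cinder.yml for the exact structure:

    # /etc/openstack_deploy/env.d/cinder.yml
    container_skel:
      cinder_volumes_container:
        properties:
          is_metal: false   # run cinder-volume in a container rather than on metal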
matttarbrandes: my personal opinion is to just duplicate it for each storage host14:51
matttarbrandes: it gives you more flexibility down the line14:51
arbrandesmattt: Alright, I can live with that.  I was just wondering if there was a cleaner way.  user_variables might be the place, if the playbooks support it.  If they don't, it's not a biggie.14:52
matttarbrandes: sec14:52
matttarbrandes: just chatting to someone about this right now14:52
*** javeriak has joined #openstack-ansible14:52
openstackgerritMerged stackforge/os-ansible-deployment: Ensure rsync restarts fully during swift setup  https://review.openstack.org/21734114:52
arbrandesmattt: awesome, thanks14:53
matttarbrandes: chatting to git-harry about this, ideally you should only run one cinder-volume for each backend14:53
matttarbrandes: if that cinder-volume service goes down, you'll just have issues w/ scheduling i believe14:54
arbrandesmattt: ok.  Any trouble if I have two, though?  I know OpenStack is fine about this - just wondering if the playbooks are too.14:54
matttarbrandes: he actually said that the limitation is from an openstack perspective, not os-ansible-deployment14:55
matttarbrandes: os-ansible-deployment will let you do it as far as i can tell14:55
odyssey4mearbrandes user_variables have the highest precedence14:56
arbrandesmattt: well, I know of a couple of clusters that, in switching over to Ceph-only, stuck with running a cinder-volume instance on *every* compute node.  OpenStack might not be using all those, but I know it's not an issue in and of itself.14:56
palendaeA request - please do not take people's in-flight reviews without at least checking with them first.14:56
matttpalendae: agreed14:56
arbrandesodyssey4me: cool, thank you14:57
*** jbweber has joined #openstack-ansible14:57
d34dh0r53odyssey4me: cloudnull: any reason https://review.openstack.org/#/c/219105/5 wasn't +W?14:57
odyssey4med34dh0r53 to get more reviewers ;)14:58
matttarbrandes: yeah i'm not entirely sure about that part, but osad should allow you to do it without an issue14:58
palendaeBased on what meetings I have scheduled today I'll try to hide and get another review on that14:58
matttarbrandes: the only thing to note is if you set a blanket override in user_variables.yml, you'll never be able to do a per-node config in openstack_user_config.yml as the user_variables.yml file will always take precedence14:59
matttarbrandes: i got burned w/ this when having a blanket drive configuration for ceph in user_variables.yml, and then wanted a single node to have a custom config15:00
odyssey4memattt hughsaunders andymccr d34dh0r53 cloudnull palendae juno sha bump ready for review: https://review.openstack.org/22015115:00
mhaydenthanks for the w+1 on the named-veths spec, everybody -- that's my first ever openstack spec! :)15:01
openstackgerritMerged stackforge/os-ansible-deployment-specs: Adding spec for named veth interfaces  https://review.openstack.org/21910515:01
palendaeArgh! I had a comment on that15:01
palendaeShould mention that doing this will cause restarts to fail should there be more dangling veths15:01
palendaecloudnull: ^15:01
mhaydenpalendae: want me to go and revise the spec right quick?15:01
mhaydeni can submit a follow-on review15:01
odyssey4mepalendae you can still comment on it15:01
palendaecloudnull: Named veths will *not* re-attach when the container restarts. It'll prevent it from restarting at all15:02
*** jwagner_away is now known as jwagner15:02
palendaeodyssey4me: If gerritt will let me sign in, yeah :)15:02
arbrandesmattt: yeah, that's sound advice.  However, I doubt I'll ever have to change the configuration of an individual instance of cinder-volume.  In any case, how would I set container_vars for all storage_hosts in user_variables.yml?15:02
cloudnullpalendae: maybe I misunderstood the other day, but mhayden you weren't able to reproduce the issue with the named veths, right?15:03
matttarbrandes: i'd imagine you can just set cinder_backends: in user_variables.yml, but untested by me :P15:03
palendaecloudnull: No, he wasn't - the issue is different. It hard-fails, the container just won't start15:03
mhaydencloudnull: if the box was under heavy contention, i could reproduce :|15:03
arbrandesmattt: ok, I'll try that and come back with results. :)15:03
mhaydenwaiting on osad to build out on a new 8GB server so i can test the LXC fix15:03
matttarbrandes: but obviously those vars will not be limited to the storage nodes, however i don't think it'd be a problem other nodes having those vars15:04
arbrandesmattt: yeah, I figured.15:04
cloudnullIMO a hard fail is better than system overload due to a dangling veth.15:04
palendaecloudnull: Right, and I agree. I just want to be very clear that this is what will happen15:04
matttarbrandes: but honestly i'd just bung it all in openstack_user_config.yml, more flexible that way15:04
palendaeNot that it'll fix it and it's all better15:04
mhaydencloudnull: there's also an opportunity to force a cleanup on container start within the LXC code15:05
mhaydenbut that's lower priority for me at the moment15:05
mhaydenthere's already some network cleanup code in lxc-start that gets used when the container start explodes for other reasons15:05
palendaeYeah, and who knows if that'll get into the ubuntu LTS packages15:05
arbrandesmattt: that's the first thing I'll try, of course.  Just to see it working.  Next, I'll try consolidating it.15:05
matttarbrandes: cool, let us know how you get on15:07
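For reference, a sketch of the per-host backend definition being discussed, duplicated under each storage host as mattt suggests; the addresses are examples, the rbd option names follow the upstream cinder RBD driver, and cinder_ceph_client_uuid is assumed to be defined by the ceph client setup:

    # /etc/openstack_deploy/openstack_user_config.yml
    storage_hosts:
      alice:
        ip: 172.29.236.11          # example address
        container_vars:
          cinder_backends:
            limit_container_types: cinder_volume
            rbd:
              volume_driver: cinder.volume.drivers.rbd.RBDDriver
              volume_backend_name: rbd
              rbd_pool: volumes
              rbd_ceph_conf: /etc/ceph/ceph.conf
              rbd_user: cinder
              rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
      # repeat the same block for bob and charlie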
mhaydenpalendae: if the fix gets merged into lxc, i'll go open tickets for a backport into LTS15:07
*** markvoelker has joined #openstack-ansible15:08
mhaydenbut most of my ubuntu tickets get a WONTFIX :P15:08
matttarbrandes: how are you deploying ceph itself, out of curiousity ?15:09
odyssey4mepalendae 11.2.0 is released, and the upstream wheel repo is updated15:09
palendaeodyssey4me: Thanks15:09
arbrandesmattt: ceph-ansible15:09
matttarbrandes: ok, same w/ us15:10
arbrandesmattt: it's not the prettiest thing in the universe, but it seems to work.15:10
*** sigmavirus24 is now known as sigmavirus24_awa15:12
*** sigmavirus24_awa is now known as sigmavirus2415:13
*** phalmos has joined #openstack-ansible15:13
odyssey4mepalendae cloudnull erm, oops: https://github.com/stackforge/os-ansible-deployment/blob/11.2.0/playbooks/inventory/group_vars/all.yml#L1715:17
cloudnull11.2.1 here we come :)15:17
mgariepymhayden, maybe you can go poke stgraber on #lxcontainers about your fix and at the same time ask him to fix it in ubuntu LTS15:17
cloudnullodyssey4me:  it happens15:17
odyssey4mehaha, yeah15:17
palendaeSo does stgraber not use #lxc?15:18
cloudnullupdate-revision.sh in the various branches helps with that15:18
palendaeOh, I just read the topic on #lxc, nm15:18
mgariepyhehe15:19
mhaydenmgariepy: not 100% sure that my fix is totally solid -- will hopefully test in < 15-30 minutes15:21
mhaydeni'd rather ping stgraber when i know it does something good15:21
mhaydenit doesn't break anything, and the code compiles -- i know that much ;)15:21
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Update tag version to 11.2.1  https://review.openstack.org/22019515:21
odyssey4mecloudnull ah, I didn't know about that script!15:22
mgariepyhe might have an idea of what is wrong (or know who is)15:22
odyssey4mecloudnull ^ for fast follow release15:22
odyssey4mehughsaunders ^15:22
palendaeWe should probably doc our tagging process ;)15:23
cloudnullodyssey4me: yea update-revision.sh and sources-branch-updater.sh are in all branches to have a common interface to update all the various parts as we've moved bits around.15:23
cloudnullodyssey4me: +215:23
odyssey4mecloudnull yeah, I knew about the second, but not the first15:24
*** javeriak has quit IRC15:25
*** mcarden has quit IRC15:29
*** mcarden has joined #openstack-ansible15:30
evrardjpmattt: I'm just back on the conversation; I tried to read the reasons for not having multiple storage hosts with the same ceph backend... didn't get it15:31
matttevrardjp: yeah, i'm just passing over 2nd hand information15:32
evrardjpisn't the scheduler first checking if one cinder-volume is available among all the cinder-volumes? if so, I don't see anything wrong with having 2 cinder-volumes with ceph15:33
evrardjpit prevents the failure of scheduling if one cinder-volume fails15:33
evrardjpbut I may be mistaken15:34
*** alop has joined #openstack-ansible15:34
odyssey4meevrardjp yes, that's my understanding15:35
odyssey4mehowever, once a volume is attached to an instance the cinder-volume service is not involved unless it needs to re-attach (after an instance reboot or something)15:35
odyssey4meunfortunately that reattachment process has to use the same cinder-volume host15:36
odyssey4meso if it goes down, your instance can't reattach15:36
odyssey4me(unless you force detach and re-attach which will then use a new cinder-volume host)15:36
evrardjpmmm didn't try that15:38
evrardjpthat's why I didn't got the issue15:38
evrardjpget*15:39
odyssey4megive it a go :)15:39
matttevrardjp: aren't volumes tied to a specific cinder-volume tho?15:39
matttevrardjp: so if you have multiple hosts running cinder-volume, they'd all register differently15:39
evrardjpyou are both right15:39
evrardjp><15:39
matttgit-harry knows the most about this, unfortunately he doesn't like to type15:40
odyssey4methe only way to recover from a broken cinder-volume container/host (assuming a back-end like ceph) is to rename the cinder-volume host entry for the volumes in the DB15:40
evrardjpbut that's not the same15:41
evrardjpthat's recover from a crash of your cinder-volume15:41
evrardjpI just meant to improve availability of my system, but for example rebooting one cinder-volume at a time15:42
*** markvoelker_ has joined #openstack-ansible15:42
evrardjpin case that every happens15:42
evrardjpever*15:42
matttevrardjp: my cinder knowledge is limited but you'll still have problems there15:42
evrardjpso your cinder-volume will be down, you won't be able to schedule anything new on this15:42
evrardjpyou'll schedule on the other ones15:43
odyssey4meevrardjp you get no gain from that except for api requests at that particular moment in time15:43
matttevrardjp: yep, and anyone w/ a volume 'attached' to that node is going to get an error if they attempt to do anything15:43
odyssey4memattt evrardjp only api requests15:43
matttyep through API15:43
evrardjpit's a smaller issue than having EVERYONE unable to do anything with a volume15:43
evrardjpit's just a question of sla15:43
matttevrardjp: that is true15:43
odyssey4meyeah, fair enough15:44
odyssey4meit's not high availability - it's just greater api availability15:44
evrardjpand yes, it's only api requests, but that's where we put sla15:44
evrardjp:p15:44
matttunless of course you find that one cinder-volume always seems to grab messages first15:44
matttat which point you're back to square 1 :)15:44
odyssey4meyeah, the scheduler could probably do with a round-robin filter or something15:45
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Update kilo for new dev work - 3 Sep 2015  https://review.openstack.org/22020615:45
evrardjpI'll need to look at cinder more cautiously15:45
*** markvoelker has quit IRC15:45
palendaeevrardjp: That's a very reasonable approach15:45
matttevrardjp: i believe git-harry said you could set all cinder-volume hosts to have the same hostname15:45
matttevrardjp: he's our cinder SME, but he's shy15:45
odyssey4memattt if you do that, then you'll only ever be wanting to run one cinder-volume service at a time15:46
evrardjpI'm looking for a way to avoid a situation where ALL my tenants can't do anything with cinder at a given moment15:46
odyssey4meso cinder-volume would need to fail over between hosts15:46
evrardjpwhat we are talking about is doing real HA on cinder-volume ;)15:47
matttgit-harry: think i found a picture of you as a kid: http://image.shutterstock.com/display_pic_with_logo/732076/732076,1305444102,2/stock-photo-shy-boy-77485684.jpg15:47
*** phalmos has quit IRC15:47
palendaeevrardjp: That's a good idea15:48
hughsaundersI'm not so sure, why not use something that is already HA?15:49
hughsaundersah, I misunderstood.15:50
*** phalmos has joined #openstack-ansible15:56
palendaeodyssey4me: Are the 11.2.1 mirror builds done?16:00
odyssey4mepalendae nope - busy setting that up now16:00
palendaeodyssey4me: Ok, let me know when that happens if you could please16:01
*** phalmos has quit IRC16:01
*** phalmos has joined #openstack-ansible16:01
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Create config_template plugin  https://review.openstack.org/22021216:02
odyssey4mecloudnull meeting?16:02
odyssey4memeeting in #openstack-meeting-4 cloudnull, mattt, andymccr, d34dh0r53, hughsaunders, b3rnard0, palendae, Sam-I-Am, odyssey4me, serverascode, rromans, mancdaz, dolphm, _shaps_, BjoernT, claco, echiu, dstanek, jwagner, ayoung, prometheanfire, evrardjp, arbrandes, mhayden16:04
cloudnull^ beat me to the paste :)16:04
Sam-I-Amwe're all tied up in a meeting16:04
mhaydenbam16:04
Sam-I-Ampretty much the same meeting as the last 3 weeks of osad meetings16:05
mhaydencurrently have a testing loop rolling for the lxc fix...16:05
cloudnullSam-I-Am:  if you can comment here you can join the meeting in #openstack-meeting-416:05
Sam-I-Amcloudnull: i'm there16:06
Sam-I-Amas much as i can be16:06
Sam-I-Amalso looking for a job as a truck driver16:06
cloudnullhahaha16:06
*** jaypipes has joined #openstack-ansible16:07
*** phalmos has quit IRC16:38
evrardjpare we fully ready for keystone v3?16:42
odyssey4meevrardjp we're waiting for upstream16:43
evrardjpI'll stick with v2 for now :p16:43
odyssey4mewe've had a patch in flight for full conversion to it ages ago16:43
odyssey4mewe should be able to switch to it as the default for liberty16:44
evrardjpI'm still reluctant to define the variable that pushes the usage to v316:44
evrardjpok16:44
*** k_stev1 has quit IRC16:47
*** k_stev has joined #openstack-ansible16:50
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: [WIP] Replaced the copy_update module  https://review.openstack.org/21679016:52
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Create config_template plugin  https://review.openstack.org/22021216:52
prometheanfireSam-I-Am: docs link to generated osad docs?16:53
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Create config_template plugin  https://review.openstack.org/22021216:53
prometheanfireSam-I-Am: want to give a link to the ops-meetup page16:53
Sam-I-Amprometheanfire: they're on readthedocs16:53
prometheanfireah, right16:53
Sam-I-Amand supposedly broken right now16:53
Sam-I-Amsomehow16:54
prometheanfirehttps://osad.readthedocs.org/en/latest/16:54
prometheanfireya, 404 or something16:54
Sam-I-Amsome patch broke it16:54
Sam-I-Amand i dont have time to figure out why16:54
Sam-I-Ambecause upgrades16:54
prometheanfireya16:54
prometheanfireok16:55
prometheanfirejust github/lp links then16:55
odyssey4meSam-I-Am prometheanfire http://os-ansible-deployment.readthedocs.org/17:01
odyssey4mejust the lp link should be fine - as it links to everything17:02
odyssey4meie https://launchpad.net/openstack-ansible17:02
odyssey4memaybe the wiki is better though: https://wiki.openstack.org/wiki/OpenStackAnsible17:03
odyssey4mepalendae the python wheels are built for 11.2.1 and 11.2.2 in the upstream repo - we're just waiting for the merges of https://review.openstack.org/220195 and then https://review.openstack.org/22020617:04
odyssey4mestevelle it's probably a good idea to go ahead and backport https://review.openstack.org/217341 and https://review.openstack.org/20881717:07
*** jwagner is now known as jwagner_away17:07
stevelleodyssey4me: thanks for the rebases on them while I was lost, btw17:08
odyssey4mestevelle yeah, it was a right pain to try and get things back on track for master - happy that we're getting patches through the door again though17:08
stevellestill finding my way out of the woods, as I seem to have missed a lot. Hope to get clean backports in this afternoon17:09
*** jwagner_away is now known as jwagner17:10
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Adds the crud_template to ceilometer  https://review.openstack.org/21703017:13
*** markvoelker_ has quit IRC17:16
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Adds the config_template to ceilometer  https://review.openstack.org/21703017:19
palendaeodyssey4me: Ok, but we shouldn't need the 2nd one for 11.2.1 to be tagged, right?17:28
palendaeObviously necessary for work to carry on for 11.2.217:30
*** kerwin_bai has joined #openstack-ansible17:33
*** phalmos has joined #openstack-ansible17:35
*** kerwin_bai has quit IRC17:35
*** javeriak has joined #openstack-ansible17:58
openstackgerritSteve Lewis proposed stackforge/os-ansible-deployment: Add sorting_method to swift proxy config as needed  https://review.openstack.org/22026018:03
openstackgerritSteve Lewis proposed stackforge/os-ansible-deployment: Ensure rsync restarts fully during swift setup  https://review.openstack.org/22026118:04
*** phalmos has quit IRC18:05
openstackgerritMerged stackforge/os-ansible-deployment: Update how neutron migrations are handled  https://review.openstack.org/21558418:09
openstackgerritMerged stackforge/os-ansible-deployment: Implement tox.ini config for bashate and pep8 tests  https://review.openstack.org/22008218:10
*** woodard has quit IRC18:19
*** k_stev has quit IRC18:30
*** harlowja has quit IRC18:31
*** tlian has quit IRC18:31
*** harlowja has joined #openstack-ansible18:32
*** phalmos has joined #openstack-ansible18:34
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Shift irqbalance package from lxc_hosts to openstack_hosts  https://review.openstack.org/21835418:35
odyssey4mepalendae yep, that's for 11.2.2 work18:36
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Additional retries for ssh wait check  https://review.openstack.org/21963818:36
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Adds the ability to provide user certificates to HAProxy  https://review.openstack.org/21552518:37
odyssey4mestevelle you can rebase https://review.openstack.org/220261 and https://review.openstack.org/220260 on top of https://review.openstack.org/220206 to respect the order :)18:38
*** tlian has joined #openstack-ansible18:38
odyssey4mecloudnull it looks like many of the projects are tagging their liberty-3 tags today, so I'll prep a liberty-3 related sha update tomorrow when they're all settled18:41
cloudnullyea i'd imagine we'll be in queue for another round of all the things are broken real soon.18:43
odyssey4mecloudnull fyi - I've created the 11.3.0 and 10.2.0 milestones to add any new bugs to. They're not dated, so they're a good catch-all for any new bugs which we can pull back to the appropriate milestone when someone actually picks up the patch18:43
odyssey4methat way we're not making any false promises by adding it to a milestone, then shifting it every time we release the milestone18:44
cloudnullokiedokie18:44
odyssey4meall master bugs need to now go to the 12.0.0 milestone as there won't be any more until then - if we haven't tackled them by then then we'll have to shift them to 12.1.018:45
odyssey4meit is nice having a milestone to tag for trunk though :)18:45
cloudnullit is, allows us to keep better track of things for sure.18:48
odyssey4mecloudnull how do you like https://review.openstack.org/218611 ?18:50
odyssey4mecloudnull time to perhaps update https://review.openstack.org/168976 ?18:52
cloudnullodyssey4me:  https://review.openstack.org/218611 lgtm +218:53
cloudnullthe spec will be updated soon.18:53
odyssey4mecloudnull awesome - it does seem to help improve hpcloud's responses - still not hpcloud-b4, but I have a separate bug for that18:53
odyssey4mehttps://bugs.launchpad.net/openstack-ansible/+bug/144263018:53
openstackLaunchpad bug 1442630 in openstack-ansible "gate: apt install for containers fail on hpcloud-b4" [High,Confirmed] - Assigned to Jesse Pretorius (jesse-pretorius)18:53
odyssey4mepalendae if you have a gap to look at https://review.openstack.org/218611 it'd be appreciated - it seems to improve gating success for hpcloud18:56
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment-specs: Tunable OpenStack Configuration Specification  https://review.openstack.org/16897618:59
cloudnullodyssey4me: ^ spec updated18:59
*** cloudtrainme has quit IRC19:01
*** cloudtrainme has joined #openstack-ansible19:02
*** galstrom_zzz is now known as galstrom19:02
odyssey4mecloudnull +2 from me :)19:02
odyssey4meI honestly think that's crucial for the Liberty cycle so that we can focus on features and real bugs, instead of silly conf entries.19:03
*** jwagner is now known as jwagner_away19:07
*** jwagner_away is now known as jwagner19:09
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Add variable for cirros url  https://review.openstack.org/21731019:10
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Update kilo for new dev work - 3 Sep 2015  https://review.openstack.org/22020619:15
cloudnullodyssey4me:  i 100% agree19:16
*** woodard has joined #openstack-ansible19:19
*** galstrom is now known as galstrom_zzz19:22
*** cloudtrainme has quit IRC19:28
*** thrawn01 has joined #openstack-ansible19:28
*** galstrom_zzz is now known as galstrom19:29
thrawn01Hey guys, what backends does RS support for private cloud cinder backends?19:30
palendaethrawn01: To my knowledge, netapp and LVM. NetApp is the one you'll want if you want redundancy19:30
cloudnullthrawn01: we have lvm, nfs, and netapp19:31
odyssey4methrawn01 and rbd (ceph)19:32
thrawn01so, if I wanted to play with the cinder netapp driver, is there a place perhaps in the lab that might have a netapp to play with?19:32
*** cloudtrainme has joined #openstack-ansible19:33
cloudnullif you have a netapp or access to the netapp virtual appliance you could19:33
palendaethrawn01: Are you in RAX?19:39
thrawn01palendae: yeah, I'm hanging with dolphm at the bugbash19:40
*** cloudtrainme has quit IRC19:43
alextricityAnybody know if there is a way to setup keystone SSL in OSAD?19:44
odyssey4mealextricity yes there is :p19:45
*** cloudtrainme has joined #openstack-ansible19:46
odyssey4mealextricity two methods - one is on keystone itself: https://github.com/stackforge/os-ansible-deployment/commit/caa9733788468886c6ac50cd2fde00a4f8a5832119:46
odyssey4mealextricity the other is using haproxy if you don't have another LB: https://github.com/stackforge/os-ansible-deployment/commit/ff4c7ff746f9eadbdc146016b53ff8f8aabfa17e19:47
palendaealextricity: Yeah, a good question is: In Kilo or Juno?19:47
palendaeSince Juno and Icehouse will live forever19:48
alextricityThis is Kilo19:48
palendaek19:48
palendaeNice answer19:48
alextricityThanks odyssey4me. I'm going to try out configuring it through keystone itself.19:49
odyssey4mealextricity sure :)19:50
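For reference, and with the caveat that the exact variable names should be taken from the first commit linked above rather than from this sketch, a hedged outline of the user_variables approach for SSL on keystone itself:

    # /etc/openstack_deploy/user_variables.yml  (variable names are illustrative -- verify against the linked commit)
    keystone_ssl: true
    keystone_user_ssl_cert: /etc/openstack_deploy/ssl/keystone.pem      # example paths
    keystone_user_ssl_key: /etc/openstack_deploy/ssl/keystone.key
    keystone_user_ssl_ca_cert: /etc/openstack_deploy/ssl/keystone-ca.pem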
openstackgerritMerged stackforge/os-ansible-deployment-specs: Tunable OpenStack Configuration Specification  https://review.openstack.org/16897619:50
arbrandesmattt: Am I going to get in trouble if I co-locate Ceph OSD servers with compute nodes?19:51
* arbrandes grins mischievously19:52
palendaearbrandes: The safe answer there is: yes, don't do that.19:52
mgariepyarbrandes, in dev or in production ?19:52
arbrandespalendae: that's what I expected. ;)19:52
arbrandesmgariepy: dev19:52
arbrandesOh, I'm not crazy enough to do that in production.19:52
mgariepylol ok19:53
arbrandesBut I'm too lazy to fire up 3 more VMs. :P19:53
*** phalmos_ has joined #openstack-ansible19:53
odyssey4mearbrandes dev is fine to some degree19:54
odyssey4meit depends on the load you put on them19:54
odyssey4methe osd's and instances contend for the cpu heavily, so the host does a lot of context switching19:55
odyssey4methe performance for both dives horribly19:55
*** phalmos_ has quit IRC19:55
arbrandesodyssey4me: agreed, entirely.  I'm just wondering if any of the playbooks will mangle my cluster.19:55
*** phalmos has quit IRC19:56
arbrandesMy development cluster, to be more specific. :P19:56
*** dabernie_ has joined #openstack-ansible19:57
*** phalmos has joined #openstack-ansible19:57
*** dabernie_ has left #openstack-ansible19:57
odyssey4mearbrandes I don't think it's tested. I'll hold your beer and watch. :)19:58
cloudnullhahaha19:58
arbrandeslol19:58
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Replaced the copy_update module  https://review.openstack.org/21679019:58
arbrandesOk, I think I'll just fire up 3 more VMs. :D19:58
palendaearbrandes: Cloud on top of VMs, I'm sure it'll be fine :)19:59
cloudnullafk a min19:59
arbrandespalendae: just for testing. ;)19:59
odyssey4mehahaha, we do that all the time - cloud in cloud :)20:00
openstackgerritMerged stackforge/os-ansible-deployment: Fixing haproxy-playbook fails when installing on multiple hosts  https://review.openstack.org/22008920:01
odyssey4mehahaha ^ unexpected bonus patch for 11.2.120:03
*** phalmos has quit IRC20:11
*** cloudtrainme has quit IRC20:12
arbrandesAwesome20:12
arbrandesI intend to use that as well. :D20:12
odyssey4mearbrandes where're you from, if you don't mind me asking?20:17
odyssey4mearbrandes you may also be interested in https://review.openstack.org/215525 then - we'll get that into 11.2.2 I hope :)20:20
arbrandesodyssey4me: I'm from Brazil.  I'm guessing you guys are all rackers?20:20
odyssey4mearbrandes several of us are, although we have a few deployers from Belgium too :)20:20
arbrandesodyssey4me: ah, that changeset will indeed be useful.  Thanks for pointing it out!20:21
odyssey4meand a few others are joining the party slowly20:21
*** phalmos has joined #openstack-ansible20:22
*** cloudtrainme has joined #openstack-ansible20:24
*** sdake has joined #openstack-ansible20:25
*** sdake_ has quit IRC20:29
*** woodard has quit IRC20:34
*** galstrom is now known as galstrom_zzz20:35
*** woodard has joined #openstack-ansible20:36
*** jmckind has quit IRC20:38
*** jmckind has joined #openstack-ansible20:42
*** Mudpuppy has quit IRC21:08
*** cloudtrainme has quit IRC21:09
*** KLevenstein has quit IRC21:09
*** cloudtrainme has joined #openstack-ansible21:18
*** woodard has quit IRC21:20
*** javeriak has quit IRC21:21
*** KLevenstein has joined #openstack-ansible21:32
*** jmckind has quit IRC21:34
*** KLevenstein has quit IRC21:37
*** KLevenstein has joined #openstack-ansible21:38
*** phalmos has quit IRC21:46
*** jwagner is now known as jwagner_away21:52
*** Mudpuppy has joined #openstack-ansible21:55
*** tlian2 has joined #openstack-ansible21:56
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Replaced the copy_update module  https://review.openstack.org/21679021:58
*** tlian has quit IRC21:58
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Disable scatter-gather offload on host bridges  https://review.openstack.org/21929222:02
*** woodard has joined #openstack-ansible22:05
*** woodard has quit IRC22:07
*** woodard has joined #openstack-ansible22:07
*** KLevenstein has quit IRC22:11
*** Mudpuppy has quit IRC22:28
*** Mudpuppy has joined #openstack-ansible22:28
odyssey4me\o/ https://review.openstack.org/186684 :)22:29
openstackgerritJesse Pretorius proposed stackforge/os-ansible-deployment: Update tag version to 11.2.1  https://review.openstack.org/22019522:32
*** spotz is now known as spotz_zzz22:38
BjoernTquick question: which part in our new galaxy roles handles the git part, for example neutron_git_repo as part of defaults/repo_packages/openstack_services.yml?22:56
BjoernTok found it oles/py_from_git, lol22:57
*** mrstanwell has joined #openstack-ansible22:57
openstackgerritKevin Carter proposed stackforge/os-ansible-deployment: Added post and pre hook script for veth cleanup  https://review.openstack.org/22034223:00
*** woodard has quit IRC23:02
*** woodard has joined #openstack-ansible23:04
*** sdake has quit IRC23:06
mrstanwellhi, all.  I've got a multi-node osad kilo install.  I was trying to set up an ipv6 tenant network, and noticed that radvd isn't being installed on neutron_agents_container.  is that a bug, or are v6 tenants not supported by osad?  thanks!23:11
*** markvoelker has joined #openstack-ansible23:13
*** cloudtrainme has quit IRC23:15
*** pradk has quit IRC23:16
*** markvoelker has quit IRC23:17
*** markvoelker has joined #openstack-ansible23:25
palendaemrstanwell: I don't believe so. https://bugs.launchpad.net/openstack-ansible/+bug/142680923:43
openstackLaunchpad bug 1426809 in openstack-ansible trunk "Neutron agent container fails when IPv6 is disabled on the host but enabled on containers" [Medium,Confirmed]23:43
*** shoutm has joined #openstack-ansible23:50
mrstanwellpalendae: well, when I install radvd in neutron_agents_container, SLAAC appears to be working.  tenant configured a v6 addr, and can ping the virtual router.  I can't ping anything else, but I might have some incorrect setup elsewhere.  so it seems like radvd should be getting installed, unless there are bigger obstacles to ipv6 tenants.  That bug looks like a different issue.23:56
cloudnullmrstanwell: Thats awesome !23:59
cloudnullwould you mind registering a launchpad bug for that so we can get that in for liberty / kilo23:59
