Thursday, 2018-11-29

*** dcdamien has quit IRC00:20
*** hamerins has quit IRC00:24
*** hamerins has joined #openstack-ansible00:27
*** hamerins has quit IRC00:30
*** weezS has quit IRC00:33
*** vnogin has joined #openstack-ansible00:33
*** xjra has quit IRC00:37
*** vnogin has quit IRC00:37
*** ansmith has joined #openstack-ansible00:41
*** DanyC has joined #openstack-ansible00:48
*** tosky has quit IRC00:50
*** DanyC has quit IRC00:52
*** markvoelker has joined #openstack-ansible01:23
*** markvoelker has quit IRC01:28
*** cshen has joined #openstack-ansible01:46
*** trident has quit IRC01:50
*** cshen has quit IRC01:51
*** gyee has quit IRC01:54
*** trident has joined #openstack-ansible01:55
*** hamerins has joined #openstack-ansible02:00
*** markvoelker has joined #openstack-ansible02:30
*** hamerins has quit IRC02:33
*** radeks has joined #openstack-ansible02:55
openstackgerritMerged openstack/openstack-ansible stable/pike: Update all SHAs for 16.0.23  https://review.openstack.org/61884003:08
*** maddtux has joined #openstack-ansible03:15
*** hamerins has joined #openstack-ansible03:20
*** hamerins has quit IRC03:34
*** cshen has joined #openstack-ansible03:47
*** cshen has quit IRC03:51
*** chhagarw has joined #openstack-ansible04:15
*** vnogin has joined #openstack-ansible04:34
*** dave-mccowan has quit IRC04:34
*** dave-mccowan has joined #openstack-ansible04:35
*** vnogin has quit IRC04:38
*** dave-mccowan has quit IRC04:49
*** udesale has joined #openstack-ansible05:06
*** cshen has joined #openstack-ansible05:49
*** jackivanov has joined #openstack-ansible05:49
*** cshen has quit IRC05:54
*** klamath has quit IRC06:15
*** klamath has joined #openstack-ansible06:15
*** radeks has quit IRC06:20
*** radeks has joined #openstack-ansible06:26
*** DanyC has joined #openstack-ansible06:59
*** fghaas has joined #openstack-ansible07:01
*** ahosam has joined #openstack-ansible07:02
*** DanyC has quit IRC07:04
*** fghaas has quit IRC07:05
*** chkumar|away has quit IRC07:13
*** chandan_kumar has joined #openstack-ansible07:15
*** cshen has joined #openstack-ansible07:24
*** ahosam has quit IRC07:26
*** ahosam has joined #openstack-ansible07:26
*** hamzaachi has joined #openstack-ansible07:50
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Enable support to remove unnecessary tempest options from conf  https://review.openstack.org/62080008:01
chandan_kumarodyssey4me: Hello08:04
chandan_kumarodyssey4me: where in openstack-ansible do we generate the clouds.yaml file?08:04
*** gkadam has joined #openstack-ansible08:07
*** fghaas has joined #openstack-ansible08:16
*** fghaas has quit IRC08:25
*** hamzaachi_ has joined #openstack-ansible08:26
*** hamzaachi has quit IRC08:27
*** olivierbourdon38 has joined #openstack-ansible08:27
*** hamzaachi__ has joined #openstack-ansible08:28
*** hamzaachi_ has quit IRC08:31
*** hamzaachi_ has joined #openstack-ansible08:36
*** hamzaachi__ has quit IRC08:39
*** hamzaachi__ has joined #openstack-ansible08:41
*** fghaas has joined #openstack-ansible08:42
*** hamzaachi_ has quit IRC08:45
*** pcaruana has joined #openstack-ansible08:48
*** markvoelker has quit IRC08:50
*** tosky has joined #openstack-ansible09:03
*** shardy has joined #openstack-ansible09:04
noonedeadpunkjamesdenton oh, great! Thanks for the notice:)09:09
*** dcdamien has joined #openstack-ansible09:10
*** ahosam has quit IRC09:10
*** ahosam has joined #openstack-ansible09:10
*** fghaas has quit IRC09:10
*** shardy has quit IRC09:13
noonedeadpunkIt seems that I may then abandon https://review.openstack.org/#/c/620076/209:13
noonedeadpunkOr at least place it for rocky/queens, if https://review.openstack.org/#/c/333829/7 won't be backported09:14
*** DanyC has joined #openstack-ansible09:16
*** hamzaachi_ has joined #openstack-ansible09:20
*** hamzaachi__ has quit IRC09:22
*** kaiokmo has quit IRC09:38
*** DimGR has joined #openstack-ansible09:40
*** shardy has joined #openstack-ansible09:51
*** markvoelker has joined #openstack-ansible09:51
*** flaviosr has quit IRC09:54
*** flaviosr has joined #openstack-ansible09:57
*** hamzaachi__ has joined #openstack-ansible10:06
*** ahosam has quit IRC10:06
*** hamzaachi_ has quit IRC10:08
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible master: Set the utility container service setup interpreter automatically  https://review.openstack.org/62065110:11
odyssey4mecloudnull given it's only really for stable, I'd say let's just avoid overwriting any system implementations10:15
*** cshen has quit IRC10:16
odyssey4mektims by default both ceph-ansible and we lay the repo down IIRC, I don't know why you'd be stuck on lxc_hosts though.... what exactly are you stuck on?10:17
odyssey4mechandan_kumar we use the openstack_openrc role at the moment, although it's not particularly universal - it might be nice to evolve that to something more generally useful, or to switch to another role that already is10:18
odyssey4meevrardjp are you around to look at https://review.openstack.org/#/c/620607/ ? it's a quick voting->non-voting patch as agreed yesterday10:20
*** cshen has joined #openstack-ansible10:22
*** fresta has joined #openstack-ansible10:22
*** markvoelker has quit IRC10:24
*** udesale has quit IRC10:27
*** ahosam has joined #openstack-ansible10:31
*** sum12 has quit IRC10:32
*** flaviosr has quit IRC10:33
*** mbuil has quit IRC10:33
admin0has anyone faced the issue of a controller reboot with the ceph mon, where when you bring it back up, it's out of quorum ..  and running the ansible playbook gives TASK [ceph-mon : collect admin and bootstrap keys] failed": true, "msg": "non-zero return code", "rc": 1, "start": "2018-11-29 10:33:46.040584", "stderr": "INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'\nINFO:ceph-create-keys:ceph-mon is not in quorum:10:39
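A command sketch for triaging that quorum question on the rebooted mon host (illustrative only; these need a running ceph cluster and the mon's admin socket):

```shell
# overall health plus the current quorum membership
ceph -s
ceph mon stat

# ask the local mon daemon for its own view; a state of "probing"
# usually means it cannot reach its peers (check time sync / firewall
# between the controllers before re-running the playbook)
ceph daemon mon.$(hostname -s) mon_status
```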
*** shardy has quit IRC10:41
*** dcdamien has quit IRC10:42
*** shardy has joined #openstack-ansible10:43
*** dcdamien has joined #openstack-ansible10:44
*** hamzaachi__ has quit IRC10:46
evrardjpodyssey4me:10:46
evrardjphey I am here10:47
evrardjpwill look10:47
evrardjpoh already voted this morning10:47
evrardjpping if there is something else10:47
odyssey4methanks evrardjp10:49
evrardjpnp10:49
odyssey4meevrardjp this one will also help to grease the wheels: https://review.openstack.org/#/c/620629/210:49
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/rocky: Update all SHAs for 18.1.1  https://review.openstack.org/61882210:50
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/rocky: Add extra volume types to AIO  https://review.openstack.org/61705310:51
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/rocky: Update playbook to newer syntax.  https://review.openstack.org/62056910:51
evrardjpokay, I am not super familiar with RDO but it makes sense and is a backport10:51
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/rocky: Remove unnecessary octavia scenario AIO bootstrap  https://review.openstack.org/61979210:52
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/rocky: Ensure AIO container_tech/install_method vars are namespaced  https://review.openstack.org/62027810:52
evrardjpthanks for the rebase10:52
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/rocky: Implement documentation changes for translations  https://review.openstack.org/62027910:52
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/rocky: [docs] Clean up the AIO user story  https://review.openstack.org/62028010:52
odyssey4meevrardjp ok, that should unblock rocky and there's a whole chain of patches after those now rebased to get them passing10:53
odyssey4meevrardjp what do you think of https://review.openstack.org/620651 ?10:54
odyssey4meI think it's nice from a functionality standpoint, but it's messy. I'm thinking that I might rather want to just document the two settings and add a commented example into user_variables, rather than carry that jinja script variable10:55
*** aedc has joined #openstack-ansible10:57
*** electrofelix has joined #openstack-ansible11:04
*** sum12 has joined #openstack-ansible11:19
*** markvoelker has joined #openstack-ansible11:21
odyssey4mehmm, that's nice - jrosser have you seen http://logs.openstack.org/91/551791/42/check/openstack-ansible-deploy-aio_ceph-ubuntu-bionic/822ab39/job-output.txt.gz#_2018-11-29_02_47_35_014673 - that's now what's failing master's builds11:26
odyssey4meNov 29 02:47:55 localhost ceph-osd-prestart.sh[30605]: OSD data directory /var/lib/ceph/osd/ceph-loop6 does not exist; bailing out.11:27
chandan_kumarodyssey4me: evrardjp https://review.openstack.org/#/c/620800/ please have a look11:30
*** ahosam has quit IRC11:45
*** markvoelker has quit IRC11:55
*** cshen has quit IRC12:05
*** cshen has joined #openstack-ansible12:14
*** dave-mccowan has joined #openstack-ansible12:16
*** flaviosr has joined #openstack-ansible12:22
*** gkadam_ has joined #openstack-ansible12:23
*** gisak has joined #openstack-ansible12:23
gisakhello guys12:24
chandan_kumarodyssey4me: https://review.openstack.org/#/c/620800/ and https://review.openstack.org/#/c/619986/4 both do different tasks12:25
gisakfacing some problems with haproxy service during playbooks, could somebody please advice me with this?12:25
*** gkadam has quit IRC12:25
chandan_kumarodyssey4me: I am not getting how it is similar, need help here12:25
odyssey4mechandan_kumar ok, I see - quite honestly, I don't see why there needs to be two options - why not just have one generalised option, something like tempestconf_cli_parameters - then use that for anything extra you want to add into the CLI12:26
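That suggestion, sketched as a single pass-through variable (`tempestconf_cli_parameters` is only a name proposed in this discussion, not an existing role variable; the flags shown are real python-tempestconf options used purely as examples):

```yaml
# anything listed here would be appended verbatim to the
# python-tempestconf command line by the os_tempest role
tempestconf_cli_parameters: >-
  --os-cloud devstack
  --remove network-feature-enabled.api_extensions=dvr
```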
odyssey4megisak you'd have to provide specifics for anyone to help - please share logs and failure messages via a paste service or something similar12:27
*** gkadam_ has quit IRC12:28
*** gkadam has joined #openstack-ansible12:28
gisaksure, sec please12:29
gisakhttp://paste.openstack.org/show/736399/12:30
*** ansmith has quit IRC12:31
gisakafter rerunning the setup-infrastructure.yml playbook, now I get the following error:12:44
gisakhttp://paste.openstack.org/show/736401/12:45
jamesdentonTry configuring 172.29.236.11 as a secondary IP on br-mgmt12:45
jamesdentonthat is, if you don't see it on the server currently12:46
gisak 172.29.236.11 is br-mgmt IP for controller host12:46
openstackgerritMerged openstack/openstack-ansible stable/rocky: Set SUSE ceph-distro job to non-voting  https://review.openstack.org/62060712:49
jamesdentonyou should consider using a different address for the VIP. What is the output of "netstat -plnt | grep 172.29.236.11"?12:51
*** markvoelker has joined #openstack-ansible12:52
pabelangermorning12:52
pabelangerodyssey4me: did you see my comments yesterday about also making changes to keystone_service_setup_host_python_interpreter for all other services too?  Is that something you are going to do, or should I start to push up patches12:53
*** vakuznet has quit IRC12:59
*** shardy has quit IRC13:03
*** maddtux has quit IRC13:03
chandan_kumarodyssey4me: evrardjp you mean something like this http://paste.openstack.org/show/736405/ ?13:07
*** shardy has joined #openstack-ansible13:10
evrardjpodyssey4me mnaser I think you should join #ansible-community13:13
*** ansmith has joined #openstack-ansible13:23
*** markvoelker has quit IRC13:24
*** gkadam_ has joined #openstack-ansible13:25
*** gkadam has quit IRC13:28
*** kaiokmo has joined #openstack-ansible13:31
*** vakuznet has joined #openstack-ansible13:31
vakuznetsecond review please https://review.openstack.org/#/c/62039913:34
gisakhttp://paste.openstack.org/show/736407/13:36
gisakI am never gonna install openstack correctly ..13:37
jamesdentonnot true!13:38
jamesdenton172.29.236.11 is on your controller node, but you're running haproxy on the network node?13:39
gisakyes13:39
jamesdentonare you intending on running haproxy on both the controller and the network node?13:39
gisakno, just network13:40
gisaknetwork node has 172.29.236.10 on br-mgmt13:40
jamesdentonok - that's likely your issue right now. All of the API services live on the controller, and the VIP is configured on the controller. haproxy can't run on the network node and bind to the IP you've chosen for the VIP.13:40
jamesdentonfor now, you might consider just configuring a secondary IP for the VIP on the network node (i.e. 172.29.236.8) and updating openstack_user_config.yml to use that IP instead13:41
jamesdentonsame for the external VIP address13:41
gisaki need to have haproxy on network, cause probably controller node might be down sometimes during production13:42
jamesdentonwell this is a 3-node cluster right now, right? 1 controller, 1 network, 1 compute?13:42
gisakso instances could run without problems with compute and network nodes up13:43
*** vollman has joined #openstack-ansible13:43
gisakyes13:43
openstackgerritMerged openstack/openstack-ansible stable/rocky: Add MariaDB infrastructure mirrors  https://review.openstack.org/61971413:43
openstackgerritMerged openstack/openstack-ansible stable/rocky: Ensure that a consistent mirror is used for RDO  https://review.openstack.org/62062913:43
jamesdentonNot really a production setup at the moment, but i get what you're saying. In prod, you'd have haproxy running on multiple nodes w/ keepalived keeping the VIP straight13:44
jamesdentonin this scenario, given it's your first time, i would try to keep things as simple as possible13:44
*** vollman has quit IRC13:44
gisakwell I am looking for a setup that could be expanded in the near future13:45
gisak+ 2 compute, + storages etc..13:45
gisakbut for the start I need these 3 nodes13:46
*** vollman has joined #openstack-ansible13:46
jamesdentonWell, you'll need an IP that can float around (configured by keepalived on the active node), so it can't be the primary br-mgmt ip for any host. you'll need to have a secondary address (from the same subnet)13:47
gisaknot sure that I fully understand13:49
jamesdentonthe point of haproxy is to load balance services. you can have multiple instances of haproxy scattered across infra hosts that can provide redundancy. keepalived is used to determine which of those load balancers is the ACTIVE load balancer. the others remain in a STANDBY state. If the active haproxy instance failed (i.e. controller node hosting haproxy goes down), then one of the standby nodes becomes active.13:52
jamesdentonFor this to work, the VIP address must be able to float between nodes based on which keepalived becomes master13:52
jamesdentonIf you only have 1 instance of haproxy, that's fine for a lab scenario. The VIP, however, must be an address local to that host. Since you are using the network node, the VIP would need to be 172.29.236.10 and not 172.29.236.11, which is configured on the controller. haproxy on the network node cannot bind to an IP address that is not configured.. on that node.13:53
jamesdentonmake sense?13:53
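That explanation maps onto openstack_user_config.yml roughly like this (a sketch; 172.29.236.10 is the br-mgmt address from this conversation and 203.0.113.10 is a hypothetical external VIP):

```yaml
# global_overrides in /etc/openstack_deploy/openstack_user_config.yml;
# each VIP must be an address that is (or can be made) local to the
# host(s) running haproxy, managed by keepalived when there is more
# than one haproxy instance.
global_overrides:
  internal_lb_vip_address: 172.29.236.10   # br-mgmt IP of the haproxy node
  external_lb_vip_address: 203.0.113.10    # hypothetical public-facing VIP
```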
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/rocky: Move ARA install to end of bootstrap  https://review.openstack.org/61778513:54
openstackgerritJesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/rocky: Add openstackclient bash completion  https://review.openstack.org/61935413:55
gisakyes, now I understand. one more question: if I bind internal_lb_vip_address to 172.29.236.10, will it work even if all containers except neutron are on the controller node?13:56
jamesdentonit should be fine, yes. haproxy.cfg will use 172.29.236.10 for the front end address and all of the respective backend (container) addresses will be configured for the respective pools13:57
jamesdentonso as long as you can ping those containers from the network node you should be OK13:58
gisakwhat happens if controller node fails while instances are running on compute, will they still be up ?13:59
gisakof course the network is up too13:59
jamesdentoni guess the answer is, it depends.14:00
jamesdentonIf you're using VLAN networks without virtual routers, then you shouldn't see an impact to VM traffic. If you are using routers, those would be hosted on the network node and you shouldn't be impacted there, either, in most cases. You wouldn't be able to spin up new instances (obviously) and API would be impacted14:01
jamesdentonyou design redundancy with multiple controller/infra nodes and a load balancer to limit or eliminate impact14:02
gisakyes I understand that no dashboard, API will not work, i just want my instances to be up while controller is down14:02
jamesdentonwith the proper architecture that should not be a concern14:03
odyssey4mepabelanger morning - sorry, only just saw your msg... I could do with help pushing up for the rest of the roles14:05
gisakthanks for advice <jamesdenton> I appreciate that :)14:05
odyssey4mepabelanger did you see I updated the integrated repo patch - although quite honestly I'm not sure doing it is a good idea14:05
jamesdentonsure thing, gisak. I recommend getting a few installs under your belt using tried-and-true methods before deviating too much, that way it's easier to see where things went wrong.14:06
gisakall this infra is pretty complex; not sure I could have reached even this step without this channel :)14:08
odyssey4melogan- jrosser there's a new failure in master for the ceph/bionic build ... any chance you guys could dig into it?14:09
odyssey4melogan- jrosser mnaser http://eavesdrop.openstack.org/irclogs/%23openstack-ansible/latest.log.html#t2018-11-29T11:26:09 FYI14:11
odyssey4meno master builds will pass until that's sorted out, and I'm tied up elsewhere right now14:11
mnaserodyssey4me: boo, ok14:20
mnaseri will dig in14:20
odyssey4methanks mnaser14:23
mnaserfound the error message14:30
mnaser"OSD data directory /var/lib/ceph/osd/ceph-loop6 does not exist; bailing out."14:30
odyssey4memnaser yep, that's in my eavesdrop link above :p14:31
mnaserugh14:31
mnasermaybe i should read more14:31
mnaserllol14:31
mnaserok the weird thing is14:31
mnaseryou're not supposed to do14:31
mnasersystemctl start ceph-osd@<dev>14:31
mnaserits supposed to be systemctl start ceph-osd@<osd-id>14:31
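A command-line sketch of the distinction being pointed out here (illustrative only; these only run on a host with ceph's systemd units installed):

```shell
# The ceph-osd systemd template unit is instanced on the OSD id,
# not on the underlying device name:
systemctl start ceph-osd@6        # correct: "6" is the OSD id
systemctl start ceph-osd@loop6    # wrong: "loop6" is a device, the unit fails
```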
*** KeithMnemonic has joined #openstack-ansible14:32
gisakjamesdenton: this error is the last one I reached yesterday and so I do today .. : http://paste.openstack.org/show/736413/14:33
jamesdentongisak i recommend connecting to controller01_glance_container-5a510c22 and run 'systemctl status glance-api' and see why it failed14:34
jamesdentonor systemctl status var-lib-glance-images.mount14:35
gisakhttp://paste.openstack.org/show/736414/14:36
gisakwell it is running14:36
pabelangerodyssey4me: I did see the update, thanks. I don't mind if we skip landing that, I can see it being too much magic. As long as I can set it up myself I am fine14:37
gisakbut this is the log of syslog on controller: http://paste.openstack.org/show/736415/14:37
jamesdentonsorry, i meant systemctl status var-lib-glance-images.mount14:38
*** mma has joined #openstack-ansible14:40
gisakhttp://paste.openstack.org/show/736417/14:40
*** mma_ has joined #openstack-ansible14:41
jamesdentonDo you have this block? https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.prod.example#L148-L15514:43
*** mma has quit IRC14:45
gisakhttp://paste.openstack.org/show/736418/14:45
jamesdentonok, yeah. that's a configuration that lets you mount an NFS volume for storing glance images. Just remove the container_vars line [112] through line 11914:46
odyssey4memnaser I dunno if this shed any light on the matter, but https://github.com/ceph/ceph-ansible/commit/2cea33f7fc4bf59eaa249ca26ba326105e392402 merged 12 hours ago into stable-3.2, which seems to co-incide with the new failures14:46
gisakthat is the openstack_user_config.yml I run14:46
jamesdentonunderstood. it's trying to reach out to an nfs target that i assume doesn't exist at 172.29.244.1114:47
gisakwell, but in this case I will not have any images container ...14:49
gisakmaybe I should just put instead of 172.29.244.11, the glance container's IP ?14:50
gisakor network's ?14:50
Miouge-jamesdenton: doesn’t that block assume that an NFS server is available at 172.29.244.15 ?14:51
jamesdentonno, it will simply store the images locally inside the glance container14:51
jamesdentonMiouge- Yes, in that example it would be 172.29.244.15.14:52
jamesdentonBut in gisak's existing config, it's 172.29.244.11, which is likely the ip configured on br-storage if i had to guess14:52
jamesdentonThe IP would need to be an NFS server in the storage network14:53
Miouge-gisak: is there an NFS server there? I think the prod example relies on an external NAS for NFS14:53
*** SimAloo has joined #openstack-ansible14:54
jamesdentonif you eliminate that 'glance_nfs_client' override, then glance images are stored locally, which is fine for a single glance deploy14:54
gisakyes it is on br-storage14:54
gisakso I should either have an NFS server on controller node, or store images locally on glance container14:55
jamesdentonyeah, so forget that particular configuration for now by removing it, and let the images live locally. Once you have NFS configured (on something other than the controller node) and have multiple glance services needing a shared backend, you can put it back14:56
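For reference, the override being removed looks roughly like the block in the linked prod example (the addresses and paths here follow that example file, not gisak's actual config):

```yaml
image_hosts:
  infra1:
    ip: 172.29.236.11
    container_vars:
      # deleting this glance_nfs_client block makes glance store images
      # locally inside its container instead of on an NFS mount
      glance_nfs_client:
        - server: "172.29.244.15"              # NFS server address
          remote_path: "/images"               # exported path on the server
          local_path: "/var/lib/glance/images" # mount point in the container
          type: "nfs"
          options: "_netdev,auto"
```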
gisakdone it, but still the same issue14:58
*** udesale has joined #openstack-ansible14:58
jamesdentonyou reran the playbook and it failed in the same place?14:59
gisakyes14:59
admin0hello \o15:02
admin0quick gnocchi question .. is the gnocchi stats also replicated in all the controllers ?15:02
odyssey4meadmin0 I know very little about gnocchi, but I would guess that depends on what the storage back-end is for it.15:05
openstackgerritMerged openstack/openstack-ansible-os_tempest master: Added task to list tempest tests  https://review.openstack.org/61902415:05
admin0hmm.. i used the default ansible install :)15:05
admin0just specified the infra hosts for it15:06
Miouge-gisak: can you dump the output in paste? If you removed the 'glance_nfs_client' it shouldn't try to mount that NFS15:06
admin0the issue I see is that not all UUIDs are on all servers .. so they are either on 2 or skipped on 1 .. or only on 1 ..15:06
admin0biggest issue is df -i  reaching 99% and this folder being the culprit15:06
admin0when df-i reaches 100%, the keepalive/vip ip suddenly disappears15:07
odyssey4mewell yes, it's likely that the default back-end storage is file, and that's not good unless you have NFS file storage or something15:08
odyssey4methe telemetry systems don't get much attention in OSA because of the smaller user-base... it'd be nice to see it grow and to have more sensible defaults15:08
odyssey4meit'd even be nice to see a panko playbook integrated15:08
admin0odyssey4me, file it is, and I do not see an option for NFS in the playbook .. means the common share needs to be set up manually in the container before the service can start15:12
odyssey4meadmin0 well, I think gnocchi has database/swift back-end options - you'll have to explore and understand the service better - I know nothing :p15:13
gisakwell strange, I have just installed nfs-kernel-server on controller node and mounted /images folder on 172.29.244.11, and still getting this error15:17
gisakmy /etc/exports looks kinda like this:>> "/images 172.29.244.11/24(rw,no_root_squash)"15:20
*** hamerins has joined #openstack-ansible15:23
Miouge-gisak: and a test mount works?15:24
*** udesale has quit IRC15:29
*** udesale has joined #openstack-ansible15:29
*** hamerins has quit IRC15:30
*** hamerins has joined #openstack-ansible15:31
gisakbut for that I need an NFS client on the glance container, right?15:35
Miouge-gisak: first I would try to mount it from somewhere else to check that the NFS server is ready to go15:39
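A quick command sketch for that check, run from any host on the storage network (the server address is the one from this conversation; assumes an NFS client, e.g. nfs-common, is installed):

```shell
# list what the server actually exports
showmount -e 172.29.244.11

# try a throwaway mount and a write, then clean up
mount -t nfs 172.29.244.11:/images /mnt
touch /mnt/.rw-test && rm /mnt/.rw-test
umount /mnt
```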
*** ahosam has joined #openstack-ansible15:39
*** cshen has quit IRC15:41
*** neith has joined #openstack-ansible15:42
mnaserodyssey4me: fyi our broken gate is due to https://github.com/ceph/ceph-ansible/issues/338815:46
mnaserhttps://github.com/ceph/ceph-ansible/pull/3391 is the fix15:46
jrosserhow is that happening? i thought we were on 3.1?15:48
odyssey4me3.2 on master15:48
jrosserdoh :(15:48
odyssey4me3.1 on stable/rocky15:48
odyssey4mewe could take the option of just using stable until a new stable15:49
mnaserdo we have a way of testing a pr in github15:49
mnaserjust as a wip to validate that patch15:49
odyssey4memnaser depends-on: <pr url>15:49
mnaserah but that wont check things out from ceph-ansible probably.. im thinking a change in ansible-role-requirements15:50
odyssey4meI dunno if that will actually work though, but it's worth a try15:50
odyssey4mea-r-r can do refspecs and such too15:50
mnaserhttps://github.com/ceph/ceph-ansible/commits/start-ceph-disk-non-container15:50
mnaserok that branch has the fix15:50
mnaserill push up a patch to test it15:50
openstackgerritMohammed Naser proposed openstack/openstack-ansible master: DNM: test start-ceph-disk-non-container  https://review.openstack.org/62094415:51
* logan- pushes up a patch to test the PR with depends-on 15:53
openstackgerritLogan V proposed openstack/openstack-ansible master: [DNM] Test ceph fix  https://review.openstack.org/62094615:53
logan-i think it should work15:53
openstackgerritLogan V proposed openstack/openstack-ansible master: [DNM] Test ceph fix  https://review.openstack.org/62094615:58
mnaseri don't think it will logan- because we don't have anything that includes ceph-ansible in our required-projects15:59
mnaserso zuul wont check it out15:59
gisakyes, i was able to mount from 2 different servers, including glance container, now i rerun the playbook15:59
gisaklets see what it gets )15:59
logan-a project is always considered required when there is a depends-on mnaser15:59
mnaserTIL15:59
logan-the only question is whether our role cloner is smart enough to link it from the zuul src path16:00
mnaseryep thats the #216:00
logan-i believe it is though16:00
odyssey4meI think so16:03
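The ansible-role-requirements.yml approach discussed above would look something like this (a sketch; the branch name is the one linked earlier in this conversation):

```yaml
# pin ceph-ansible to the upstream fix branch instead of a release tag,
# so the gate tests the proposed fix before it merges
- name: ceph-ansible
  scm: git
  src: https://github.com/ceph/ceph-ansible
  version: start-ceph-disk-non-container
```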
*** weezS has joined #openstack-ansible16:04
gisakMiouge-: thanks, it worked, the mistake was in the /etc/exports file, I should use 172.29.244.11/22 instead of 172.29.244.11/24  :)16:04
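The /22-vs-/24 fix can be sanity-checked with a little pure-bash prefix arithmetic (172.29.247.5 is a hypothetical client address used only for illustration; any storage-network address outside the /24 behaves the same way against the export rule):

```shell
# convert a dotted-quad IPv4 address to a 32-bit integer
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# network address (as an integer) of address $1 under prefix length $2
net() {
  echo $(( $(ip_to_int "$1") & ~((1 << (32 - $2)) - 1) & 0xFFFFFFFF ))
}

# under /22 the export and the client share a network, under /24 they don't
net 172.29.244.11 22
net 172.29.247.5 22    # same value as the line above
net 172.29.244.11 24
net 172.29.247.5 24    # different value, so mountd would reject this client
```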
*** mma_ has quit IRC16:09
openstackgerritMerged openstack/openstack-ansible stable/rocky: haproxy: remove repo_cache service  https://review.openstack.org/62033616:17
*** strattao has joined #openstack-ansible16:19
*** udesale has quit IRC16:20
*** gkadam_ has quit IRC16:20
openstackgerritMerged openstack/openstack-ansible-ops master: Allow the option to specify OSA git repo URL  https://review.openstack.org/62066216:25
*** hamerins has quit IRC16:34
*** hamerins has joined #openstack-ansible16:36
*** ahosam has quit IRC16:46
*** hamzy has quit IRC16:46
*** cshen has joined #openstack-ansible16:56
*** strattao has quit IRC17:03
*** shardy has quit IRC17:09
*** gyee has joined #openstack-ansible17:09
openstackgerritMerged openstack/openstack-ansible-os_tempest master: Enable support to pass override option to tempestconf  https://review.openstack.org/61998617:12
*** mma has joined #openstack-ansible17:18
gisakI saw recently there were some problems regarding this: http://paste.openstack.org/show/736429/17:18
gisakand it should be fixed in the branch, but unfortunately running the playbook failed ..17:19
*** mma has quit IRC17:22
*** DanyC has quit IRC17:23
*** DanyC has joined #openstack-ansible17:24
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: add short name for distros  https://review.openstack.org/62096717:28
*** DanyC has quit IRC17:29
*** hamerins has quit IRC17:29
*** hamerins has joined #openstack-ansible17:31
gisakif I add "default('rabbitmq_all')" to # Nova notifications, will it fix this?17:31
gisakin /etc/ansible/roles/os_ceilometer/templates/ceilometer.conf.j2 file17:31
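As a generic illustration of the Jinja pattern being asked about (hypothetical variable name, not the actual os_ceilometer template):

```jinja
{# default() supplies a fallback when the left-hand variable is
   undefined, the usual fix for "'dict object' has no attribute"
   style template failures #}
{{ notify_rabbitmq_group | default('rabbitmq_all') }}
```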
*** DanyC has joined #openstack-ansible17:42
*** DanyC has quit IRC17:46
*** dcdamien has quit IRC17:58
openstackgerritMerged openstack/openstack-ansible-lxc_hosts stable/rocky: ensure the mount unit is started after reboot  https://review.openstack.org/62039018:06
*** DanyC has joined #openstack-ansible18:12
*** hamzy has joined #openstack-ansible18:12
*** vnogin has joined #openstack-ansible18:15
*** DanyC has quit IRC18:16
*** vakuznet has quit IRC18:19
*** vnogin has quit IRC18:20
*** hamzaachi has joined #openstack-ansible18:29
*** hamzy has quit IRC18:31
*** brad[] has joined #openstack-ansible18:38
*** hamzy has joined #openstack-ansible18:52
*** chhagarw has quit IRC18:53
*** strattao has joined #openstack-ansible18:54
*** hamzy_ has joined #openstack-ansible18:57
*** hamzy has quit IRC18:58
*** hamzaachi has quit IRC18:59
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Add env options  https://review.openstack.org/62099119:01
jrossercloudnull: superb patch there :)19:05
jrosserUnderstand where that is coming from.....19:06
*** strattao has quit IRC19:06
*** chhagarw has joined #openstack-ansible19:06
*** electrofelix has quit IRC19:12
cloudnullwill have more in a min19:13
cloudnullworking with my folks trying to make that go19:13
*** francois has quit IRC19:19
*** francois has joined #openstack-ansible19:20
openstackgerritJames Denton proposed openstack/openstack-ansible-os_neutron master: Install haproxy on ovn-controller nodes  https://review.openstack.org/62099419:23
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Updated the connection plugins and add env options  https://review.openstack.org/62099119:30
cloudnull^ better commit message :)19:30
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Updated the connection plugins and add env options  https://review.openstack.org/62099119:31
*** gisak has quit IRC19:35
*** markvoelker has joined #openstack-ansible19:36
*** markvoelker has quit IRC19:40
admin0i see ovn :)19:41
admin0nice19:41
admin0jamesdenton, good book .. thanks for writing it19:42
jamesdentonOVN is coming along. Thanks for the feedback!19:42
*** chhagarw has quit IRC19:48
*** derks has joined #openstack-ansible19:55
derksI am having trouble with openstack-aio deployment. for a specific purpose we need to disable ssl on the public ports. for some reason setting 'openstack_external_ssl: false' and 'openstack_service_publicuri_proto: http' in /etc/openstack_deploy/user_variables.yml is not taking effect19:57
derkshaproxy still sets up ssl on the public ports19:57
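For reference, the two settings mentioned would sit in /etc/openstack_deploy/user_variables.yml like so (only what is stated above; whether haproxy needs its own additional override depends on the release in use):

```yaml
# serve the public endpoints over plain http instead of terminating TLS
openstack_external_ssl: false
openstack_service_publicuri_proto: http
```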
*** DanyC has joined #openstack-ansible19:57
*** DanyC has quit IRC19:58
*** DanyC has joined #openstack-ansible19:58
*** markvoelker has joined #openstack-ansible20:00
*** cshen has quit IRC20:00
*** cshen has joined #openstack-ansible20:08
openstackgerritGuilherme  Steinmuller Pimentel proposed openstack/openstack-ansible-os_placement master: [WIP] Create base files to install placement  https://review.openstack.org/61882020:25
*** DanyC_ has joined #openstack-ansible20:34
*** radeks has quit IRC20:34
*** radeks has joined #openstack-ansible20:34
*** DanyC has quit IRC20:34
*** DanyC_ has quit IRC20:34
*** mmercer has joined #openstack-ansible20:37
*** DanyC has joined #openstack-ansible20:42
*** DanyC has quit IRC20:46
*** ansmith has quit IRC20:51
*** olivierbourdon38 has quit IRC20:57
*** ivve has quit IRC21:01
*** hamerins has quit IRC21:05
nsmedsdoes openstack-ansible use the `is_admin_project` configuration option anywhere? run into an interesting error21:18
nsmedshttps://gist.github.com/nikosmeds/6bc29b00c05222f94fa62a3adfffff5a21:19
*** hamerins has joined #openstack-ansible21:34
jrossercloudnull: a fair few of these recently http://logs.openstack.org/53/617053/4/check/openstack-ansible-deploy-aio_lxc-ubuntu-xenial/aa73ff3/job-output.txt.gz#_2018-11-29_17_53_35_88086521:37
*** markvoelker has quit IRC21:38
jamesdentonit's been a tough week for the gate21:38
jrosserlooks like the apt package install just stops dead in its tracks part way through, with a couple of errors in the cache prep log to do with /dev/pts21:38
*** markvoelker has joined #openstack-ansible21:38
jrosserthe timestamp on the cache prep log file is several minutes earlier than when the retries run out, so it looks to have just stalled completely somehow21:39
jonhernsmeds: same error on re-run of the playbook? Looking at https://github.com/openstack/openstack-ansible-os_keystone/blob/stable/queens/tasks/keystone_service_setup.yml#L51 and https://github.com/openstack/openstack-ansible-plugins/blob/stable/queens/library/keystone I don't see "is_admin_project" being used at least21:40
*** markvoelker has quit IRC21:43
nsmedsjonher: I'll go attempt two runs in a row and confirm, be back in a bit21:45
*** hamzy_ has quit IRC21:47
*** radeks has quit IRC21:52
nsmedsjonher: yep, fails on consecutive runs. Reverting the one line change and playbook succeeds.21:52
nsmedso.O21:52
*** hamerins has quit IRC21:57
*** hamerins has joined #openstack-ansible21:57
jonherOK, I couldn't find any reference to "is_admin_project" around that playbook, but maybe someone else will21:59
nsmedsyeah me neither, but I'm just going to review playbook for now and hopefully find something22:00
nsmedsappreciate it <322:00
nsmedsit makes... no sense22:00
*** markvoelker has joined #openstack-ansible22:01
*** hamerins has quit IRC22:02
openstackgerritMerged openstack/openstack-ansible stable/rocky: Move ARA install to end of bootstrap  https://review.openstack.org/61778522:04
*** strattao has joined #openstack-ansible22:11
*** strattao has quit IRC22:12
*** rjgibson has joined #openstack-ansible22:22
*** klamath has quit IRC22:29
*** klamath has joined #openstack-ansible22:29
*** rjgibson has left #openstack-ansible22:30
*** cshen has quit IRC22:45
*** cshen has joined #openstack-ansible22:47
*** cshen has quit IRC22:52
*** weezS_ has joined #openstack-ansible22:52
*** weezS has quit IRC22:54
*** weezS_ is now known as weezS22:54
nsmedsjonher: found the underlying role and disabled `no_log`, got a more detailed error https://gist.github.com/nikosmeds/6bc29b00c05222f94fa62a3adfffff5a22:58
*** SimAloo has quit IRC22:59
*** cshen has joined #openstack-ansible23:04
openstackgerritMerged openstack/openstack-ansible-ops master: Update elk_6x for 6.5.x  https://review.openstack.org/61944823:06
*** jamesdenton has quit IRC23:23
*** dhellmann_ has joined #openstack-ansible23:26
*** dhellmann has quit IRC23:26
*** aedc has quit IRC23:28
*** dhellmann_ is now known as dhellmann23:30
*** jamesdenton has joined #openstack-ansible23:32
*** aedc has joined #openstack-ansible23:36

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!