Sunday, 2015-09-20

00:25 *** sdake has quit IRC
00:27 *** markvoelker has joined #openstack-ansible
00:31 *** markvoelker has quit IRC
00:35 *** Mudpuppy has joined #openstack-ansible
00:55 <cloudnull> arbrandes: did you get it to go?
00:55 *** arbrandes1 has joined #openstack-ansible
00:57 <cloudnull> Quick question, just reading through the scrollback: do you have the br-mgmt interface on all of your hosts?
00:58 *** arbrandes has quit IRC
00:58 <cloudnull> Additionally, if you have null entries for your containers, it looks like the cause was not having a management network defined for the IP addresses to be pulled from.
00:58 <cloudnull> looking at http://paste.openstack.org/show/469139/
00:58 <cloudnull> Line 23
00:59 <cloudnull> ip_from_q: "management"
00:59 <cloudnull> management is not defined in cidr_networks
01:01 <cloudnull> arbrandes1: ^^
01:02 <cloudnull> just looking at the pasted config, it seems like you need to s/container/management/ in the cidr_networks field and you'll be good to go.
01:03 <cloudnull> in terms of cleaning it up, for the sake of simplicity I'd probably run the lxc-containers-destroy.yml play.
01:03 <cloudnull> rm the openstack_inventory.json file
01:03 <cloudnull> fix the entry in the config
01:03 <cloudnull> and start the plays fresh
01:03 <cloudnull> host setup and base config should be all good.
01:04 <cloudnull> let me know if it doesn't go. I'm around for the most part this evening.
01:06 <cloudnull> FYI, for more available community help most of us are around Mon-Fri GMT+1 / GMT-6
01:07 <cloudnull> but I think the only thing you're missing is the misconfiguration between the "container" and "management" network names.
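A minimal sketch of the fix being described, assuming the kilo-era openstack_user_config.yml layout; the CIDR and bridge names are illustrative, not taken from the paste. The key under cidr_networks must match the ip_from_q value:

    cidr_networks:
      management: 172.29.236.0/22        # was "container"; renamed to match ip_from_q

    global_overrides:
      provider_networks:
        - network:
            container_bridge: "br-mgmt"
            ip_from_q: "management"      # container IPs are pulled from this queue
            type: "raw"
            group_binds:
              - all_containers
              - hosts

And the reset sequence, run from the playbooks directory:

    openstack-ansible lxc-containers-destroy.yml
    rm /etc/openstack_deploy/openstack_inventory.json
    # fix the cidr_networks entry in the config, then start fresh:
    openstack-ansible setup-hosts.yml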
02:06 *** arbrandes1 has quit IRC
02:12 *** jmccrory has quit IRC
02:23 *** arbrandes1 has joined #openstack-ansible
02:28 *** markvoelker has joined #openstack-ansible
02:32 *** markvoelker has quit IRC
02:53 *** abitha has joined #openstack-ansible
03:08 <prometheanfire> cloudnull: hi
03:08 <prometheanfire> cloudnull: thing tomorrow?
03:10 <cloudnull> hi.
03:10 <cloudnull> no, hanging with the wife tomorrow.
03:10 <cloudnull> we went wine tasting today. she wasn't feeling well before and is worse now
03:11 <cloudnull> so tomorrow will be a down day
03:13 <prometheanfire> ok
03:53 *** Mudpuppy has quit IRC
03:54 *** abitha has quit IRC
04:13 * prometheanfire finally figured out gerrit interdiffing
04:13 <prometheanfire> more reviews
04:23 *** Mudpuppy has joined #openstack-ansible
04:27 *** Mudpuppy has quit IRC
04:29 <prometheanfire> cloudnull: has any of the automated upgrade stuff used your patch (the split one or the other one)?
04:29 *** markvoelker has joined #openstack-ansible
04:29 <cloudnull> as of today, yes
04:30 <cloudnull> IE the OSA upgrade test failed on OneOffTest-10.1.14-11.2.1-saved2-ref224137
04:30 <cloudnull> ref224137 == https://review.openstack.org/#/c/224137/
04:32 <prometheanfire> nice
04:32 <prometheanfire> cloudnull: I'm getting rid of articles in your docs for the split, btw
04:32 <prometheanfire> might want to have docs look at it though
04:32 <cloudnull> huh?
04:32 <prometheanfire> also, s/that//
04:33 <prometheanfire> a/and/the
04:33 <prometheanfire> bah
04:33 <prometheanfire> a/an/the
04:33 <cloudnull> okiedokie
04:33 *** markvoelker has quit IRC
04:34 <prometheanfire> ya, more pedantry
04:35 <cloudnull> sounds good to me.
04:36 <cloudnull> i told the docs people that they could write them or they'd have to deal with the garbage I create. :)
04:36 <cloudnull> and now we have the garbage I created, so you see how well that worked out :)
04:36 <cloudnull> cc Sam-I-Am ^^
04:37 <prometheanfire> yep
04:57 <cloudnull> prometheanfire: have you seen this? https://bugs.launchpad.net/openstack-ansible/+bug/1497669 <- this issue must be a thing in all of kilo, unless rpc is resolving the python-ldap pkg elsewhere
04:57 <openstack> Launchpad bug 1497669 in openstack-ansible trunk "python-ldap is missing from the keystone containers" [High,Triaged] - Assigned to Kevin Carter (kevin-carter)
04:59 <prometheanfire> huh, I've not seen it
04:59 <prometheanfire> I think I remember it being mentioned, but that could have been the bug triage meeting
05:02 *** sdake has joined #openstack-ansible
05:03 <prometheanfire> cat is dreaming (doesn't have full cutoff from REM, still acts out)
05:04 <cloudnull> nice
05:04 <cloudnull> when felipe dreams he's funny
05:05 <cloudnull> prometheanfire: did you see my PR in channel?
05:05 <cloudnull> IE https://review.openstack.org/#/c/225469/
05:06 <prometheanfire> I'll look
05:06 <prometheanfire> I didn't see it if it was in the last day
05:06 <prometheanfire> is the bot dead?
05:06 <cloudnull> it seems so
05:06 <prometheanfire> neat
05:07 *** sdake_ has joined #openstack-ansible
05:11 <cloudnull> master is not happy right now.
05:11 <cloudnull> this is the source of suckage > openstack_auth.User.keystone_user_id: (mysql.E001) MySQL does not allow unique CharFields to have a max_length > 255
05:11 *** sdake has quit IRC
05:11 <cloudnull> in horizon
05:14 <prometheanfire> neat
05:18 *** sdake_ has quit IRC
05:20 *** sdake has joined #openstack-ansible
05:22 *** sdake has quit IRC
05:23 *** sdake has joined #openstack-ansible
05:24 *** sdake has quit IRC
05:24 *** sdake_ has joined #openstack-ansible
05:37 <prometheanfire> cloudnull: you're welcome
05:58 <cloudnull> tyvm
05:58 <cloudnull> now to see if the docs people agree / will go fix it :)
05:59 <cloudnull> cc Sam-I-Am ^^ re: https://review.openstack.org/#/c/224137/
06:02 <prometheanfire> :P
06:25 *** elo has joined #openstack-ansible
06:30 *** markvoelker has joined #openstack-ansible
06:34 *** markvoelker has quit IRC
06:37 *** elo has quit IRC
06:59 *** sdake_ has quit IRC
08:01 *** arbrandes1 has quit IRC
08:08 *** arbrandes has joined #openstack-ansible
08:30 *** markvoelker has joined #openstack-ansible
08:35 *** markvoelker has quit IRC
08:50 *** pellaeon has joined #openstack-ansible
08:53 <pellaeon> Hi, I previously had a single network node (an older version of OSAD used this, I remember). Now I want to use infra1-3 as network nodes, as suggested in the example openstack_user_config.yml. How do I do this?
08:54 <pellaeon> simply changing openstack_user_config by listing infra1-3 under network_hosts doesn't work
08:55 <pellaeon> because openstack_inventory.json still contains the old network host
08:57 <pellaeon> but deleting openstack_inventory.json will cause it to be regenerated, and it will build new containers instead of just reusing the old ones
08:58 <pellaeon> I made some manual modifications to some containers, so I don't want to lose them
09:49 *** agireud has quit IRC
09:50 *** agireud has joined #openstack-ansible
10:31 *** markvoelker has joined #openstack-ansible
10:35 *** markvoelker has quit IRC
10:59 *** ashishjain has joined #openstack-ansible
10:59 <ashishjain> hello
10:59 <ashishjain> Need some help with osad
11:02 <ashishjain> while running openstack-ansible haproxy-install.yml -vvv
11:02 <ashishjain> I get the following: skipping: no hosts matched
11:02 <ashishjain> any clues?
11:32 *** markvoelker has joined #openstack-ansible
11:37 *** markvoelker has quit IRC
11:46 *** shoutm has joined #openstack-ansible
12:13 *** gparaskevas has joined #openstack-ansible
13:30 *** arbrandes has quit IRC
13:33 *** markvoelker has joined #openstack-ansible
13:37 *** markvoelker has quit IRC
13:42 <cloudnull> ashishjain: to have haproxy work you need to define a host where it will live.
13:42 <cloudnull> something similar to https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.aio#L127-L129
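A sketch of the stanza the linked AIO example defines, assuming the AIO's host name and management address:

    haproxy_hosts:
      aio1:
        ip: 172.29.236.100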
13:43 <cloudnull> pellaeon: if you want to remove a host from inventory, there is a script called inventory-manage.py which can pull that one thing out
13:44 <ashishjain> cloudnull: Thanks for this.
13:44 <ashishjain> cloudnull: Can you please help me with one more thing?
13:44 <cloudnull> usage is: inventory-manage.py -f <path-to-inventory-file> -l
13:44 <cloudnull> to remove, it's the same:
13:44 <cloudnull> usage is: inventory-manage.py -f <path-to-inventory-file> -r <hostname-to-remove>
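Applied to pellaeon's earlier question, that workflow might look like this. The script path assumes a kilo-era checkout, and net1 is a hypothetical name for the old network host; removing only that host leaves the other hosts' container entries, and therefore the containers themselves, untouched:

    cd /opt/os-ansible-deployment/scripts
    # list everything the generated inventory currently holds
    ./inventory-manage.py -f /etc/openstack_deploy/openstack_inventory.json -l
    # remove only the old network host from the inventory
    ./inventory-manage.py -f /etc/openstack_deploy/openstack_inventory.json -r net1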
13:44 <ashishjain> cloudnull: I have been trying osad for the last 7 days, initially with juno and then with kilo, without any success :(
13:45 <cloudnull> ashishjain: sure, what's up?
13:45 <ashishjain> I will paste my openstack_user_config.yml
13:45 <ashishjain> Can you please validate if this is correct?
13:45 <cloudnull> did you see my comments about your user config last night?
13:45 <cloudnull> i think it was you ...
13:45 <cloudnull> i don't remember. :)
13:47 <ashishjain> cloudnull: No, I don't think it was me
13:47 <ashishjain> http://paste.openstack.org/show/471145/
13:47 <ashishjain> cloudnull: Not sure if you have changed your name. But I did get one answer yesterday regarding the provider networks being part of global_overrides
13:49 <ashishjain> ansible01 is my deployment as well as target host
13:50 *** arbrandes has joined #openstack-ansible
13:51 <cloudnull> ah yes
13:51 <cloudnull> one sec
13:52 <cloudnull> this is what i said last night, looking back at the log:
13:53 <cloudnull> <cloudnull> arbrandes: did you get it to go?
13:53 <cloudnull> <cloudnull> Quick question, just reading through the scrollback: do you have the br-mgmt interface on all of your hosts?
13:53 <cloudnull> <cloudnull> Additionally, if you have null entries for your containers, it looks like the cause was not having a management network defined for the IP addresses to be pulled from.
13:53 <cloudnull> <cloudnull> looking at http://paste.openstack.org/show/469139/
13:53 <cloudnull> <cloudnull> Line 23
13:53 <cloudnull> <cloudnull> ip_from_q: "management"
13:53 <cloudnull> <cloudnull> management is not defined in cidr_networks
13:53 <cloudnull> <cloudnull> arbrandes1: ^^
13:53 <cloudnull> <cloudnull> just looking at the pasted config, it seems like you need to s/container/management/ in the cidr_networks field and you'll be good to go.
13:53 <cloudnull> <cloudnull> in terms of cleaning it up, for the sake of simplicity I'd probably run the lxc-containers-destroy.yml play.
13:53 <cloudnull> <cloudnull> rm the openstack_inventory.json file
13:53 <cloudnull> <cloudnull> fix the entry in the config
13:53 <cloudnull> <cloudnull> and start the plays fresh
13:53 <cloudnull> <cloudnull> host setup and base config should be all good.
13:53 <cloudnull> <cloudnull> let me know if it doesn't go. I'm around for the most part this evening.
13:53 <cloudnull> <cloudnull> FYI, for more available community help most of us are around Mon-Fri GMT+1 / GMT-6
13:53 <cloudnull> <cloudnull> but I think the only thing you're missing is the misconfiguration between the "container" and "management" network names.
13:55 <cloudnull> ashishjain: you were also working on http://paste.openstack.org/show/469486/, right?
13:56 <ashishjain> cloudnull: ya, looks like my config, but somehow I missed your chat snippets yesterday. I use a web-based chat so I usually don't have chat logs :(
13:56 <ashishjain> https://webchat.freenode.net/
13:56 <ashishjain> this is what I use
13:57 <ashishjain> currently my latest config is http://paste.openstack.org/show/471145/
13:57 <ashishjain> btw, where is the lxc cache downloaded to?
13:58 <ashishjain> Can I store it permanently on my system?
13:58 <cloudnull> it's downloaded to /var/cache/lxc
13:58 <cloudnull> so are you deploying juno?
13:59 <cloudnull> that user config looks like the example from juno.
14:00 <ashishjain> cloudnull: no, it is kilo.
14:00 <cloudnull> also, do you intend to deploy both flat and vlan networks for neutron?
14:01 <ashishjain> cloudnull: I would deploy vlan .... but I thought we needed both :(
14:01 <cloudnull> you can have both
14:01 <cloudnull> but it's not required
14:01 <ashishjain> cloudnull: okay
14:01 <ashishjain> cloudnull: Is my kilo config incorrect?
14:01 <cloudnull> to have flat networks function you'll need to remove one entry
14:02 <cloudnull> rather, change:
14:02 <cloudnull> host_bind_override: "eth12"
14:02 <ashishjain> git branch * kilo
14:02 <ashishjain> when I run the git branch command I get the result as kilo
14:03 <cloudnull> lines 37-44 I'd remove
14:03 <ashishjain> cloudnull: done
14:03 <ashishjain> I removed the lines
14:04 <ashishjain> now I've got only vlan
14:05 <cloudnull> http://cdn.pasteraw.com/bchpal2z88b5znjgslmx41hsve5xc69 <- a simple config from an AIO i did today.
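A sketch of the vlan-only provider network entry that remains after dropping the flat section; the interface name and VLAN range are illustrative, and host_bind_override only applies to flat networks, so it is gone here:

    - network:
        container_bridge: "br-vlan"
        container_interface: "eth11"
        type: "vlan"
        range: "1:1000"                 # illustrative VLAN ID range
        net_name: "vlan"
        group_binds:
          - neutron_linuxbridge_agent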
14:06 <cloudnull> so on the git branch, are you running from the kilo head or a tag?
14:06 <cloudnull> and where are you stuck now?
14:08 <ashishjain> cloudnull: regarding your config, it seems to be a single node
14:08 <ashishjain> cloudnull: But what about a multi-node config?
14:09 <ashishjain> cloudnull: how is haproxy_hosts different from internal_lb_vip_address?
14:10 <ashishjain> cloudnull: as per my understanding, haproxy_hosts is the one where haproxy will be installed
14:11 <ashishjain> what about internal_lb_vip_address?
14:12 <ashishjain> will it be the same as where haproxy is installed?
14:14 <ashishjain> cloudnull: I am not sure if I have confused you, but my question is: will haproxy be installed as an lxc container?
14:14 <cloudnull> ashishjain: that config is from a single node, that's true, but extending it for multi-node is as simple as adding more nodes.
14:15 <cloudnull> haproxy runs on the host
14:15 <cloudnull> not in a container
14:15 <ashishjain> okay, so I think the snippet I pasted is still wrong
14:15 <cloudnull> in my case the internal lb vip address is 172.29.236.100, which is on the container management network
14:15 <ashishjain> haproxy_hosts:   ansible01:     ip: 192.168.57.100
14:16 <ashishjain> because I do not have any host with ip 192.168.57.100
14:16 <ashishjain> instead I have got hosts with ips 192.168.57.11, 192.168.57.12 and 192.168.57.13
14:16 <cloudnull> so you can alias the address to an existing network interface if you need.
14:17 <cloudnull> or just change it
14:17 <ashishjain> cloudnull: So I think the correct config will be to have internal_lb_vip_address: 192.168.57.11
14:17 <ashishjain> and haproxy_hosts:   ansible01:     ip: 192.168.57.11
14:17 <ashishjain> cloudnull: Is this correct?
14:17 <cloudnull> yes, if you don't have 192.168.57.100 anywhere
14:17 <ashishjain> cloudnull: aah, that was another mistake.
14:18 <cloudnull> osad won't mangle your network interfaces. it assumes things on the host are already configured
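A sketch of the two options just described, using the addresses from this exchange:

    # Option 1: point the VIP at an address the haproxy host already has
    global_overrides:
      internal_lb_vip_address: 192.168.57.11

    haproxy_hosts:
      ansible01:
        ip: 192.168.57.11

    # Option 2 (shell): keep 192.168.57.100 as a dedicated VIP by aliasing it
    # onto an existing interface on the haproxy host (br-mgmt is an assumption):
    #   ip addr add 192.168.57.100/24 dev br-mgmt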
14:18 <ashishjain> cloudnull: I was stuck at setup-infrastructure.yml, where my galera container was not able to pip install MySQL-python, pycrypto and memcached
14:19 <ashishjain> I finally realised that it was trying to connect to a webserver at 192.168.57.100:8181
14:19 <ashishjain> which definitely was not there
14:19 <cloudnull> ah, that'll do it
14:19 <ashishjain> I think I will start afresh again... this is probably the 6th or 7th time :(
14:20 <cloudnull> 8th time is a charm :)
14:20 <ashishjain> cloudnull: can you plz validate my config one more time?
14:20 <cloudnull> sure.
14:20 <ashishjain> cloudnull: thanks a ton
14:21 <cloudnull> you can remove lines 2 and 10 from your config
14:21 <cloudnull> environment_version is no longer used
14:21 <cloudnull> juno only
14:22 <cloudnull> and as you've said, 192.168.57.100 is no longer present.
14:22 <cloudnull> your cidr networks are only using a /24; that won't give you a lot of room to grow.
14:23 <cloudnull> if it's a small deployment that should be fine
14:23 <ashishjain> cloudnull: just a test deployment for now
14:23 <ashishjain> ya, made the edits as you suggested
14:23 <cloudnull> but if it's something that you're looking to grow over time, I'd recommend using a /22 or larger.
14:23 <ashishjain> here is the final config
14:23 <cloudnull> kk
14:23 <ashishjain> it is on my laptop now, but soon I want to move onto industry-grade servers
14:24 <ashishjain> http://paste.openstack.org/show/471186/
14:24 <ashishjain> here is my new config
14:25 <cloudnull> just to be sure, you have br-vlan, br-vxlan, and br-mgmt on all of your hosts already?
14:25 <ashishjain> cloudnull: yes. I will just paste the result from one of my hosts
14:27 <ashishjain> cloudnull: http://paste.openstack.org/show/471207/
14:30 <cloudnull> ok. are you not deploying cinder?
14:30 <cloudnull> or do you want to?
14:31 <ashishjain> cloudnull: not deploying it now, will do it later.
14:32 <ashishjain> cloudnull: thought of having something running first and then updating the config to include cinder
14:32 <ashishjain> cloudnull: that was the original plan.. but if you advise it, then I will configure cinder and swift as well
14:33 <ashishjain> cloudnull: currently I have not configured ceilometer either
14:33 <cloudnull> http://cdn.pasteraw.com/536s873abyyyzoxfv4snhdsj4ib0f79
14:33 <cloudnull> I made one edit
14:33 <cloudnull> infra_hosts / os-infra_hosts
14:34 <cloudnull> the old entry would've worked, but os-infra is more specific
14:34 <cloudnull> i also included commented-out sections for when you decide to deploy cinder
14:37 <ashishjain> okay, got it
14:37 *** fawadkhaliq has joined #openstack-ansible
14:38 <ashishjain> cloudnull: why is storage-infra_hosts different from storage_hosts?
14:38 <cloudnull> storage infra runs the api
14:38 <cloudnull> storage hosts is where the volume services will run
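A sketch of that split, with a hypothetical dedicated storage host; the LVM stanza follows the shape of the kilo example and assumes a cinder-volumes volume group already exists on the host:

    storage-infra_hosts:             # cinder API containers live here
      infra1:
        ip: 192.168.57.11

    storage_hosts:                   # cinder-volume runs here, next to the disks
      storage1:                      # hypothetical host
        ip: 192.168.57.21
        container_vars:
          cinder_backends:
            limit_container_types: cinder_volume
            lvm:
              volume_backend_name: LVM_iSCSI
              volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
              volume_group: cinder-volumes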
14:38 <ashishjain> okay
14:40 <ashishjain> okay
14:40 <ashishjain> btw, just wanted to tell you the final plan is to use the ansible api to deploy osad. Do you think it is viable?
14:41 <cloudnull> through tower?
14:41 <ashishjain> no, the ansible python api
14:42 <cloudnull> i've never tried.
14:42 <ashishjain> http://docs.ansible.com/ansible/developing_api.html
14:42 <cloudnull> i'd be interested if it works. I'd assume it'd go.
14:42 <ashishjain> yes, hopefully it may work
14:43 <cloudnull> we had issues with tower in the past, though i've not tried again for some time.
14:43 <cloudnull> it didn't handle complex vars very well.
14:43 <cloudnull> but that was ansible 1.5.x
14:43 <cloudnull> so i'm sure it's improved,
14:43 <cloudnull> i've just not given it another go.
14:43 * cloudnull waiting on ansible 2
14:44 <ashishjain> ya, hopefully it may work
14:44 <cloudnull> i don't see why it wouldn't
14:45 <ashishjain> yes, you are correct.
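For reference, the linked Ansible 1.x API docs center on the Runner class; a minimal sketch along those lines, wrapped in a shell heredoc here (this is the docs' ping example, not an OSAD deployment, which would additionally need the PlayBook class and callbacks):

    python <<'EOF'
    # run the "ping" module against all hosts in the default inventory
    import ansible.runner

    results = ansible.runner.Runner(
        module_name='ping',
        module_args='',
        pattern='all',
        forks=10,
    ).run()
    print(results)
    EOF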
14:45 <ashishjain> Going back to osad: when I run lxc-ls I see a lot of containers.. to start afresh, what shall I do?
14:45 <ashishjain> I ran the destroy yml also, but I still see the same set of containers
14:46 <ashishjain> shall I use lxc-destroy to remove them one by one?
14:46 <cloudnull> from the playbooks directory.
14:46 <ashishjain> yes, I ran it from the playbooks directory
14:47 <cloudnull> run: ansible hosts -m shell -a 'for i in $(lxc-ls); do lxc-destroy -fn $i; done'
14:47 <ashishjain> i have messed up, I know... i initially deleted everything from /var/lib/lxc manually
14:47 <cloudnull> then run: ansible hosts -m shell -a 'rm -rf /openstack'
14:48 <cloudnull> then delete /etc/openstack_deploy/openstack_inventory.json
14:48 <cloudnull> and start the deployment again
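Pulled together, the reset sequence just described looks like this (run from the playbooks directory; destructive, as it wipes every container and all container data under /openstack):

    # destroy every container on every host
    ansible hosts -m shell -a 'for i in $(lxc-ls); do lxc-destroy -fn $i; done'
    # remove the per-container data and bind mounts
    ansible hosts -m shell -a 'rm -rf /openstack'
    # drop the generated inventory so it is rebuilt cleanly
    rm /etc/openstack_deploy/openstack_inventory.json
    # then start the deployment again, e.g.:
    openstack-ansible setup-everything.yml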
14:48 *** Mudpuppy has joined #openstack-ansible
14:52 <cloudnull> i've got to run, bbl
14:54 <ashishjain> cloudnull: all set to start afresh
14:55 <ashishjain> cloudnull: thanks for your help and time, catch you later
15:06 *** galstrom_zzz is now known as galstrom
15:10 *** galstrom is now known as galstrom_zzz
15:26 *** fawadkhaliq has quit IRC
15:33 *** markvoelker has joined #openstack-ansible
15:38 *** markvoelker has quit IRC
16:43 *** arbrandes has quit IRC
16:57 *** fawadkhaliq has joined #openstack-ansible
17:02 *** arbrandes has joined #openstack-ansible
17:11 *** fawadkhaliq has quit IRC
17:14 *** fawadkhaliq has joined #openstack-ansible
17:21 *** arbrandes has quit IRC
17:21 *** cloudtrainme has joined #openstack-ansible
17:26 *** cloudtrainme has quit IRC
17:34 *** markvoelker has joined #openstack-ansible
17:37 <ashishjain> cloudnull: you there?
17:38 <ashishjain> I am hitting another issue now: "sg: [ALERT] 262/230555 (20527) : Starting frontend keystone_service-front: cannot bind socket"
17:38 *** arbrandes has joined #openstack-ansible
17:38 *** markvoelker has quit IRC
17:40 <ashishjain> I suspect some port is already in use
17:43 <cloudnull> Is keystone running on your haproxy host and not in a container?
17:44 <cloudnull> Have you rerun the haproxy role / restarted it?
17:44 *** arbrandes has quit IRC
17:44 <ashishjain> cloudnull: I restarted the process with the -vvv flag and somehow it worked
17:45 <ashishjain> "openstack-ansible haproxy-install.yml -vvv"
17:46 <ashishjain> is there a way to test whether the run was successful, even before I go on to setup-infrastructure.yml?
17:48 <cloudnull> You can verify haproxy with haproxy-stat -s /var/run/haproxy.sock
17:48 <cloudnull> Run lxc-ls -f
17:48 <cloudnull> To see your containers and the assigned IPs
17:49 <evrardjp> hello everyone
17:50 <ashishjain> lxc-ls -f gives all my containers :)
17:50 <ashishjain> but I do not have haproxy-stat installed on my host
17:51 <cloudnull> I think that's the command. If you installed with the haproxy role it should be there.
17:53 <ashishjain> "haproxy -v" gives: HA-Proxy version 1.4.24 2013/06/17
17:53 <ashishjain> haproxy -v gives me the version
17:54 <ashishjain> but I indeed do not have haproxy-stat
17:54 <ashishjain> or /var/run/haproxy.sock
17:54 <ashishjain> and I have not installed with the haproxy role
17:54 <ashishjain> I used "openstack-ansible haproxy-install.yml -vvv"
17:55 <cloudnull> That'll do it.
17:55 <ashishjain> so this would have used the root user to install haproxy, probably
17:55 <cloudnull> Yes.
17:55 <ashishjain> okay
17:55 <cloudnull> haproxy, then press tab a free time.
17:55 <cloudnull> *Few
17:56 <ashishjain> did that; there is no such command as haproxy-stat
17:56 <cloudnull> I'm mobile right now.
17:56 <ashishjain> aah, okay
17:58 <cloudnull> I'm not at my computer, so I may be saying the wrong command.
17:59 <ashishjain> cloudnull: okay, got it
18:00 <ashishjain> cloudnull: I think I can probably start the infrastructure setup
18:00 <ashishjain> btw, when I look at /etc/haproxy/conf.d
18:00 <ashishjain> I can see all the ceilometer_api, heat, nova, keystone etc. config files.
18:01 <ashishjain> so it looks like it is set up
18:02 <ashishjain> cloudnull: one non-osad question. What client do you use to log in to irc that keeps you logged in 24x7 and in sync whether on computer or mobile?
18:05 *** arbrandes has joined #openstack-ansible
18:07 <evrardjp> ashishjain: to see haproxy info you can do
18:07 <evrardjp> hatop -s /var/run/haproxy.stat
18:07 <evrardjp> you'll see your backends/frontends and their status
18:08 <evrardjp> keep in mind that haproxy working doesn't mean your openstack is fully working ;) it's just the load balancer in front of it
18:08 <ashishjain> evrardjp: I get this: insufficient permissions for socket path /var/run/haproxy.stat
18:08 <ashishjain> I am logged in as root
18:08 <evrardjp> that's weird
18:08 <evrardjp> what are you deploying? kilo?
18:08 <ashishjain> kilo
18:09 <evrardjp> could you check /etc/haproxy/haproxy.cfg?
18:09 <evrardjp> you should have this:
18:09 <evrardjp> stats socket /var/run/haproxy.stat level admin mode 600
18:10 <evrardjp> level admin is the part required to send administrative commands to your stats socket
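A sketch of that check; note that hatop is packaged separately from haproxy on Ubuntu, which would explain the missing command above:

    # the global section of /etc/haproxy/haproxy.cfg should contain:
    #   stats socket /var/run/haproxy.stat level admin mode 600
    grep 'stats socket' /etc/haproxy/haproxy.cfg

    # install and run the interactive viewer against that socket
    apt-get install hatop
    hatop -s /var/run/haproxy.stat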
18:10 <ashishjain> there is no haproxy.stat in /var/run
18:10 <evrardjp> mmm
18:10 <evrardjp> that's not normal ;)
18:10 <evrardjp> haproxy not running?
18:10 <ashishjain> but then the error is misleading
18:10 <evrardjp> ls /var/run/ha* ?
18:11 <ashishjain> ls /var/run/ha* gives: ls: cannot access /var/run/ha*: No such file or directory
18:11 <ashishjain> I think I need to rerun haproxy-install.yml
18:11 <evrardjp> file /var/run/ ?
18:12 <evrardjp> just to make sure /var/run exists ;)
18:12 <evrardjp> evrardjp: haproxy not running?
18:14 <ashishjain> evrardjp: I have rerun the yml
18:14 <ashishjain> and /var/run exists
18:14 <ashishjain> :)
18:14 <evrardjp> ps aux | grep haproxy ?
18:14 <evrardjp> you should see a long list of the config files loaded
18:14 <ashishjain> evrardjp: yml execution finished
18:15 <ashishjain> but haproxy is not started
18:15 <ashishjain> ps aux | grep haproxy does not return anything
18:15 <ashishjain> ansible01                  : ok=14   changed=0    unreachable=0    failed=0
18:16 <ashishjain> this is the result of "openstack-ansible haproxy-install.yml"
18:16 <evrardjp> you should check why haproxy is not starting
18:16 <ashishjain> http://paste.openstack.org/show/471461/
18:17 <evrardjp> (does it have the IP it tries to bind on?)
18:17 <ashishjain> ya, the IP of haproxy is the same as the IP of the host it is being installed on
18:17 <ashishjain> 192.168.57.11
18:18 <evrardjp> could you check if everything is fine in your generated service configs?
18:18 <evrardjp> in /etc/haproxy/conf.d/
18:18 <evrardjp> but always check your logs first ;)
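A short diagnostic pass along those lines; haproxy accepts multiple -f flags, so the generated files in conf.d can be checked together, and the log locations assume Ubuntu 14.04 with haproxy logging via rsyslog:

    # syntax-check the main config plus every generated service config
    haproxy -c -f /etc/haproxy/haproxy.cfg $(ls /etc/haproxy/conf.d/* | sed 's/^/-f /')
    # look for bind lines pointing at addresses the host does not own
    grep -r 'bind ' /etc/haproxy/conf.d/
    # check the logs
    tail -n 50 /var/log/haproxy.log /var/log/syslog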
18:19 <evrardjp> I must go for today
18:19 <evrardjp> don't hesitate to ping me tomorrow
18:19 <ashishjain> evrardjp: since I started using osad I feel devoid of logs
18:19 <ashishjain> I do not see logs anywhere
18:19 <ashishjain> where are the logs for this?
18:19 <ashishjain> and /etc/haproxy/conf.d has got all the files
18:20 <ashishjain> evrardjp: sure, I will ping you, thanks
18:20 *** abitha has joined #openstack-ansible
18:20 <ashishjain> but it has been a nightmare ;( using osad
18:21 *** abitha has quit IRC
18:23 <evrardjp> don't hesitate to tell us what you want to improve
18:23 *** fawadkhaliq has quit IRC
18:24 <ashishjain> [ALERT] 262/235358 (27141) : Starting frontend keystone_service-front: cannot bind socket
18:24 <ashishjain> this is the error when I try to start haproxy manually with /etc/init.d/haproxy start
18:28 <ashishjain> I suspect it is port 5000, which is probably required by "keystone_service-front: cannot bind socket"
18:28 <ashishjain> but somehow I am not sure
18:28 <ashishjain> because that port is absolutely free
18:29 <cloudnull> ashishjain: nuke the haproxy configs and rerun the play.
18:29 <ashishjain> heh
18:30 <cloudnull> Maybe you have a duplicate.
18:30 <cloudnull> You could grep through the configs
18:30 <ashishjain> cloudnull: How do I nuke it.. manually delete it?
18:31 <cloudnull> rm -rf /etc/haproxy ; openstack-ansible haproxy-install.yml
18:33 <ashishjain> I get the same error again: msg: [ALERT] 263/000314 (27812) : Starting frontend keystone_service-front: cannot bind socket
18:33 <ashishjain> and that is the reason haproxy is not starting
18:34 <cloudnull> Look through the configs. There has to be some duplication somewhere? Or ports 5000/35357 are in use.
18:35 <ashishjain> cloudnull: bind 192.168.1.1:5000
18:36 <ashishjain> cloudnull: this is the external_lb_vip address I have given, 192.168.1.1
18:36 <ashishjain> it only shows up in: grep 192.168.1.1 * -> keystone_service:bind 192.168.1.1:5000
18:36 <ashishjain> keystone_service
18:37 <ashishjain> the keystone_service file in /etc/haproxy/conf.d
18:37 <ashishjain> this ip does not exist... in all the other files in "/etc/haproxy/conf.d" you have got *:<port>
18:37 <ashishjain> why is that?
18:39 <ashishjain> once I change the entry from 192.168.1.1:5000 to *:5000, I am able to start haproxy
18:42 <cloudnull> my haproxy configs look like this
18:42 <cloudnull> http://cdn.pasteraw.com/zyyf1owjuzvwq7doo6tc478gbwi82o < internal
18:42 <cloudnull> sorry ^ external
18:42 <cloudnull> http://cdn.pasteraw.com/373nncha68n2q06mtfamk1fiah3q0lm < internal
cloudnullIE 192.168.1.118:44
ashishjainno 192.168.1.1 does not exist18:44
ashishjainmoreover can you tell the name of the file which has the content in the link18:45
ashishjainI have got only 2 files in /etc/haproxy/conf.d  one is  keystone_service and other is keystone_admin18:45
ashishjainand I do not have any section called "frontend keystone_internal-front" in  any of these files instead I have got  frontend keystone_service-front18:46
cloudnulli have http://cdn.pasteraw.com/8obh298thqur9nl3jri0e224p1r1h2y18:47
ashishjainwhich is what you pasted as part of your first link "http://cdn.pasteraw.com/zyyf1owjuzvwq7doo6tc478gbwi82o"18:47
cloudnullyes18:48
ashishjainI do not have keystone_internal18:48
ashishjainhttp://paste.openstack.org/show/471493/18:48
ashishjainbut in my case it is keystone_Service which is leading to failure as 192.168.1.1 does not exist18:49
ashishjainin your case does this ip exist "104.130.175.168"18:49
cloudnullyes18:50
cloudnullit does.18:50
cloudnullboth the internal and external lb vip addresses need to be on the host running haproxy18:50
cloudnullthey're the interfaces haproxy will use to route all your traffic18:50
ashishjainokay so I am screwed again :0)18:51
ashishjainwhat shall I do to correct this now18:51
ashishjainI shall give the host ip in that case to this external_lb_vip18:52
ashishjaineth1 in my case18:52
ashishjaindo i need to rerun all the playbooks?'18:52
ashishjaincloudnull: this is the network configuration of my host where I got ha proxy installed18:54
ashishjainhttp://paste.openstack.org/show/471501/18:54
ashishjainhere eth1 is basically how ssh into this VM which we can probably say as the external ip address18:54
ashishjainso does it mean I need to give "external_lb_vip_address: 192.168.56.81"18:55
cloudnullyes correct the external vip address setting18:58
cloudnullto use 192.168.56.8118:58
cloudnullbecause thats the "external" network interface for your load balancer.18:58
cloudnullthen simply run setup-everything.ylm18:58
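A sketch of the verification and the fix, using the addresses from this exchange:

    # confirm the address actually exists on the haproxy host
    ip addr show | grep 192.168.56.81

    # in /etc/openstack_deploy/openstack_user_config.yml:
    #   global_overrides:
    #     external_lb_vip_address: 192.168.56.81    # was 192.168.1.1
    #     internal_lb_vip_address: 192.168.57.11

    # then rerun against the corrected config
    openstack-ansible setup-everything.yml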
18:59 <ashishjain> cloudnull: what I am doing now is running /opt/os-ansible-deployment/playbooks/inventory/dynamic_inventory.py
18:59 <ashishjain> this will generate the new openstack_inventory.json for me
19:00 <ashishjain> and then I will just have to run openstack-ansible haproxy-install.yml
19:00 <ashishjain> I think this should suffice
19:02 <ashishjain> cloudnull: looks like that worked
19:03 <ashishjain> even the command hatop -s /var/run/haproxy.stat is giving a lot of output now
19:03 <ashishjain> :)
19:06 <ashishjain> cloudnull: I think I can start the infrastructure yml now
19:06 *** sdake has joined #openstack-ansible
19:10 <cloudnull> nice!
19:20 <cloudnull> ok, i'm off today. ashishjain, good luck, i hope it all works out.
19:20 <ashishjain> cloudnull: thanks a lot for your time and help.
19:21 <ashishjain> cloudnull: thanks for your wishes, I really need them :)
19:22 <cloudnull> It'll go. We've been running it in prod for some time. So it's just a matter of getting the setup right for your env.
19:22 <cloudnull> I'll be back online tomorrow.
19:22 <cloudnull> Take care.
19:35 *** markvoelker has joined #openstack-ansible
19:40 *** markvoelker has quit IRC
19:42 <prometheanfire> m/win 1
20:43 *** gparaskevas has quit IRC
20:44 *** abitha has joined #openstack-ansible
20:46 *** abitha has quit IRC
20:57 *** sdake_ has joined #openstack-ansible
21:01 *** sdake has quit IRC
21:05 *** KLevenstein has joined #openstack-ansible
21:05 *** KLevenstein has quit IRC
21:11 *** subscope has quit IRC
21:36 *** markvoelker has joined #openstack-ansible
21:40 *** markvoelker has quit IRC
22:00 *** sdake_ has quit IRC
22:29 *** Mudpuppy has quit IRC
22:32 *** ggillies has joined #openstack-ansible
22:51 *** Mudpuppy has joined #openstack-ansible
23:21 *** markvoelker has joined #openstack-ansible
23:26 *** markvoelker has quit IRC
23:26 *** openstackgerrit has joined #openstack-ansible
23:31 *** openstackgerrit has quit IRC
23:47 *** shoutm has quit IRC
23:47 *** openstackgerrit has joined #openstack-ansible
23:48 *** openstackgerrit has quit IRC
23:49 *** openstackgerrit has joined #openstack-ansible
23:55 *** arbrandes has quit IRC
23:58 *** markvoelker has joined #openstack-ansible
