*** renich has quit IRC | 00:42 | |
*** renich has joined #openstack-ansible | 00:43 | |
*** renich has quit IRC | 01:38 | |
*** cshen has joined #openstack-ansible | 01:49 | |
*** cshen has quit IRC | 01:54 | |
*** renich has joined #openstack-ansible | 01:55 | |
*** renich has quit IRC | 02:14 | |
*** renich has joined #openstack-ansible | 02:30 | |
*** renich has quit IRC | 03:03 | |
*** cshen has joined #openstack-ansible | 03:17 | |
*** cshen has quit IRC | 03:21 | |
*** aj_mailing has joined #openstack-ansible | 03:26 | |
*** spatel has joined #openstack-ansible | 03:46 | |
*** aj_mailing has quit IRC | 04:05 | |
*** spatel has quit IRC | 04:12 | |
*** evrardjp has quit IRC | 04:33 | |
*** evrardjp has joined #openstack-ansible | 04:33 | |
*** rh-jelabarre has quit IRC | 04:41 | |
*** viks____ has joined #openstack-ansible | 04:59 | |
*** gyee has quit IRC | 05:07 | |
*** shyamb has joined #openstack-ansible | 05:19 | |
*** aj_mailing has joined #openstack-ansible | 05:39 | |
*** shyamb has quit IRC | 05:55 | |
*** viks____ has quit IRC | 06:01 | |
*** shyamb has joined #openstack-ansible | 06:02 | |
*** viks____ has joined #openstack-ansible | 06:03 | |
*** shyam89 has joined #openstack-ansible | 06:20 | |
*** shyamb has quit IRC | 06:22 | |
*** noonedeadpunk has joined #openstack-ansible | 06:35 | |
*** shyam89 has quit IRC | 06:39 | |
noonedeadpunk | yeh, galera cluster in CI is really broken for centos-8:( https://zuul.opendev.org/t/openstack/build/3498548e6bfc487092dac8dba9c1f050/log/job-output.txt#5252 | 06:43 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Decrease amount of jobs and update distros https://review.opendev.org/746881 | 06:44 |
*** cshen has joined #openstack-ansible | 07:01 | |
jrosser | noonedeadpunk: in the galera logs there it looks like the cluster does form | 07:05 |
jrosser | then the service is restarted and it's not coming back as 3 nodes | 07:05 |
jrosser | looks reasonable here https://zuul.opendev.org/t/openstack/build/3498548e6bfc487092dac8dba9c1f050/log/logs/openstack/container3/mariadb.service.journal.log.txt#570 | 07:07 |
noonedeadpunk | hm, indeed. are we supposed to restart all 3 containers at once? I'd say they're not really supposed to come back from that.... | 07:08 |
noonedeadpunk | somehow works for others though.... | 07:11 |
*** shyamb has joined #openstack-ansible | 07:18 | |
*** shyamb has quit IRC | 07:18 | |
*** shyamb has joined #openstack-ansible | 07:19 | |
noonedeadpunk | btw, the other containers' logs do not look so good | 07:22 |
noonedeadpunk | they were never healthy, I think, because of this https://zuul.opendev.org/t/openstack/build/3498548e6bfc487092dac8dba9c1f050/log/logs/openstack/container1/mariadb.service.journal.log.txt#467 | 07:22 |
*** shyam89 has joined #openstack-ansible | 07:27 | |
*** shyamb has quit IRC | 07:29 | |
*** sshnaidm|afk is now known as sshnaidm | 07:32 | |
*** shyamb has joined #openstack-ansible | 07:34 | |
*** shyam89 has quit IRC | 07:37 | |
*** tosky has joined #openstack-ansible | 07:37 | |
*** zerozephyrum has joined #openstack-ansible | 07:40 | |
BlackFX | Figured out my issue. Bad indentation :) | 07:57 |
openstackgerrit | Merged openstack/openstack-ansible-os_cloudkitty stable/ussuri: Add CentOS 8 and Ubuntu Focal support https://review.opendev.org/749355 | 08:04 |
BlackFX | Now getting this: Fail when the host is not in galera_cluster_members | 08:21 |
BlackFX | fatal: [infra1_utility_container-8d88b1ab]: FAILED! => {"changed": false, "msg": "The host infra1_utility_container-8d88b1ab must be in galera_cluster_members."} | 08:22 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: [doc] Fix deployment guide to correspond relevant OS https://review.opendev.org/749460 | 08:22 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: [doc] Update current_series_name https://review.opendev.org/749461 | 08:24 |
jrosser | BlackFX: somehow the utility container is in the galera ansible group, that's not right | 08:25 |
jrosser | that suggests a problem with the inventory | 08:25 |
jrosser | you can look at the inventory with the tool scripts/inventory-manage.py | 08:26 |
BlackFX | infra1_galera_container-b896ba81 | None | galera | infra1 | None | 192.168.2.104 | None | 08:28 |
BlackFX | infra1_utility_container-8d88b1ab | None | utility | infra1 | None | 192.168.2.123 | None | | 08:28 |
BlackFX | what would make it want the utility container to be in the galera group? | 08:29 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: [doc] Fix deployment guide to correspond relevant OS https://review.opendev.org/749460 | 08:36 |
jrosser | BlackFX: the things that are put in /etc/openstack_deploy/openstack_user_config.yml decide what ends up in each ansible group | 08:41 |
noonedeadpunk | I think we should finally add an option to inventory-manage to remove a host from a specific group.... | 08:42 |
noonedeadpunk | as that's really annoying... | 08:42 |
BlackFX | There is no mention of galera there though. | 08:42 |
noonedeadpunk | the dynamic inventory does not clean up automatically once a host has already been added | 08:43 |
noonedeadpunk | so you could make some wrong choice once and it may follow you for a while | 08:43 |
jrosser | BlackFX: here https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.example#L299-L321 | 08:44 |
BlackFX | Oh okay, yeah I just have my infra1 there | 08:44 |
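For context: group membership in OSA is driven by stanzas in /etc/openstack_deploy/openstack_user_config.yml. A minimal sketch of the kind of entry that feeds the galera/shared-infra groups (the host name and IP here are illustrative, matching the inventory output above):

    shared-infra_hosts:
      infra1:
        ip: 192.168.2.104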
*** shyamb has quit IRC | 08:48 | |
*** andrewbonney has joined #openstack-ansible | 08:51 | |
*** shyamb has joined #openstack-ansible | 08:51 | |
*** shyamb has quit IRC | 08:55 | |
BlackFX | Odd thing is the galera_client tasks are all fine | 09:36 |
noonedeadpunk | galera_client is supposed to be run against utility anyway | 09:38 |
*** shyamb has joined #openstack-ansible | 09:40 | |
*** SecOpsNinja has joined #openstack-ansible | 09:43 | |
BlackFX | I can see nothing obvious in the inventory json that should be causing this | 09:44 |
BlackFX | is group membership cached elsewhere? | 09:44 |
openstackgerrit | Brin Zhang proposed openstack/openstack-ansible master: Fix hacking min version to 3.0.1 https://review.opendev.org/728730 | 10:04 |
*** aj_mailing has quit IRC | 10:06 | |
*** cshen has quit IRC | 10:17 | |
*** yolanda has joined #openstack-ansible | 10:26 | |
jrosser | BlackFX: try this on your deploy host ansible localhost -m debug -a "var=groups['galera_all']" | 10:35 |
jrosser | from /opt/openstack-ansible | 10:36 |
*** cshen has joined #openstack-ansible | 11:07 | |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: Bump ansible version to 2.9.13 https://review.opendev.org/737936 | 11:12 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: WIP - test ansible 2.10 https://review.opendev.org/749484 | 11:17 |
*** jbadiapa has joined #openstack-ansible | 11:20 | |
*** shyamb has quit IRC | 11:23 | |
*** rh-jelabarre has joined #openstack-ansible | 11:53 | |
*** cshen has quit IRC | 12:10 | |
*** dave-mccowan has joined #openstack-ansible | 12:17 | |
*** alvinstarr has quit IRC | 12:24 | |
*** poopcat has quit IRC | 12:25 | |
*** poopcat has joined #openstack-ansible | 12:25 | |
*** cshen has joined #openstack-ansible | 12:28 | |
*** pcaruana has quit IRC | 12:30 | |
*** pcaruana has joined #openstack-ansible | 12:35 | |
*** spatel has joined #openstack-ansible | 13:00 | |
*** johanssone has quit IRC | 13:11 | |
*** cshen has quit IRC | 13:12 | |
akahat|rover | noonedeadpunk, hello.. | 13:39 |
*** cshen has joined #openstack-ansible | 13:43 | |
*** mathlin has joined #openstack-ansible | 13:47 | |
noonedeadpunk | akahat|rover: he | 13:52 |
noonedeadpunk | *hey | 13:52 |
akahat|rover | noonedeadpunk, hey... looks like you broke my patch :) https://review.opendev.org/#/c/727067/ | 13:54 |
noonedeadpunk | I broke it ?:p | 13:55 |
noonedeadpunk | but yeah | 13:55 |
akahat|rover | ?? | 13:56 |
noonedeadpunk | I mean it's your patch breaking osa CI | 13:58 |
*** sshnaidm is now known as sshnaidm|bbl | 13:58 | |
noonedeadpunk | I'm not sure how CI should be configured in order to get that patch working | 13:58 |
*** mathlin has quit IRC | 13:59 | |
noonedeadpunk | the only thing I did is set it as depends-on | 14:01 |
noonedeadpunk | I guess we need neutron configured in some specific way to get that working? | 14:03 |
SecOpsNinja | for example, if you want to deploy instances in a dmz zone and a more secure operations zone, from the security point of view, how do you deploy openstack? would you use the same openstack with compute and storage nodes in the dmz and the operations site, or would you use 2 different openstack deployments (one for each zone)? | 14:04 |
*** d34dh0r53 has joined #openstack-ansible | 14:10 | |
*** spatel has quit IRC | 14:10 | |
*** spatel has joined #openstack-ansible | 14:31 | |
spatel | jrosser: looking at it, i think it should be an underscore. are these referenced somewhere else so i can fix those too? - https://review.opendev.org/#/c/749379/ | 14:33 |
spatel | jrosser: also i saw a dash in other names, like 'kvm-compute_hosts' | 14:35 |
spatel | anyway let me streamline it | 14:35 |
*** johanssone has joined #openstack-ansible | 14:36 | |
*** d34dh0r53 has quit IRC | 14:39 | |
spatel | SecOpsNinja: This is what i am doing for that: we are running VLAN providers (no NAT etc..), my instance gateway is my cisco ASA firewall. I have created 3 networks: outside_DMZ, inside_DMZ and LAN | 14:39 |
spatel | When someone wants to spin up a web server with public access they use outside_DMZ and openstack creates the instance with the correct VLAN | 14:39 |
spatel | noonedeadpunk: https://review.opendev.org/#/c/749365/ looks like zuul or ara is not correctly set up, i am not able to see the failure report | 14:44 |
jrosser | spatel: i can see the logs, what are you missing? | 14:47 |
*** watersj has joined #openstack-ansible | 14:48 | |
spatel | https://c7af65e4fa3890efbd6b-0471f2d86330b4224239e97c94493f38.ssl.cf2.rackcdn.com/749365/2/check/openstack-ansible-linters/72f5c11/logs/ara-report/ | 14:48 |
spatel | I was looking at check/openstack-ansible-linters FAILURE | 14:48 |
jrosser | i'm not sure the linter job ever makes an ara report; it's just calling some tox/shell, so not sure it would make much sense | 14:48 |
spatel | oh! makes sense then | 14:51 |
jrosser | you can run it locally i think with ./run_tests.sh linters | 14:52 |
jrosser | or something like that | 14:52 |
spatel | jrosser: let me try | 14:53 |
noonedeadpunk | `Could not find or access '/home/zuul/src/opendev.org/openstack/openstack-ansible-os_senlin/tests/common/test-install-senlin.yml' on the Ansible Controller.` | 14:53 |
spatel | also i found some issues in os_senlin/defaults/main.yml which i'm fixing... and re-submitting the patch | 14:54 |
noonedeadpunk | I think it's because of https://review.opendev.org/#/c/748693/ | 14:54 |
noonedeadpunk | or maybe we need to add senlin to tests | 14:55 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-tests master: Add os_senlin to required-projects https://review.opendev.org/749530 | 14:57 |
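For context: the fix amounts to listing the role in the Zuul job's required-projects so it gets cloned onto the test node. A hypothetical fragment of such a job definition (not the literal patch; the job name and layout are assumptions):

    - job:
        name: openstack-ansible-linters
        required-projects:
          - openstack/openstack-ansible-os_senlin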
spatel | jrosser: running ./os_senlin/run_tests.sh linters and it looks like it's installing a bunch of packages, maybe as part of the utility setup | 15:00 |
jrosser | noonedeadpunk: i wonder if not finding tests/common/test-install-senlin.yml points to an old style functional test, rather than an integrated test | 15:04 |
noonedeadpunk | it is | 15:04 |
noonedeadpunk | `openstack-ansible-linters` is the old style one | 15:04 |
noonedeadpunk | I'm making the senlin integrated repo patch to be able to use the integrated one | 15:05 |
noonedeadpunk | we need env.d, conf.d and a lot of other stuff there | 15:05 |
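For context: the env.d piece defines how a service maps onto groups and containers. A hypothetical minimal env.d/senlin.yml following the convention of the existing services (all names illustrative):

    component_skel:
      senlin_api:
        belongs_to:
          - senlin_all
    container_skel:
      senlin_container:
        belongs_to:
          - senlin_containers
        contains:
          - senlin_api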
spatel | noonedeadpunk: do you want me to add tests/common/test-install-senlin.yml? i think it's missing | 15:12 |
noonedeadpunk | not really - it should be cloned with run_tests.sh | 15:12 |
noonedeadpunk | oh, wait | 15:12 |
noonedeadpunk | test-install-senlin.yml | 15:12 |
noonedeadpunk | tbh.... | 15:13 |
noonedeadpunk | I don't think we should add functional tests at all | 15:13 |
openstackgerrit | Satish Patel proposed openstack/openstack-ansible-os_senlin master: Fixing some defauls/mail.yml tunable options https://review.opendev.org/749365 | 15:14 |
noonedeadpunk | spatel: what port is senlin listening on? | 15:16 |
spatel | 8778 | 15:16 |
spatel | only a single api port, 8778 | 15:16 |
openstackgerrit | Satish Patel proposed openstack/openstack-ansible master: changing dash to underscore for test inventory https://review.opendev.org/749379 | 15:19 |
spatel | at some point we may need to add that role to ansible-role-requirements.yml (otherwise zuul won't be able to fetch it) | 15:23 |
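For context: such an entry would follow the existing pattern in ansible-role-requirements.yml. An illustrative sketch (the version is a placeholder):

    - name: os_senlin
      scm: git
      src: https://opendev.org/openstack/openstack-ansible-os_senlin
      version: master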
noonedeadpunk | spatel: do you have a proper playbook to run the role? | 15:23 |
spatel | Yes | 15:23 |
noonedeadpunk | (spatel I'm right on it at the moment) | 15:24 |
spatel | I am running on my lab | 15:24 |
noonedeadpunk | can you share it?:) | 15:24 |
spatel | os_senlin playbook? | 15:24 |
noonedeadpunk | yep | 15:24 |
noonedeadpunk | so I won't reinvent the wheel | 15:24 |
spatel | its here - https://review.opendev.org/#/c/749365/ | 15:24 |
noonedeadpunk | https://review.opendev.org/#/c/749365/3/examples/playbook.yml oh | 15:25 |
noonedeadpunk | yeah | 15:25 |
spatel | just drop this folder in /etc/ansible/roles | 15:25 |
spatel | you need to add some inventory stuff in /opt/openstack-ansible/ so it can create the container | 15:25 |
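For context: the "inventory stuff" would be a conf.d entry naming which hosts get the new containers. A hypothetical /etc/openstack_deploy/conf.d/senlin.yml, assuming an env.d skeleton like the sketch above defines the group (host name and IP illustrative):

    senlin-infra_hosts:
      infra1:
        ip: 172.29.236.11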
noonedeadpunk | yeah, I'm doing it as well at the moment:) | 15:27 |
noonedeadpunk | just 5 mins | 15:27 |
spatel | noonedeadpunk: no worry :) | 15:27 |
noonedeadpunk | does it need octavia, heat or whatever? | 15:28 |
spatel | fyi, my whole lab is running on centos-8, so if you're trying it for the first time on ubuntu you may hit a wall | 15:28 |
spatel | no it doesn't need anything | 15:28 |
spatel | it works independently (just like heat) | 15:28 |
spatel | once you finish the installation just go to the utility container, source /root/openrc and run "openstack cluster build info" to verify | 15:29 |
* noonedeadpunk has no time for test deployment at the moment | 15:30 | |
spatel | If you give me some test server then i can set it up for you.. so you can play around | 15:31 |
*** tosky has quit IRC | 15:31 | |
noonedeadpunk | spatel: does it have horizon UI? | 15:38 |
noonedeadpunk | found it | 15:38 |
spatel | It does and i am testing it now.. i need to patch it :) | 15:38 |
spatel | But that would be horizon patch | 15:39 |
noonedeadpunk | yeah, just some integrated repo bits will also be needed:) | 15:40 |
noonedeadpunk | just trying not to forget anything | 15:40 |
*** cshen has quit IRC | 15:41 | |
spatel | I am testing horizon right now to see hot it goes | 15:41 |
spatel | s/hot/how/ | 15:42 |
spatel | my fingers :) | 15:42 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Added Openstack Senlin role deployment https://review.opendev.org/749540 | 15:44 |
noonedeadpunk | spatel: I think you should try setting a depends-on on this patch ^ | 15:44 |
noonedeadpunk | it will most likely fail, but yeah:) | 15:44 |
spatel | noonedeadpunk: on which patch? | 15:46 |
noonedeadpunk | https://review.opendev.org/749540 | 15:46 |
noonedeadpunk | btw do you see openstackgerrit msgs?:) | 15:46 |
spatel | on IRC? | 15:47 |
spatel | yes | 15:47 |
spatel | depends-on: <os_senlin role UUID> right? | 15:48 |
noonedeadpunk | `Depends-On: https://review.opendev.org/749540` | 15:48 |
noonedeadpunk | in commit message of https://review.opendev.org/#/c/749365/ | 15:49 |
spatel | got it | 15:49 |
*** alvinstarr has joined #openstack-ansible | 15:52 | |
openstackgerrit | Satish Patel proposed openstack/openstack-ansible-os_senlin master: added os_senlin role for deployment. https://review.opendev.org/749365 | 15:53 |
spatel | noonedeadpunk: done | 15:54 |
spatel | noonedeadpunk: you missed the following two passwords in user_secrets.yml | 15:57 |
-spatel- senlin_galera_password: | 15:57 | |
-spatel- senlin_auth_encryption_key: | 15:57 | |
spatel | i do have those in my lab | 15:58 |
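For context: that means appending the keys below to /etc/openstack_deploy/user_secrets.yml and then generating values, e.g. with OSA's scripts/pw-token-gen.py (values intentionally left blank here):

    senlin_galera_password:
    senlin_auth_encryption_key: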
*** ChiTo has joined #openstack-ansible | 16:06 | |
ChiTo | Hi openstack-ansible team, I just wondered how can I set static IP addresses for my LXC containers? | 16:07 |
ChiTo | I mean from the yaml/openstack-ansible perspective | 16:07 |
noonedeadpunk | spatel: what does senlin_auth_encryption_key do? | 16:12 |
noonedeadpunk | I guess I used senlin_container_mysql_password instead of senlin_galera_password... | 16:12 |
noonedeadpunk | Seems we don't have a consistent convention across roles.. some are using _galera_password, some are using _container_mysql_password | 16:13 |
noonedeadpunk | most are using _container_mysql_password though | 16:13 |
noonedeadpunk | so let's probably use it.... | 16:14 |
*** cshen has joined #openstack-ansible | 16:14 | |
noonedeadpunk | or what do you think jrosser? _container_mysql_password vs _galera_password ? | 16:14 |
spatel | noonedeadpunk: you know what, i don't think we need senlin_auth_encryption_key | 16:14 |
spatel | but we do need galera_password | 16:14 |
noonedeadpunk | it's just naming thing - senlin_container_mysql_password vs senlin_galera_password | 16:15 |
noonedeadpunk | But senlin_galera_password is better imo | 16:16 |
spatel | i inherited the naming scheme from another playbook, so hoping that is the better one | 16:18 |
*** cshen has quit IRC | 16:18 | |
noonedeadpunk | just look through your `cat user_secrets.yml | egrep "container_mysql_password|galera_password"` | 16:20 |
noonedeadpunk | So I can't really say what's better tbh | 16:21 |
noonedeadpunk | like nova, glance, neutron and the other core roles use container_mysql_password, and just a few use galera_password | 16:21 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Added Openstack Senlin role deployment https://review.opendev.org/749540 | 16:24 |
*** theintern_ has joined #openstack-ansible | 16:29 | |
*** theintern_ has quit IRC | 16:30 | |
spatel | noonedeadpunk: mysql -> 19 and galera -> 10 :) | 16:36 |
spatel | mysql wins! | 16:36 |
spatel | we can make that adjustment right now if you want | 16:36 |
SecOpsNinja | spatel, sorry for the delayed response, but regarding your installation: you don't have any compute/storage nodes directly in the dmz network, right? so you connect the dmz vlan to your openstack servers outside of the dmz? | 16:41 |
noonedeadpunk | spatel: nah, dunno. eventually, having `container` in there is kind of a wrong naming pattern.... | 16:42 |
noonedeadpunk | I've set galera for now | 16:43 |
*** spatel has quit IRC | 16:52 | |
*** spatel has joined #openstack-ansible | 17:02 | |
spatel | SecOpsNinja: no, i don't have compute/storage in the DMZ (only virtual instances in the DMZ) | 17:02 |
SecOpsNinja | spatel, so there is no gain in isolating those instances in a dmz for dmz-only workloads, right? | 17:03 |
spatel | why do you need that level of security? if someone breaks the vm jail and gets access to the host machine then it's a very different story | 17:03 |
*** cshen has joined #openstack-ansible | 17:03 | |
spatel | SecOpsNinja: i don't think you need that level of security. I put all compute nodes in a single network and run VM instances in different security zones. | 17:04 |
SecOpsNinja | spatel, yep, true, but i was asking because i don't know if that is possible with openstack (whether they should be in the same cluster or separated). yes, we probably don't need it, but i'm asking to learn the best practices | 17:05 |
SecOpsNinja | regarding the external ip: you have it in your br-mgmt network but nated in your firewall? | 17:06 |
spatel | I don't think openstack has that kind of feature to isolate compute based on security zone, but you can isolate based on your own security practice, like putting compute nodes outside the firewall to just run public instances (but again those compute nodes need to talk to controllers located somewhere inside the firewall) | 17:07 |
*** cshen has quit IRC | 17:08 | |
spatel | you can do some kind of funky NAT or port forwarding on the firewall so all your compute nodes outside the DMZ can talk to the controller nodes | 17:10 |
spatel | SecOpsNinja: or you can use an advanced implementation like Cellv2 and deploy one cell in outside_DMZ | 17:11 |
SecOpsNinja | spatel, thanks for the info. i was asking this because of the problem that i need to expose the public api to lets encrypt servers (to see if magnum can already talk to keystone) and was checking the best way to do it (without compromising the security of the openstack deployment) | 17:14 |
spatel | SecOpsNinja: is your openstack control plane going to be on a public IP? | 17:20 |
spatel | I meant keystone/neutron etc.. | 17:20 |
SecOpsNinja | spatel, what im planning to do is point my public ip at the internal lb_vip_address by using NAT in the firewall | 17:21 |
spatel | We are running a private openstack and everything runs on a private net (some vm instances run on public IPs, but they are vms, not control plane or api) | 17:21 |
spatel | is your public endpoint on a public IP or a private IP range? | 17:22 |
SecOpsNinja | spatel, sorry? | 17:23 |
jrosser | SecOpsNinja: i would recommend that your external VIP is a proper external IP | 17:24 |
spatel | openstack endpoint list (are any endpoints running on a public ip range?) | 17:24 |
jrosser | SecOpsNinja: you can make a dedicated interface on your controllers (or even dedicated haproxy nodes if you like) for the external subnet | 17:25 |
spatel | Yeah, I am using F5 load-balancer instead of haproxy | 17:26 |
SecOpsNinja | jrosser, spatel: we don't have a public network range, so i do need to use my dynamic public ip to expose openstack to lets encrypt servers. what i was thinking was to have my router firewall do port forwarding to the private vip network (atm we don't have an exclusive lb only for openstack). so what you are recommending is internet > router > openstack external network > LB > br-mgmt openstack? | 17:27 |
jrosser | imho making the external network (even if it's behind your router) something that you can reason about on its own will make things nicer | 17:29 |
jrosser | nothing stops you NAT/port forward to a mgmt net IP if you want to | 17:29 |
jrosser | do you have dynamic DNS for your external IP? | 17:30 |
SecOpsNinja | jrosser, atm no, but i do need to expose it, otherwise lets encrypt renewal will not work | 17:31 |
spatel | SecOpsNinja: this is what my openstack looks like https://imgur.com/a/4G3JGHa | 17:31 |
spatel | blue VLAN is my external VIP | 17:32 |
SecOpsNinja | ah ok | 17:33 |
SecOpsNinja | so your external ip is on the F5 LB. so if you want to use lets encrypt (instead of buying proper certs) you expose that external ip in your internet firewall, right? | 17:35 |
SecOpsNinja | but shouldn't the blue vlan only be connected to the haproxy/f5 load balancer? why expose all the other openstack nodes there? | 17:35 |
spatel | Yes, my 10.30.x.x is on the F5 external VIP, which i use in the openrc file or terraform etc.. | 17:41 |
spatel | The F5 has two networks: external (10.30.x.x) and br-mgmt (172.28.x.x) | 17:41 |
spatel | i never expose my external IP to the internet (we have VPN to access openstack). we are running a private cloud in our datacenter. | 17:42 |
*** aj_mailing has joined #openstack-ansible | 17:42 | |
spatel | SecOpsNinja: You are right, i don't need the external VIP exposed on the other compute nodes, but if you look, that's my br-host (I use this network for SSH, monitoring, log shipping etc) | 17:44 |
SecOpsNinja | spatel, yep, but to resolve my problem of magnum not accepting the keystone self-signed cert, it seems the easiest solution would be to publicly allow at least http to haproxy so that openstack is able to respond to lets encrypt servers... | 17:45 |
spatel | This is how i implemented it without thinking much about high security, but in your case you can isolate the external VIP and br-host networks | 17:45 |
spatel | SecOpsNinja: do you have a valid certificate? | 17:45 |
spatel | Use a valid certificate and DNS names instead of IPs in configuration files | 17:46 |
spatel | This is what i did in my environment: install the certificate on haproxy/f5 and change the configuration of magnum to call the DNS name instead of the IP | 17:47 |
SecOpsNinja | no; to have a valid certificate i would need to manage a private ca and create a cert for each of the public endpoints, and i think that will be harder than exposing the external ip to the internet and allowing haproxy to create lets encrypt certs for each service (keystone, dashboard), but i still need to check how i'm going to deploy this | 17:47 |
spatel | why certs for each public endpoint? | 17:48 |
spatel | can't you have a single cert? | 17:48 |
SecOpsNinja | i was under the impression that you need a valid certificate for each public endpoint. | 17:49 |
SecOpsNinja | so if your openstack is private, you created a private ca, then created a cert for haproxy with a local dns name and changed your magnum to call haproxy instead of the keystone public ip? | 17:50 |
spatel | SecOpsNinja: this is what my endpoints look like - http://paste.openstack.org/show/797388/ | 17:51 |
jrosser | SecOpsNinja: OSA deploys each service on a different port at the same IP so only one cert is required | 17:51 |
*** aj_mailing has quit IRC | 17:52 | |
spatel | I have a single certificate installed on haproxy/F5 for openstack-eng.example.com | 17:52 |
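For context: when haproxy terminates TLS, OSA can be pointed at a user-supplied certificate from user_variables.yml. A sketch using the haproxy_user_ssl_* hooks (the paths are illustrative):

    haproxy_user_ssl_cert: /etc/openstack_deploy/ssl/openstack-eng.example.com.crt
    haproxy_user_ssl_key: /etc/openstack_deploy/ssl/openstack-eng.example.com.key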
SecOpsNinja | yep, but if i understand correctly, the problem in magnum was that the certificate returned by haproxy wasn't from any of the valid CA's. i was thinking of lets encrypt because it was easier than creating a custom ca and applying it to all the cluster nodes so they accept it, right? | 17:53 |
SecOpsNinja | and yep, i was confused a bit, but you are right: i only need to change my external lb vip to a DNS name which lets encrypt can confirm and return a valid cert for | 17:56 |
spatel | SecOpsNinja: not sure where you are getting confused, but your problem is: when magnum talks to keystone it uses https://keystone.foo.com, and if that SSL cert isn't valid then it will complain. if you give a valid cert to the haproxy external_vip then it shouldn't complain | 17:57 |
spatel | Make sure to use dns names (instead of IPs) in all configuration | 17:58 |
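For context: in OSA terms that means setting the external VIP to a DNS name in openstack_user_config.yml. A fragment (the host name and internal IP are illustrative):

    global_overrides:
      external_lb_vip_address: openstack.example.com
      internal_lb_vip_address: 172.29.236.100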
SecOpsNinja | spatel, so the problem is not an unknown CA, but the fact that the certificate does not have its name or IP in the SAN? | 17:58 |
SecOpsNinja | yesterday, from the magnum logs and a wireshark/tcpdump capture, i was under the impression that the problem was that the certificate wasn't signed by a valid CA (atm it's using the automated self-signed one) | 18:00 |
spatel | try this curl https://openstack.example.com:5000 and see what it returns | 18:00 |
*** zerozephyrum has quit IRC | 18:00 | |
spatel | you should post logs or configuration etc.. it's hard to guess what's happening there. | 18:01 |
SecOpsNinja | if i don't use the insecure flag it returns "curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above." | 18:01 |
spatel | could you copy paste snippet of magnum and output of openstack endpoint list ? | 18:02 |
spatel | magnum config i meant | 18:03 |
SecOpsNinja | yep, i posted it yesterday, but the problem is that haproxy fails with an ssl handshake error on all magnum communications to the https endpoint of keystone. the magnum log doesn't say much: "Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed'". What i did was try to capture the tls packets when magnum communicated with haproxy (the keystone public https endpoint), and the failure was always an invalid-CA error from magnum) | 18:03 |
SecOpsNinja | one second | 18:04 |
SecOpsNinja | http://paste.openstack.org/show/797390/ and http://paste.openstack.org/show/797391/ | 18:06 |
SecOpsNinja | and this is the magnum problem http://paste.openstack.org/show/797392/ | 18:08 |
spatel | Here you go. | 18:08 |
spatel | you need to change this to a DNS name: www_authenticate_uri = https://172.30.0.251:5000/v3 | 18:08 |
spatel | an ip addr won't work, period. | 18:08 |
spatel | you need to change all public endpoints (or just the keystone/magnum endpoints) to DNS names | 18:09 |
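For context: if only magnum's view of keystone needs changing, a config override in user_variables.yml could do it. A hypothetical sketch (the exact override variable name in os_magnum is an assumption, and the URI is illustrative):

    magnum_magnum_conf_overrides:
      keystone_authtoken:
        www_authenticate_uri: "https://openstack.example.com:5000/v3"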
SecOpsNinja | so if i put a name on the cert that is generated by haproxy, we'll be ok even with it being self-signed? | 18:09 |
*** renich has joined #openstack-ansible | 18:09 | |
spatel | I had the same problem when i first deployed magnum | 18:09 |
spatel | if you don't have a certificate from a valid authority, use a self-signed one (but you need to copy the CA etc to your magnum and keystone containers to authorize the self-signed cert) | 18:10 |
SecOpsNinja | but that is only after deploying the change in haproxy, right? i don't see any way to put the ca certificate that was generated for haproxy into the magnum and keystone containers without recreating/reconfiguring them, right? | 18:12 |
spatel | not sure where to put the CA in magnum/keystone; assuming they use some kind of python module to make the call, it's a good question | 18:13 |
spatel | why are you not using verify_ca = false (something like that in magnum)? | 18:14 |
spatel | check the configuration options of magnum | 18:14 |
-spatel- # Indicates whether the cluster nodes validate the Certificate Authority when | 18:15 | |
-spatel- # making requests to the OpenStack APIs (Keystone, Magnum, Heat). If you have | 18:15 | |
-spatel- # self-signed certificates for the OpenStack APIs or you have your own | 18:15 | |
-spatel- # Certificate Authority and you have not installed the Certificate Authority to | 18:15 | |
-spatel- # all nodes, you may need to disable CA validation by setting this flag to | 18:15 | |
-spatel- # False. (boolean value) | 18:15 | |
-spatel- #verify_ca = true | 18:15 | |
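For context: expressed as an OSA override, that flag might be set like this. A hypothetical sketch ([drivers] is the section magnum documents verify_ca under; the override variable name is an assumption):

    magnum_magnum_conf_overrides:
      drivers:
        verify_ca: false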
SecOpsNinja | jrosser explained that to me - it will not work for this (and i have tested it and magnum still gives the same error) | 18:16 |
SecOpsNinja | spatel, but yes, i tested that yesterday and it didn't work... the documentation is a bit confusing on that part, but from what i understand that configuration is for magnum itself and not for its communications with other services | 18:18 |
spatel | SecOpsNinja: if that won't work then change the public endpoint to call http:// | 18:18 |
SecOpsNinja | that was the second problem. The magnum.conf.j2 has the keystone public uri instead of the private uri | 18:19 |
spatel | This is what i did | e85c012588a942e8bad8a01eadba5b5d | RegionOne | keystone | identity | True | public | http://172.29.236.100:5000 | 18:19 |
spatel | http:// instead of https:// | 18:19 |
SecOpsNinja | but from what i've seen in os_magnum, to change that configuration i would need to change keystone itself to use http on its public endpoint, and i didn't want to do that | 18:20 |
SecOpsNinja | or i didn't see the os_magnum overrides that would be able to change that.... | 18:20 |
spatel | is this production deployment? | 18:21 |
spatel | if it's not production then why are you worried about doing that? | 18:21 |
SecOpsNinja | not atm, but it will be after i can put it all to work :D | 18:22 |
spatel | when you deploy on production, buy a cert from a valid cert authority and then you don't need to troubleshoot this :) | 18:22 |
SecOpsNinja | i have to go now, but again, thanks for all your help spatel and jrosser. tomorrow i will reorganize my openstack nodes to have the external network, try lets encrypt, see if it works, and i will post the result here :D | 18:23 |
spatel | +1 | 18:24 |
*** cshen has joined #openstack-ansible | 18:31 | |
*** SecOpsNinja has left #openstack-ansible | 18:43 | |
*** renich has quit IRC | 18:45 | |
*** cshen has quit IRC | 18:55 | |
*** renich has joined #openstack-ansible | 19:03 | |
*** andrewbonney has quit IRC | 20:12 | |
BlackFX | @jrosser: | 20:14 |
BlackFX | ansible localhost -m debug -a "var=groups['galera_all']" | 20:14 |
BlackFX | localhost | SUCCESS => { | 20:14 |
BlackFX | "groups['galera_all']": [ | 20:14 |
BlackFX | "infra1_galera_container-b896ba81" | 20:14 |
BlackFX | ] | 20:14 |
BlackFX | } | 20:14 |
*** gyee has joined #openstack-ansible | 20:17 | |
*** ChiTo has quit IRC | 20:31 | |
*** cshen has joined #openstack-ansible | 20:51 | |
*** cshen has quit IRC | 20:55 | |
*** spatel has quit IRC | 21:11 | |
*** jbadiapa has quit IRC | 21:52 | |
*** prometheanfire has quit IRC | 21:59 | |
*** partlycloudy has quit IRC | 21:59 | |
*** nsmeds has quit IRC | 21:59 | |
*** arxcruz|ruck has quit IRC | 21:59 | |
*** arxcruz has joined #openstack-ansible | 22:00 | |
*** partlycloudy has joined #openstack-ansible | 22:02 | |
*** prometheanfire has joined #openstack-ansible | 22:04 | |
*** djhankb has quit IRC | 22:05 | |
*** djhankb has joined #openstack-ansible | 22:05 | |
*** sshnaidm|bbl is now known as sshnaidm|afk | 22:47 | |
*** cshen has joined #openstack-ansible | 22:51 | |
*** cshen has quit IRC | 22:55 | |
*** irclogbot_2 has quit IRC | 23:29 | |
*** irclogbot_3 has joined #openstack-ansible | 23:32 |