hamburgler | Hey all - question about OSA Zed: is Zookeeper considered production ready via the OSA deployment, as it is still fairly new to this project? | 06:53 |
---|---|---|
jrosser | hamburgler: you can look here at what the zookeeper deployment touches https://review.opendev.org/q/topic:osa%252Fzookeeper | 08:06 |
jrosser | i have it deployed in 3 different OSA deployments where we have cinder active/active with ceph, designate H/A and also octavia H/A - those are the things that currently expect some kind of coordination service | 08:07 |
jrosser | so zookeeper/coordination is needed for some specific situations like those; if you don't have them it doesn't matter | 08:08 |
jrosser | if you do have them, i've had no trouble so far with the zookeeper deployment and it's actually made a significant improvement to designate in a H/A config now it is present | 08:09 |
hamburgler | jrosser: thank you :) very helpful! | 08:29 |
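For context, the coordination service those components consume is tooz, and OSA's generic `<service>_<file>_overrides` mechanism shows what the zookeeper wiring ends up looking like in `cinder.conf`. A minimal sketch, assuming an illustrative management-network address - recent OSA may already handle this wiring for you via the patches linked above:

```yaml
---
# user_variables.yml - hedged sketch; the zookeeper address is illustrative.
cinder_cinder_conf_overrides:
  coordination:
    # tooz backend URL used by cinder-volume for active/active locking
    backend_url: "zookeeper://172.29.236.100:2181"
```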
noonedeadpunk | admin1: there is a ML thread regarding magnum and latest fedora - there were patches pushed but not merged yet | 09:25 |
noonedeadpunk | I'd actually expect significant changes to octavia behaviour as well with amphorav2 | 09:26 |
jrosser | damiandabrowski: if you want to discuss anything with the haproxy stuff, let me know | 09:33 |
damiandabrowski | hi jrosser ! so yesterday you suggested to remove haproxy_preconfigured_services and use only haproxy_services for both service types (preconfigured and those configured during service playbook execution) | 09:58 |
noonedeadpunk | yeah, looking at these 2 files I'm not really sure what the difference is there | 10:12 |
jrosser | i was also wondering if we have covered (horizon + LE), (no horizon + LE), (horizon + no LE) and (no horizon + no LE) | 10:17 |
jrosser | there's tons of combinations that need supporting in the initial haproxy playbook to get that going | 10:17 |
jrosser | and then also not to break it if you run haproxy again after the horizon playbook has run and made its own settings for haproxy | 10:17 |
jrosser | also the support for security.txt is an outlier here as well - it is served from keystone but also needs an haproxy ACL | 10:18 |
noonedeadpunk | I think the trickiest case may be no horizon + LE | 10:20 |
noonedeadpunk | and yeah, security.txt is an interesting use case indeed | 10:21 |
damiandabrowski | noonedeadpunk: the difference is facts gathering and "manual" handler execution. But yes, they share the same service.j2 template | 10:23 |
damiandabrowski | all these combinations should be supported, the logic is quite simple: | 10:24 |
damiandabrowski | if nothing listens on port 80, create a temporary service on that port, issue the initial LE certificate and remove that service afterwards. | 10:24 |
damiandabrowski | can't comment on security.txt right now as I'm not familiar with it, but i will look into this | 10:24 |
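A rough Ansible sketch of the port-80 bootstrap flow described above; the task layout and the `letsencrypt_bootstrap_service` variable are invented for illustration, not the actual patch:

```yaml
---
# Hypothetical flow: only stand up a temporary ACME frontend when nothing
# already answers on port 80, then tear it down after issuance.
- name: Check whether anything listens on port 80
  ansible.builtin.wait_for:
    port: 80
    timeout: 2
  register: _port80
  ignore_errors: true

- name: Configure a temporary haproxy service for the HTTP-01 challenge
  ansible.builtin.include_role:
    name: haproxy_server
    tasks_from: haproxy_service_config.yml  # file name mentioned later in the chat
  vars:
    haproxy_service_configs: "{{ letsencrypt_bootstrap_service }}"  # invented var
  when: _port80 is failed

# ...issue the initial certificate here, then remove the temporary service...
```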
damiandabrowski | noonedeadpunk: you added a comment "We need to have proper yaml file start as well as license for all newly created files." | 10:28 |
damiandabrowski | i will add the license, but '---' is something i still can't understand :D why do we define it? looks like it's completely unnecessary in our case: | 10:29 |
damiandabrowski | https://stackoverflow.com/a/53691066/9413768 | 10:29 |
damiandabrowski | or maybe there is some reason? | 10:30 |
noonedeadpunk | damiandabrowski: well, out of the vars gathered for the OS we only need haproxy_system_ca. As it's used only in the template, I'd assume we can define it in jinja instead of vars. Regarding service restart... Hm. I'm not really sure right now | 10:30 |
noonedeadpunk | damiandabrowski: while yaml document start is optional, it's smth we follow throughout the code and I don't see any reason why we should stop doing that and remove document start everywhere | 10:32 |
damiandabrowski | less code and simplicity? | 10:33 |
noonedeadpunk | it's not code in the first place? And if you check the ansible-lint rules - it's going to fail without that if strict mode is used https://ansible-lint.readthedocs.io/rules/yaml/#correct-code | 10:38 |
damiandabrowski | ah so it really is recommended to use it | 10:43 |
damiandabrowski | ok, i will add these dashes but it's still super weird for me :D for ex. I learned that three dots indicate the end of a document, but we don't use them for some reason :D | 10:45 |
noonedeadpunk | well, it's all about coding style I would say. I can compare that to python - you can use tabs or spaces in code | 10:46 |
noonedeadpunk | both will work equally well, but not a lot of ppl will be happy if you use tabs :D | 10:46 |
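For reference, the convention being discussed - ansible-lint's yaml rule (in strict mode) wants the explicit document start, while the matching end marker is simply left out by convention:

```yaml
---
# Explicit document start: what ansible-lint's yaml rule expects.
haproxy_ssl: true
# The optional end-of-document marker "..." is valid YAML too,
# but OSA's style simply doesn't use it.
```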
*** dviroel|out is now known as dviroel | 10:58 |
jrosser | damiandabrowski: really i think that my preference would be that we don't make any changes at all in the haproxy_server role if possible | 11:00 |
damiandabrowski | so you want to drop letsencrypt support from haproxy_role? | 11:01 |
noonedeadpunk | I need to check on the handlers topic, as I for some reason think it should be possible to run handlers where they're supposed to run... But I'd need to spin up an env for that | 11:01 |
noonedeadpunk | I'm not sure how that's related? | 11:01 |
noonedeadpunk | ah, you mean about temporary service | 11:03 |
jrosser | things like the change to fix selinux on centos should be broken out into their own patch so we can just merge those anyway | 11:03 |
jrosser | i don't think that minimal changes to the haproxy role mean dropping LE - as i put in my review comments earlier i think that the bring-up of LE should be handled in playbooks/haproxy-install.yml and vars passed into the haproxy_server role, not by adding more complexity into that role itself. | 11:21 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Prepare haproxy role for separated haproxy config https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/871188 | 11:30 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Move selinux fix to haproxy_post_install.yml https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/873703 | 11:30 |
damiandabrowski | jrosser: ^ | 11:32 |
damiandabrowski | i will think about this temporary LE service feature in haproxy role. But IMO it may be useful also for non-openstack cases | 11:33 |
jrosser | i use haproxy_server a ton outside OSA already with LE, it works pretty well | 11:33 |
jrosser | admin1: you might be interested in this https://review.opendev.org/c/openstack/magnum/+/849156 | 12:33 |
admin1 | jrosser.. thanks | 13:43 |
admin1 | i was looking into https://opendev.org/openstack/magnum/src/branch/master/magnum/drivers/k8s_fedora_coreos_v1/templates/kubecluster.yaml to make a note of all config options that can be passed | 13:45 |
admin1 | just a thought .. br-lbaas and br-storage can share the same interface for example, right ? | 14:09 |
admin1 | or with br-mgmt | 14:09 |
admin1 | and it's already on the controllers as well | 14:09 |
admin1 | br-mgmt especially | 14:09 |
noonedeadpunk | admin1: I don't think you want to share br-lbaas with br-storage at very least due to security reasons | 14:12 |
noonedeadpunk | As br-lbaas is passed to the amphora VMs, which will have access to the storage | 14:13 |
admin1 | the issue i have is i only got 5 dedicated vlans, and 2 are being used for external networks .. so i only have 3 i can use | 14:13 |
noonedeadpunk | but yes, sure you can share with br-mgmt | 14:13 |
admin1 | noonedeadpunk, does this sound correct ? https://gist.githubusercontent.com/a1git/0bccd468b96ecffb0490cbb1183fce4e/raw/8a1c9eb32dd9f179577169840719446db51baed9/gistfile1.txt | 14:30 |
admin1 | the changes are: octavia_provider_segmentation_id is removed and the bridge is br-mgmt | 14:31 |
noonedeadpunk | admin1: you don't wanna do that either.... | 14:32 |
noonedeadpunk | Best thing to share - mgmt with storage networks | 14:32 |
admin1 | oh | 14:32 |
admin1 | hmm.. that works | 14:32 |
noonedeadpunk | I mean - you're about to pass the internal network, with all its communications, to a VM that is basically managed through the API and has customer traffic passing through it | 14:33 |
jrosser | admin1: you can still make a vxlan if you need | 14:34 |
noonedeadpunk | I was thinking about that, but didn't dare to suggest it | 14:34 |
admin1 | i am ok for all ideas :) | 14:34 |
admin1 | this is a semi-dev-poc cluster | 14:34 |
noonedeadpunk | Though my suggestion would then also include using OVS for the bridges, but I've never tried how this is going to stack/work | 14:35 |
noonedeadpunk | (ovs for control bridges is doable, but I think we may still have open patches to finalize this) | 14:35 |
admin1 | what i have is a single interface with 5 tagged vlans .. so i gave br-vlan an ip .. and of the 5 vlans, 2 = external, and then one each for br-mgmt, br-vxlan and br-storage | 14:35 |
admin1 | the tagged interfaces are in linuxbridge while the master interface is on ovn | 14:36 |
noonedeadpunk | What's the point of having br-vlan if you're limited on vlans? | 14:36 |
admin1 | i have to make do on what is given to me :) | 14:36 |
noonedeadpunk | yeah, but I don't think you will be creating vlan tenant networks? | 14:37 |
admin1 | tenant networks are all vxlan | 14:37 |
noonedeadpunk | yeah, so why do you need br-vlan? | 14:37 |
admin1 | for external | 14:37 |
admin1 | ext-net1 and ext-net2 are on 2 dedicated vlans .. so i only have 3 more to go .. and so far i am using it for mgmt, storage and vxlan each | 14:38 |
admin1 | and want to add lbaas on it also | 14:38 |
noonedeadpunk | ah, ok, yeah, I guess I mixed up external for tenants and external for the public interface of the cloud | 14:38 |
noonedeadpunk | So yeah, I'd either try to use vxlan for lbaas or merge storage with mgmt | 14:39 |
admin1 | how do I merge storage with mgmt ? replace container_bridge: br-storage with br-mgmt and add the mgmt ip in the same br-mgmt ? | 14:40 |
admin1 | change line2 to 22 ? => https://gist.github.com/a1git/1ba0d24eaea553eee5fb0fc8a5f374cd | 14:41 |
noonedeadpunk | I think you can just drop br-storage? | 14:42 |
noonedeadpunk | as you don't need any extra interface or anything basically | 14:43 |
noonedeadpunk | br-mgmt is already present anyway on all hosts that should have access to storage | 14:43 |
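Concretely, that means the br-storage block in `provider_networks` of `openstack_user_config.yml` can just be deleted, leaving the usual management entry to carry storage traffic as well. A hedged sketch with illustrative values, patterned after the stock OSA example config:

```yaml
---
# Hypothetical provider_networks after dropping br-storage; the container
# management network already reaches every host that needs storage access.
- network:
    container_bridge: "br-mgmt"
    container_type: "veth"
    container_interface: "eth1"
    ip_from_q: "container"
    type: "raw"
    group_binds:
      - all_containers
      - hosts
    is_container_address: true
```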
admin1 | got it | 14:43 |
admin1 | will give it a try | 14:43 |
noonedeadpunk | but if you're deploying ceph-ansible with osa - you will likely need to define a couple of overrides | 14:44 |
admin1 | not in this one | 14:44 |
admin1 | this one is non ceph .. | 14:44 |
admin1 | using nfs for cinder/glance | 14:44 |
noonedeadpunk | yeah. so then just ensure that nfs has the mgmt vlan | 14:45 |
admin1 | wait .. jrosser said it's also possible to route using vxlan | 14:45 |
admin1 | how ? | 14:45 |
admin1 | create a manual vtep and bridge ? | 14:45 |
jrosser | yes | 14:46 |
admin1 | i want to test/explore this idea | 14:46 |
jrosser | if you were really on it you could make that correspond to the lbaas network in neutron | 14:46 |
admin1 | because right now i have 5 vlans. maybe one day i get just 2 vlans and i still need to do stuff like lbaas and storage and trove etc that may need their own bridge .. | 14:46 |
jrosser | like the same VNI on the controller | 14:47 |
noonedeadpunk | yeah, vxlan is a really good idea to use with lbaas | 14:47 |
admin1 | since osa only needs a bridge and does not care about what underlies the bridge, it's a good idea | 14:47 |
jrosser | it can be just standard /etc/<whatever-you-use> to configure networks for a vxlan | 14:47 |
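A hedged netplan sketch of such a manual VTEP plus bridge - this assumes a netplan recent enough to support vxlan tunnels (added around 0.105), all IDs and addresses are made up, and peer flooding/FDB handling is a separate exercise (see the blog linked just below):

```yaml
---
# /etc/netplan/60-lbaas.yaml - hypothetical
network:
  version: 2
  tunnels:
    vxlan-lbaas:
      mode: vxlan
      id: 100                # VNI - must match on every participating host
      local: 172.29.236.10   # this host's underlay address
      mtu: 1450              # leave headroom for the vxlan encapsulation
  bridges:
    br-lbaas:
      interfaces: [vxlan-lbaas]
      addresses: [172.29.232.10/22]
```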
noonedeadpunk | oh, wait, you said OVN? Then not vxlan but geneve :D | 14:47 |
jrosser | both i think these days? | 14:48 |
admin1 | well, ovn and geneve can co-exist in different bridges | 14:48 |
noonedeadpunk | One of these | 14:48 |
noonedeadpunk | Not both at the same time afaik | 14:48 |
noonedeadpunk | Also - vxlan for ovn is limited to 4096 networks | 14:48 |
noonedeadpunk | at least I was told so by slaweq after a couple of beers (sounds legit, doesn't it? hehe) | 14:49 |
jrosser | admin1: https://vincent.bernat.ch/en/blog#tag-network-vxlan | 14:49 |
noonedeadpunk | I haven't tested ovn with vxlan though | 14:50 |
noonedeadpunk | #startmeeting openstack_ansible_meeting | 15:03 |
opendevmeet | Meeting started Tue Feb 14 15:03:34 2023 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:03 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:03 |
opendevmeet | The meeting name has been set to 'openstack_ansible_meeting' | 15:03 |
noonedeadpunk | #topic office hours | 15:03 |
noonedeadpunk | #topic rollcall | 15:03 |
noonedeadpunk | o/ | 15:04 |
noonedeadpunk | sorry for using the wrong topic at first | 15:04 |
damiandabrowski | hi | 15:05 |
jrosser | o/ hello | 15:07 |
noonedeadpunk | #topic bug triage | 15:08 |
noonedeadpunk | We have a couple of new bug reports, and one I find very scary/confusing | 15:08 |
noonedeadpunk | #link https://bugs.launchpad.net/openstack-ansible/+bug/2007044 | 15:09 |
noonedeadpunk | I've tried to inspect the code, at the very least for neutron, and haven't found any possible way for such a thing to happen | 15:09 |
noonedeadpunk | I was thinking to maybe add extra conditions here https://opendev.org/openstack/ansible-role-python_venv_build/src/branch/master/tasks/python_venv_install.yml#L43-L48 to check for the common path we use for distro installs | 15:10 |
noonedeadpunk | As I recall patching some role to prevent running python_venv_build for the distro path | 15:11 |
noonedeadpunk | As then venv_install_destination_path will be passed as <service>_bin, and <service>_bin for sure comes from distro_install.yml for the distro path | 15:12 |
noonedeadpunk | But the bug looks a bit messy overall | 15:12 |
jrosser | we could have a default that says `/openstack` in that role | 15:12 |
jrosser | and if it doesn't match that at the start then `fail:` | 15:12 |
noonedeadpunk | Well. I do use this role outside of openstack as well... | 15:13 |
jrosser | right - so some more generic way of defining a "safe path" | 15:13 |
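One hedged way to express that "safe path" guard in python_venv_build - the `venv_build_safe_path_prefixes` variable is invented here precisely so non-OpenStack users could override it:

```yaml
---
# Hypothetical guard task; refuse to touch anything outside known prefixes.
- name: Refuse to manage venvs outside known-safe prefixes
  ansible.builtin.assert:
    that:
      - venv_install_destination_path is match('^(' ~ (venv_build_safe_path_prefixes | map('regex_escape') | join('|')) ~ ')')
    fail_msg: >-
      {{ venv_install_destination_path }} is outside
      {{ venv_build_safe_path_prefixes }} - refusing to continue.
  vars:
    venv_build_safe_path_prefixes:
      - /openstack
```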
noonedeadpunk | I just can't think of a good way of doing that, to be frank | 15:15 |
noonedeadpunk | We use `/usr/bin` mainly for the distro path | 15:15 |
noonedeadpunk | But basically - ppl are free to set venv_install_destination_path to any crazy thing... | 15:16 |
noonedeadpunk | I was going to check some more roles to see if we might somehow run the role for the distro path... | 15:18 |
jrosser | i think we need to ask for a reproduction and log in the bug report | 15:19 |
jrosser | as i've never seen anything like that before | 15:19 |
noonedeadpunk | Another thing from the same person that you've never seen... | 15:20 |
noonedeadpunk | #link https://bugs.launchpad.net/openstack-ansible/+bug/2006986 | 15:21 |
noonedeadpunk | I was going to create a sandbox, but haven't managed to | 15:22 |
noonedeadpunk | But since I know you're using dns-01 and have some envs on zed - I'm not really sure I will be able to reproduce that either | 15:23 |
damiandabrowski | "Haproxy canno't using fqdn for binding and wait for an IP." | 15:24 |
damiandabrowski | is that really true? | 15:24 |
noonedeadpunk | Well, as I wrote there - we have haproxy bound to an fqdn everywhere... | 15:25 |
noonedeadpunk | I can assume that it might not be true with newer haproxy versions or when having DNS RR or failing to resolve DNS.... | 15:25 |
noonedeadpunk | But I don't see any reference to binding on an FQDN in the haproxy docs https://www.haproxy.com/documentation/hapee/latest/configuration/binds/syntax/ | 15:29 |
noonedeadpunk | I kind of wonder if debian or smth ships a newer haproxy where binding on an fqdn is no longer possible | 15:30 |
noonedeadpunk | `The bind directive accepts IPv4 and IPv6 IP addresses.` | 15:30 |
noonedeadpunk | Actually, I'm wondering if it's not time to try to rename internal_lb_vip_address | 15:31 |
noonedeadpunk | It's hugely confusing | 15:31 |
damiandabrowski | works fine at least on HA-Proxy version 2.0.29-0ubuntu1.1 2023/01/19 | 15:31 |
noonedeadpunk | Well. That could be some undocumented behaviour we've taken for granted.... | 15:32 |
jrosser | comment #9 suggests it is working now? | 15:33 |
jrosser | i'm pretty unclear what is going on in the earlier comments | 15:33 |
noonedeadpunk | yeah... | 15:33 |
jrosser | oh right but `haproxy_keepalived_external_vip_cidr` will stop the fqdn being in the config file? | 15:34 |
noonedeadpunk | in the keepalived file | 15:34 |
jrosser | well not sure actually | 15:34 |
noonedeadpunk | for haproxy you'd need haproxy_bind_internal_lb_vip_address | 15:35 |
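As a sketch of that workaround in `user_variables.yml` - `haproxy_bind_internal_lb_vip_address` and `haproxy_keepalived_external_vip_cidr` are the names mentioned in this conversation, the other two are assumed by symmetry, and all values are illustrative:

```yaml
---
# Pin haproxy/keepalived to explicit IPs while the *_lb_vip_address
# variables stay as FQDNs for the endpoint catalog.
haproxy_bind_internal_lb_vip_address: "172.29.236.9"
haproxy_bind_external_lb_vip_address: "203.0.113.9"
haproxy_keepalived_internal_vip_cidr: "172.29.236.9/32"
haproxy_keepalived_external_vip_cidr: "203.0.113.9/32"
```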
noonedeadpunk | I think we should get rid of internal/external_lb_vip_address by using smth with more obvious naming | 15:36 |
noonedeadpunk | As basically what we want this variable to do is represent the public/internal endpoints in keystone? | 15:36 |
noonedeadpunk | And serve as a default for keepalived/haproxy whenever possible | 15:37 |
noonedeadpunk | so maybe we can introduce smth like openstack_internal/external_endpoint, set its default to internal/external_lb_vip_address and replace _lb_vip_address everywhere in docs/code with these new vars? | 15:39 |
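That is, roughly this - a sketch of the proposal itself, not merged code:

```yaml
---
# Proposed indirection: new, clearly-named vars defaulting to the old ones.
openstack_internal_endpoint: "{{ internal_lb_vip_address }}"
openstack_external_endpoint: "{{ external_lb_vip_address }}"
```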
jrosser | having it actually describe what it is would be good | 15:40 |
jrosser | though taking into account doing dashboard.example.com and compute.example.com rather than port numbers would be good too | 15:40 |
jrosser | there is perhaps a larger piece of work to understand how to make that tidy as well | 15:41 |
noonedeadpunk | what confuses me a lot is saying that an address can be an fqdn... | 15:41 |
noonedeadpunk | yeah, I assume that would need quite some ACLs, right? | 15:41 |
jrosser | yeah but perhaps that makes it clearer what we need | 15:41 |
jrosser | as the thing that haproxy binds to is either some IP or a fqdn | 15:42 |
noonedeadpunk | I'm not sure now if it should bind to fqdn.... or if it does in 2.6 for example... | 15:42 |
jrosser | and we completely don't handle dual stack nicely either | 15:43 |
jrosser | feels like we're getting into PTG topic territory with this tbh | 15:43 |
noonedeadpunk | yeah, totally... Let me write it down on the etherpad then :D | 15:43 |
jrosser | dual stack is possible - we have it but the overrides are really quite a lot | 15:44 |
noonedeadpunk | I'd say one of the problems as of today is that <service>.example.com is part of the role | 15:45 |
noonedeadpunk | service role I mean | 15:46 |
noonedeadpunk | As I guess we should join nova_service_type with internal_lb_vip_address by default for that | 15:47 |
noonedeadpunk | So this leads us to more relevant topic | 15:47 |
noonedeadpunk | #topic office hours | 15:48 |
noonedeadpunk | Current work that happens on haproxy with regards to internal TLS | 15:48 |
damiandabrowski | today I'm working on: | 15:49 |
damiandabrowski | - removing haproxy_preconfigured_services and stick only with haproxy_services | 15:49 |
damiandabrowski | - adding support for haproxy_*_service_overrides variables | 15:49 |
damiandabrowski | - evaluating possibility of moving LE temporary haproxy service feature from haproxy_server role to openstack-ansible repo | 15:49 |
damiandabrowski | i'll push changes today/tomorrow | 15:49 |
damiandabrowski | I also pushed PKI/TLS support for glance and neutron (however i need to push some patches to dependent roles to get them working): | 15:49 |
damiandabrowski | https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/821011 | 15:49 |
damiandabrowski | https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/873654 | 15:49 |
noonedeadpunk | damiandabrowski: I have a question - was there a reason why we don't want to include the haproxy role from inside the service roles? | 15:50 |
noonedeadpunk | As it feels right now that implementation of these named endpoints would be way easier, as we would have access to vars that are defined inside the roles | 15:51 |
noonedeadpunk | Or it was somehow more tricky with delegation? | 15:51 |
jrosser | what would we do for the galera role there? | 15:51 |
noonedeadpunk | And handlers? | 15:51 |
jrosser | do we want to couple the galera role with the haproxy one like that when they are currently independent | 15:52 |
noonedeadpunk | jrosser: to be frank, I should return to my work on proxysql that I put on hold a year ago... | 15:52 |
jrosser | i am also using galera_server outside OSA | 15:53 |
damiandabrowski | hmm, i'm not sure if i understand you correctly, can you provide some example? | 15:53 |
damiandabrowski | why do you think it would be better to patch each role? | 15:53 |
noonedeadpunk | Well. It doesn't make haproxy a really good option.... | 15:53 |
noonedeadpunk | for galera balancing | 15:53 |
jrosser | anyway the fundamental question seems to be whether we should call the haproxy role from inside things like os_glance | 15:55 |
jrosser | or if it should be done somehow in the playbook | 15:55 |
noonedeadpunk | yes ^ | 15:55 |
jrosser | and then also i am not totally following damiandabrowski> - evaluating possibility of moving LE temporary haproxy service feature from haproxy_server role to openstack-ansible repo | 15:56 |
jrosser | ^ is this about how the code is now, or modifiying the new patches | 15:56 |
noonedeadpunk | jrosser: making galera dependent on haproxy doesn't make much sense to me personally | 15:56 |
noonedeadpunk | I'm not sure though if you wanted to do that or not | 15:56 |
jrosser | i think we should keep those decoupled, and also rabbitmq | 15:57 |
noonedeadpunk | But I'd rather not, and leave galera in default_services or whatever the var will be | 15:57 |
noonedeadpunk | Yes | 15:57 |
noonedeadpunk | but for os_<service> I think it does make sense to call haproxy role from them | 15:57 |
damiandabrowski | "^ is this about how the code is now, or modifiying the new patches" - modifying patches, that was your suggestion, right? | 15:58 |
jrosser | yes, thats right | 15:58 |
jrosser | is it possible to make nearly no change to haproxy role? | 15:59 |
damiandabrowski | i don't think so... | 16:00 |
damiandabrowski | but i can at least try to make as few changes as possible | 16:01 |
damiandabrowski | i still have no idea how we can avoid having one haproxy_service_config.yml for "preconfigured" services and another for services configured by service playbooks | 16:02 |
jrosser | we can talk that through if you like | 16:03 |
noonedeadpunk | We can make some call even if needed | 16:03 |
damiandabrowski | yeah, sure | 16:05 |
noonedeadpunk | #endmeeting | 16:05 |
opendevmeet | Meeting ended Tue Feb 14 16:05:25 2023 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 16:05 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-02-14-15.03.html | 16:05 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-02-14-15.03.txt | 16:05 |
opendevmeet | Log: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-02-14-15.03.log.html | 16:05 |
noonedeadpunk | Sorry, we were out of time :( | 16:05 |
noonedeadpunk | But I think we at least need to decide how we want to call the haproxy role - from playbooks or roles. Calling from playbooks leaves the role cleaner, but indeed I'm wondering if role vars might be useful | 16:06 |
damiandabrowski | i can evaluate if it's possible | 16:07 |
noonedeadpunk | Or, as I recall, in one of the iterations the definition was in role defaults and the include was at playbook level - I assume it was somehow reading these vars? | 16:08 |
jrosser | well i already did not like having haproxy_* vars in role/defaults/main.yml | 16:08 |
jrosser | anyway | 16:08 |
noonedeadpunk | jrosser: well, what if they will be glance_haproxy_service? | 16:08 |
noonedeadpunk | like we do definition for pki for example | 16:08 |
jrosser | yeah that's not so bad | 16:09 |
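That would come out roughly like this in the os_glance defaults, mirroring how the pki definitions are namespaced per service; the keys follow the existing haproxy service maps, but the values here are illustrative:

```yaml
---
# Hypothetical role-default, namespaced like the pki vars.
glance_haproxy_services:
  - service:
      haproxy_service_name: glance_api
      haproxy_backend_nodes: "{{ groups['glance_api'] | default([]) }}"
      haproxy_port: 9292
      haproxy_balance_type: http
      haproxy_backend_options:
        - "httpchk GET /healthcheck"
```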
noonedeadpunk | Also the changes you're referring to are already abandoned :) But I was more talking about how, at the playbook level, you somehow got access to role vars | 16:10 |
noonedeadpunk | Then basically it doesn't matter where to trigger the role | 16:10 |
noonedeadpunk | Though I still don't get how this https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/871193/1/defaults/main.yml was working... | 16:11 |
damiandabrowski | sorry, i'm getting distracted by personal matters | 16:14 |
damiandabrowski | but I'm also not sure if triggering the haproxy role from inside other roles (openstack services, repo, rabbitmq, galera etc.) is the right thing to do | 16:16 |
noonedeadpunk | ok, good then :) | 16:22 |
damiandabrowski | so maybe i can fix all your comments which are fairly easy to fix first, and during the next iteration (later this week) we can discuss more complex issues together | 16:26 |
damiandabrowski | is that ok for you? | 16:26 |
jrosser | sure | 16:28 |
spatel | jamesdenton around? | 18:44 |
spatel | I have a question on netplan: if i create bridges like br-mgmt, br-vlan etc. using netplan... is STP disabled by default on netplan-created bridges? | 18:45 |
spatel | The official Ubuntu documentation says STP is enabled by default. But when i use the brctl show command it says STP enabled: no | 18:46 |
spatel | who should i believe? | 18:46 |
noonedeadpunk | you should believe what you see on your system as a result :) | 18:53 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-haproxy_server master: Add a variable to allow extra raw config to be applied to all frontends https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/873745 | 18:57 |
jrosser | noonedeadpunk: i applied the workaround to haproxy with that, and well, "nothing broke" :) | 18:58 |
noonedeadpunk | jrosser: I wonder if we should apply |unique there as well ? | 19:00 |
noonedeadpunk | but yeah, we never did that before, so | 19:01 |
spatel | noonedeadpunk you are saying whatever brctl says is true? | 19:03 |
jrosser | i thought we could also add another role-scope variable here (item.service.haproxy_frontend_raw|default([ `haproxy_frontend_raw` ])) + haproxy_frontend_extra_raw | 19:04 |
jrosser | because the default to [] is a bit sad there | 19:04 |
jrosser | then you can do a conditional global setting if the service doesn't have anything already, as well as additive "extra" raw statements | 19:05 |
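In template terms, the precedence described above would come out roughly as follows - a sketch, not the merged patch: per-service raw config wins, otherwise the role-scope default applies, and the "extra" statements are always appended:

```yaml
frontend_raw: >-
  {{ (item.service.haproxy_frontend_raw
      | default(haproxy_frontend_raw | default([])))
     + (haproxy_frontend_extra_raw | default([])) }}
```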
noonedeadpunk | spatel: netplan is just a configuration tool that creates a config and passes it to systemd-networkd | 19:07 |
noonedeadpunk | brctl basically shows you the currently applied configuration | 19:07 |
spatel | Hmm, but why does netplan say STP is enabled by default on the bridges it creates? I was confused there. | 19:08 |
spatel | We had a strange STP loop from one of the compute nodes, and while debugging i found this contradictory statement | 19:08 |
noonedeadpunk | well... it's indeed true in the code https://github.com/canonical/netplan/blob/0.105/tests/generator/base.py#L115 | 19:10 |
noonedeadpunk | disregard that please | 19:11 |
noonedeadpunk | but yeah, I think in the code it's indeed set to true... even though it's C code... | 19:13 |
noonedeadpunk | that could actually be quite a legit bug in netplan... | 19:13 |
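Whatever the default really is, the ambiguity can be sidestepped by setting STP explicitly, since `parameters.stp` is a documented netplan bridge option; the bridge membership below is illustrative:

```yaml
---
# /etc/netplan/50-bridges.yaml - make the STP choice explicit so the
# rendered backend config doesn't depend on netplan's default.
network:
  version: 2
  bridges:
    br-mgmt:
      interfaces: [vlan10]
      parameters:
        stp: false
```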
spatel | I have CentOS and Ubuntu computes and i have noticed the STP loop only on the Ubuntu computes.. It happened multiple times | 19:19 |
spatel | Now i am nervous that it's possibly a stupid bug hurting me.. | 19:19 |
spatel | How do i prove that STP is really, really off on the bridges? | 19:20 |
admin1 | brctl show will say whether STP is on, yes or no | 21:40 |
admin1 | that is not enough ? | 21:40 |
ElDuderino | Another ‘not sure how to ask you’ question. Background: I had 3k (test) vms that were booted from volumes. I successfully deleted all 3k vms in one go/pass. I then issued a command to delete the orphaned vols via the openstack cli, which succeeded. | 22:07 |
ElDuderino | While monitoring all the vhosts in the rabbit dashboard, I didn't note many non-acks. Everything seems to run as expected. | 22:07 |
ElDuderino | So, while deleting the 3k orphaned vols, it's also worth noting that I had 0 vms at this point. | 22:07 |
ElDuderino | At that time, I issued a concurrent boot-from-volume vm create request (to test volume CRUD limits) for ~400 vms. | 22:07 |
ElDuderino | At which point nova filtering (we don't use the placement service, I know, I know) found valid hosts, neutron assigned IPs as expected, glance cached the images to the hosts but they all failed to build b/c they were waiting on cinder to process the concurrent (new) create requests while also processing the prior delete requests. | 22:07 |
ElDuderino | I looped a script to tell me how many vols were left to delete to monitor it and once the number left dropped below 1k or so, I was able to create the delta (up to the 1k point, or whatever the number was, I don’t recall at the moment) with no failures. | 22:07 |
ElDuderino | All of that to say, where do I set/tune/test different values? Or is there a max quota set elsewhere that concerns itself with cinder calls? I’ll keep googling but wanted to ask you also. THX. | 22:08 |
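Not an authoritative answer, but the knobs that usually govern this are cinder's per-backend native thread pool and the oslo.messaging RPC executor pool. Expressed as OSA overrides, that might look like the sketch below - the option names exist upstream, but the values are guesses to load-test against, not recommendations:

```yaml
---
# Hypothetical starting point - tune and re-test, don't copy blindly.
cinder_cinder_conf_overrides:
  DEFAULT:
    # oslo.messaging threads serving each cinder RPC server
    executor_thread_pool_size: 128
    # threads performing the (blocking) driver calls per backend
    backend_native_threads_pool_size: 60
```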