Saturday, 2023-02-04

*** mtreinish_ is now known as mtreinish  01:39
*** mtreinish_ is now known as mtreinish  02:41
*** dasm|afk is now known as dasm|off  04:15
EugenMayer4: sean-k-mooney, that sounds like way too much effort for that. I'm fairly familiar with OPNsense, but deploying it (manually, no tf support) just for that purpose seems over the top. I assumed that a DMZ should be something that neutron can deal with without bigger issues.  07:04
sean-k-mooney[m]: not really, that is not something they support  07:06
sean-k-mooney[m]: your only way to do this in neutron is security groups  07:06
sean-k-mooney[m]: or the firewall-as-a-service project, which as I said is no longer actively developed  07:06
sean-k-mooney[m]: so if security groups don't work for your use case, then you need to create something yourself. I assume you have tried security groups and that was not enough  07:08
sean-k-mooney[m]: security groups can match on ports/addresses and on ingress vs egress  07:09
sean-k-mooney[m]: but if you want to enforce that policy regardless of how the VMs are booted on the DMZ, you can't really do that at the network level in neutron out of the box  07:11
sean-k-mooney[m]: you could try https://docs.openstack.org/neutron/latest/admin/fwaas.html but as I said I'm not sure that is even maintained anymore; you should talk to the neutron team about it first, because the project was being discontinued at one point  07:16
opendevreview: melanie witt proposed openstack/nova master: Reproducer for bug 2003991 unshelving offloaded instance  https://review.opendev.org/c/openstack/nova/+/872470  09:53
opendevreview: melanie witt proposed openstack/nova master: Enforce quota usage from placement when unshelving  https://review.opendev.org/c/openstack/nova/+/872471  09:53
EugenMayer4: sean-k-mooney[m] thank you for clarifying. The scope of security groups is hard to grasp. I tried secgroups on the VM, but I cannot limit the ingress to the intranet LAN while opening the egress to the internet via 0.0.0.0/0  12:06
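For reference, a minimal sketch of what that rule set looks like through openstacksdk; the cloud name and CIDRs below are placeholder assumptions, not values from this conversation:

```python
# Sketch: ingress restricted to the intranet LAN, egress left open.
# "mycloud" and the CIDRs are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")  # assumes a clouds.yaml entry

sg = conn.network.create_security_group(
    name="dmz-vm",
    description="intranet ingress only, unrestricted egress",
)

# Ingress: allow only the intranet LAN. Everything else stays blocked,
# because security groups are default-deny on ingress.
conn.network.create_security_group_rule(
    security_group_id=sg.id,
    direction="ingress",
    ethertype="IPv4",
    remote_ip_prefix="10.10.0.0/16",  # placeholder intranet CIDR
)

# Egress: a fresh security group normally already carries allow-all
# egress rules, so outbound 0.0.0.0/0 works without adding anything.
```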
EugenMayer4: also, I assume that port-based (router port) SGs with OVS are entirely broken  12:06
sean-k-mooney[m]: not broken; security groups were only ever intended for VM ports  12:07
sean-k-mooney[m]: the firewall-as-a-service API was for router-based firewalling.  12:07
EugenMayer4: I understand that (now), after digging into things. But that is not how they are exposed, described and "advertised"  12:07
sean-k-mooney[m]: with that said, you have different behavior with iptables and OpenFlow in some cases  12:08
sean-k-mooney[m]: the OpenFlow firewall is more permissive in terms of what packets are allowed  12:09
EugenMayer4: yes, the iptables vs OpenFlow stuff is entirely on me. We use OVS, and I'm actually only ok-ish with iptables in general; I understand the concepts and know how to debug and isolate. OpenFlow is rather blowing my mind  12:09
EugenMayer4: so that one is very much on me. I seem to struggle a lot with netns/OpenFlow and all the tools one needs (and a lot of more complex concepts)  12:09
sean-k-mooney[m]: I learned OpenFlow and OVS at the same time I was learning linux networking, so I generally understand the OVS side better  12:10
sean-k-mooney[m]: if you were using the iptables firewall driver, you might be able to open egress to the world and limit ingress  12:11
sean-k-mooney[m]: the way the connection tracking stuff works is different between the two  12:11
EugenMayer4: well yes, I could limit the other side, but closing down the ingress on all the other targets seems overkill. But surely doable somewhat  12:12
sean-k-mooney[m]: https://docs.openstack.org/neutron/latest/admin/config-ovsfwdriver.html#differences-between-ovs-and-iptables-firewall-drivers  12:13
EugenMayer4: I'm used to limiting what networks can do, since that really helps reduce the extra cost. I mean, networks in OpenStack, besides the segmentation, have no real use otherwise if you cannot really control the flow, right? I mean, you could also opt for a huge /16 network or whatever and then use ingress rules for limitations (somewhat)  12:13
sean-k-mooney[m]: the neutron model is all based around the ports, not the networks  12:14
sean-k-mooney[m]: and doing QoS/firewalling on the ports  12:15
EugenMayer4: interesting; reading the OVS part, it would mean that if I have an egress on 0.0.0.0/0 and on 10.10.5.5/32, and I want to talk to 10.10.5/32, it might block it (but that does not make sense)  12:15
sean-k-mooney[m]: partly because for most of its life it only had control at the endpoints  12:15
EugenMayer4: yes, maybe I have to adopt that more. I usually tend to do more with networks  12:15
sean-k-mooney[m]: i.e. linux bridge and OVS really could not influence the core network at all  12:15
EugenMayer4: so you would, potentially, instead of controlling the egress of my DMZ network, rather control the ingress of the VMs in the intranet network  12:16
sean-k-mooney[m]: so there were those that wanted to do that, and they created the firewall-as-a-service and service function chaining projects to try and do that  12:16
EugenMayer4: that would be more the neutron way, right?  12:16
sean-k-mooney[m]: the problem is they didn't keep maintaining them  12:16
sean-k-mooney[m]: controlling ingress to the VMs is the normal way, yes  12:17
sean-k-mooney[m]: security groups by default block all traffic  12:17
sean-k-mooney[m]: and you are expected to only open the port to the clients that need access  12:17
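As a rough illustration of that default-deny, per-port model, a minimal openstacksdk sketch; the cloud name, CIDR, service, and port name are assumed placeholders, not values from this discussion:

```python
# Default-deny sketch: a fresh group blocks all ingress, so each
# client-network/port pair gets an explicit rule.
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder cloud name
sg = conn.network.create_security_group(name="intranet-web")

# Open exactly one service (HTTPS) to exactly one client network.
conn.network.create_security_group_rule(
    security_group_id=sg.id,
    direction="ingress",
    ethertype="IPv4",
    protocol="tcp",
    port_range_min=443,
    port_range_max=443,
    remote_ip_prefix="10.10.0.0/16",  # placeholder client CIDR
)

# Firewalling is applied on the VM port, matching the port-centric
# neutron model described above.
port = conn.network.find_port("vm-1-port")  # placeholder port name
conn.network.update_port(port, security_group_ids=[sg.id])
```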
EugenMayer4: yeah, I think the OpenStack hype peaked in 2018 and has quieted down quite a bit since. I also looked for some literature recently; people basically stopped publishing in 2018  12:17
sean-k-mooney[m]: I thought the hype died before that, but ok :)  12:18
EugenMayer4: well, I'm not the one to define that; I came in very much after that. I think we have run OpenStack since Jan 2021  12:19
sean-k-mooney[m]: the problem, especially on the neutron side, is none of the network vendors wanted to work on and maintain the core  12:19
sean-k-mooney[m]: they wanted to integrate their network stack so they could sell you that  12:19
EugenMayer4: well, I think the "vendors" wanted to differentiate the services they offer, and to have that, they stopped sharing, to be "different and better" than the competitor vendor  12:19
EugenMayer4: looking at what is really missing in OpenStack, and looking at who is using OpenStack at big scale, I see that most of the things I miss have been solved on-platform by that vendor  12:20
sean-k-mooney[m]: right, but since neutron allows vendor extensions in the API, cisco or juniper would just add a vendor extension instead of implementing a common shared API  12:20
EugenMayer4: yeah, I see that.  12:20
sean-k-mooney[m]: there is less of a problem with that in cinder  12:21
sean-k-mooney[m]: they do not allow arbitrary API extensions and use microversions like nova and keystone  12:21
sean-k-mooney[m]: so there is driver-to-driver variance  12:22
sean-k-mooney[m]: but the API is much more uniform  12:22
EugenMayer4: I would also say that, in terms of API, OpenStack seems designed by sysops people opting for microservices while forgetting / not caring about all the downsides and complications. There are so many loose ends; nobody is responsible for a task. Creating a backup with glance of a machine in nova ends up being a task nobody has control over: one service starts it, the other "does something but has no clue who and what it belongs to", and then, if anything errors, it does not know how and what to recover or even where to report the error  12:23
EugenMayer4: I think the hypervisor/network/encapsulation part of OpenStack is really solid; that's where a lot of the sysops people could do a lot of the good stuff. But the API and software stack, the architecture, seem to have been neglected, and as an "end user" that really bothers me  12:24
EugenMayer4: a war story: from time to time, with no specific pattern, one of our VMs is just shut down. In the VM audit log I see an anonymous user (so it will be something systemic) telling the VM to shut down (cleanly). That's it, no trails, nothing. So something told something to do something somewhere, and I cannot even track the "next layer". Or: I have a hard anti-affinity server group. 3 VMs, 3 computes. First deployment, happy time, they land on one compute each. From time to time, OpenStack decides to reschedule them, and one VM lands on the same compute as another, so one compute has none. This has happened about 3 times now, and finally I will basically do the stupid thing and tie each of them to one specific compute. What is responsible for the re-scheduling, IDK.  12:29
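For context, a sketch of how such a hard anti-affinity group is created with openstacksdk, using the older "policies" form of the server-group API; all names and IDs are placeholder assumptions, not taken from the conversation:

```python
# Sketch of a hard anti-affinity server group: the scheduler's
# anti-affinity filter rejects hosts that already hold a group member
# at scheduling time. Names and IDs below are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

group = conn.compute.create_server_group(
    name="ha-trio",              # placeholder name
    policies=["anti-affinity"],  # hard policy (pre-2.64 API form)
)

image_id = "IMAGE-UUID"      # placeholder
flavor_id = "FLAVOR-UUID"    # placeholder
network_id = "NETWORK-UUID"  # placeholder

# Each member is booted with the group passed as a scheduler hint.
server = conn.compute.create_server(
    name="vm-1",
    image_id=image_id,
    flavor_id=flavor_id,
    networks=[{"uuid": network_id}],
    scheduler_hints={"group": group.id},
)
```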
EugenMayer4: Enough of the ranting (if it was any). Thank you a lot for your insight (as always!)  12:29
opendevreview: melanie witt proposed openstack/nova master: Reproducer for bug 2003991 unshelving offloaded instance  https://review.opendev.org/c/openstack/nova/+/872470  13:19
opendevreview: melanie witt proposed openstack/nova master: Enforce quota usage from placement when unshelving  https://review.opendev.org/c/openstack/nova/+/872471  13:19
