*** setra has joined #openstack-neutron-ovn | 00:19 | |
*** setra has quit IRC | 00:44 | |
russellb | jamesdenton: for which database, the main ovs database? (not the OVN dbs) | 00:51 |
russellb | jamesdenton: "ovs-vsctl set-manager <...>" -- that persists it in the db | 00:52 |
russellb | there are separate commands for the equivalent settings in the OVN northbound and southbound dbs | 00:53 |
russellb | for those, it's "ovn-nbctl set-connection <...>" and "ovn-sbctl set-connection <...>" | 00:54 |
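(A concrete sketch of the three commands russellb lists; the `ptcp:` targets and ports below are illustrative examples — 6640/6641/6642 are merely the conventional OVS manager and OVN NB/SB ports, not values from this discussion:)

```shell
# Persist a remote-access endpoint in each database (survives restarts
# because the target is stored in the db itself, as russellb notes)
ovs-vsctl set-manager ptcp:6640:127.0.0.1      # main OVS db
ovn-nbctl set-connection ptcp:6641             # OVN northbound db
ovn-sbctl set-connection ptcp:6642             # OVN southbound db

# Read the stored settings back
ovs-vsctl get-manager
ovn-nbctl get-connection
ovn-sbctl get-connection
```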
*** yamamoto has joined #openstack-neutron-ovn | 01:15 | |
openstackgerrit | Dong Jun proposed openstack/networking-ovn master: Python3.5 RuntimeError: dictionary changed size during iteration https://review.openstack.org/501062 | 01:43 |
openstackgerrit | Dong Jun proposed openstack/networking-ovn master: Set requested-chassis with binding host_id. https://review.openstack.org/499820 | 01:44 |
*** viaggarw has joined #openstack-neutron-ovn | 01:48 | |
*** vikrant has joined #openstack-neutron-ovn | 01:48 | |
*** fzdarsky_ has joined #openstack-neutron-ovn | 01:52 | |
*** fzdarsky has quit IRC | 01:54 | |
*** viaggarw has quit IRC | 02:08 | |
*** vikrant has quit IRC | 02:08 | |
*** setra has joined #openstack-neutron-ovn | 02:12 | |
*** yamamoto_ has joined #openstack-neutron-ovn | 03:25 | |
*** yamamoto has quit IRC | 03:28 | |
*** janki has joined #openstack-neutron-ovn | 04:35 | |
*** trinaths has joined #openstack-neutron-ovn | 04:42 | |
*** yamamoto_ has quit IRC | 05:07 | |
*** yamamoto has joined #openstack-neutron-ovn | 05:09 | |
*** pcaruana has joined #openstack-neutron-ovn | 05:27 | |
*** janki has quit IRC | 05:28 | |
*** janki has joined #openstack-neutron-ovn | 05:53 | |
*** zefferno has joined #openstack-neutron-ovn | 06:02 | |
*** janki has quit IRC | 06:29 | |
*** yamamoto has quit IRC | 06:34 | |
*** yamamoto has joined #openstack-neutron-ovn | 06:35 | |
*** janki has joined #openstack-neutron-ovn | 06:57 | |
*** ajo has quit IRC | 07:33 | |
*** ajo has joined #openstack-neutron-ovn | 07:45 | |
*** lucas-afk is now known as lucasagomes | 08:28 | |
*** ajo has quit IRC | 08:39 | |
*** ajo has joined #openstack-neutron-ovn | 09:04 | |
*** openstackgerrit has quit IRC | 09:18 | |
*** setra has quit IRC | 09:23 | |
*** setra has joined #openstack-neutron-ovn | 09:29 | |
*** yamamoto has quit IRC | 09:40 | |
*** openstackgerrit has joined #openstack-neutron-ovn | 09:55 | |
openstackgerrit | Merged openstack/networking-ovn master: Python3.5 RuntimeError: dictionary changed size during iteration https://review.openstack.org/501062 | 09:55 |
*** openstackgerrit has quit IRC | 10:03 | |
*** yamamoto has joined #openstack-neutron-ovn | 10:11 | |
*** yamamoto has quit IRC | 10:16 | |
*** numans has joined #openstack-neutron-ovn | 10:19 | |
numans | lucasagomes, Hi | 10:20 |
lucasagomes | numans, hi there | 10:20 |
numans | lucasagomes, i need your help a bit in this review - https://review.openstack.org/#/c/494293/ | 10:21 |
lucasagomes | numans, sure, lemme take a look | 10:21 |
numans | lucasagomes, basically the job - 'gate-tripleo-ci-centos-7-scenario007-multinode-oooq' is failing | 10:21 |
* lucasagomes check logs | 10:21 | |
numans | lucasagomes, so this job deploys OVN using tripleo and then runs the tempest tests | 10:21 |
lucasagomes | any visible error ? | 10:21 |
numans | lucasagomes, the test just times out. | 10:21 |
numans | lucasagomes, i think it is blocking when tempest test calls GET v2/securitygroups API | 10:22 |
lucasagomes | right | 10:22 |
lucasagomes | one strange thing there, why are we running the dhcp agent ? | 10:22 |
lucasagomes | http://logs.openstack.org/93/494293/13/check/gate-tripleo-ci-centos-7-scenario007-multinode-oooq/32582c3/logs/undercloud/var/log/neutron/dhcp-agent.log.txt.gz | 10:22 |
numans | lucasagomes, dhcp agent shouldn't be started | 10:23 |
numans | let me see | 10:23 |
lucasagomes | it seems that it's running | 10:23 |
numans | lucasagomes, you are seeing the logs of undercloud neutron | 10:23 |
numans | lucasagomes, see subnodes-2 | 10:23 |
lucasagomes | oh right | 10:24 |
numans | lucasagomes, here is the tempest logs - http://logs.openstack.org/93/494293/13/check/gate-tripleo-ci-centos-7-scenario007-multinode-oooq/32582c3/logs/undercloud/home/jenkins/tempest_output.log.txt.gz | 10:24 |
* lucasagomes looks | 10:24 | |
numans | any pointers would be really helpful. i have been looking at this issue for almost 3 days | 10:24 |
numans | lucasagomes, on a side note, i want to know if we can run just the one test - tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops in our networking-ovn job | 10:26 |
lucasagomes | numans, cool, I will jump in see if I can see something in the logs | 10:26 |
numans | lucasagomes, thanks | 10:26 |
numans | lucasagomes, have a look at this review as well - https://review.openstack.org/#/c/500609/ (this is same patch except that i am running 2 tests) and the log here - http://logs.openstack.org/09/500609/1/check/gate-tripleo-ci-centos-7-scenario007-multinode-oooq/db52f31/logs/undercloud/home/jenkins/tempest_output.log.txt.gz | 10:28 |
numans | lucasagomes, this shows the time out exception | 10:28 |
lucasagomes | numans, http://logs.openstack.org/93/494293/13/check/gate-tripleo-ci-centos-7-scenario007-multinode-oooq/32582c3/logs/undercloud/home/jenkins/tempest/tempest.log.txt.gz | 10:31 |
lucasagomes | as well | 10:31 |
lucasagomes | here's with the timestamp: http://logs.openstack.org/93/494293/13/check/gate-tripleo-ci-centos-7-scenario007-multinode-oooq/32582c3/logs/undercloud/home/jenkins/tempest/tempest.log.txt.gz#_2017-09-06_03_10_01_494 | 10:32 |
*** yamamoto has joined #openstack-neutron-ovn | 10:32 | |
numans | lucasagomes, i saw that and i dont think that's an issue | 10:32 |
numans | lucasagomes, we see the same error even in networking-ovn jobs | 10:32 |
lucasagomes | oh ok | 10:32 |
numans | lucasagomes, to fix this issue i submitted a patch here - https://review.openstack.org/#/c/500815/ | 10:33 |
lucasagomes | cause I was looking at the containerized version of that job and I didn't see that ping error | 10:33 |
* lucasagomes looks | 10:33 | |
numans | lucasagomes, in the containerized job, tempest is not run | 10:33 |
lucasagomes | numans, http://logs.openstack.org/93/494293/13/check/gate-tripleo-ci-centos-7-containers-multinode/d892bcb/logs/undercloud/home/jenkins/tempest_output.log.txt.gz#_2017-09-06_03_23_13 | 10:34 |
numans | lucasagomes, only gate-tripleo-ci-centos-7-scenario007- deploys OVN, rest is default neutron | 10:34 |
lucasagomes | right on | 10:34 |
numans | lucasagomes, yeah, the issue is seen only with OVN jobs not with ml2ovs | 10:34 |
numans | lucasagomes, so something is definitely wrong either with networking-ovn or with OVN | 10:35 |
numans | lucasagomes, in the case of OVN, the test tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops is successful, but cleanup is failing | 10:36 |
lucasagomes | yeah right at the tearDownClass | 10:38 |
* lucasagomes looks at the tempest code | 10:38 | |
*** setra has quit IRC | 10:48 | |
*** yamamoto has quit IRC | 10:48 | |
lucasagomes | numans, I'm still wondering about that ping test, I don't see any other apparent error | 11:03 |
lucasagomes | even increasing the number of pings from 1 to 3 the ping command still returns 1 (return code) | 11:03 |
lucasagomes | which is seen as a failure | 11:04 |
numans | lucasagomes, that's true. | 11:04 |
lucasagomes | cause 2 pings succeeded and 1 was lost | 11:04 |
lucasagomes | as you describe in the commit message | 11:04 |
lucasagomes | and once that happens, it seems to be seen as a timeout | 11:04 |
numans | lucasagomes, but why doesn't it complete the test instead of timing out ? | 11:04 |
lucasagomes | look here (1 sec lemme grab the code) | 11:04 |
numans | ok | 11:05 |
lucasagomes | numans, https://github.com/openstack/tempest/blob/263d551edef01eb654c7f32e6821e3caec5f1ea3/tempest/scenario/manager.py#L962-L967 | 11:06 |
lucasagomes | the _check_remote_connectivity() (the private one) | 11:06 |
lucasagomes | is the one invoking ping_host | 11:06 |
lucasagomes | which will return a failure due to that missing ping in OVN | 11:06 |
lucasagomes | and will log that time out message | 11:06 |
numans | lucasagomes, if we remove this "set -eu -o pipefail;" it would pass i suppose | 11:07 |
numans | lucasagomes, i mean the ping result | 11:07 |
lucasagomes | right yeah | 11:07 |
numans | lucasagomes, i will submit a test patch in tempest and make it depends on ... let me try that | 11:08 |
lucasagomes | numans, let's add a patch doing it ? Or maybe there's a way to tell ping to ignore 1 packet lost | 11:08 |
lucasagomes | like a success rate or something, idk | 11:08 |
lucasagomes | numans, ack | 11:08 |
lucasagomes | because I don't see anything else that resembles an error apart from that | 11:08 |
numans | lucasagomes, set -eu -o pipefail is the one which is making the command fail | 11:08 |
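(A minimal, self-contained reproduction of what numans describes: `fake_ping` below is a stub standing in for the real `ping -c 3 -w 3 <ip>` — iputils ping exits 1 when a deadline is set and any of the counted replies is missing — and under `set -eu -o pipefail` that single nonzero exit fails the whole remote command:)

```shell
#!/bin/bash
# Stub for `ping -c 3 -w 3 <ip>`: two replies arrived, one was lost,
# so an iputils-style ping exits 1 despite partial connectivity
fake_ping() {
    echo "3 packets transmitted, 2 received, 33% packet loss"
    return 1
}

# Roughly what tempest runs on the guest: `set -e` turns the single
# nonzero exit status into failure of the entire command string
run_check() {
    if ( set -eu -o pipefail; fake_ping > /dev/null ); then
        echo PASS
    else
        echo FAIL
    fi
}

run_check    # prints FAIL
```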
lucasagomes | I think that's the thing, it fails to ping and then it goes to teardown and it fails there too | 11:08 |
numans | lucasagomes, that's true. but why does the test block in cleanup ? | 11:08 |
lucasagomes | so one exception is covering the other | 11:09 |
lucasagomes | numans, I think teardown will be called upon success or failure right ? | 11:09 |
numans | lucasagomes, it does cleanup right ? | 11:09 |
lucasagomes | so if it fails on the test and then fails on the teardown as well maybe one exception will overwrite the other ? idk | 11:09 |
lucasagomes | yeah it def calls the cleanup | 11:09 |
lucasagomes | which fails listing the security groups for some reason | 11:10 |
numans | lucasagomes, may be. let me try this. thanks for your help and time | 11:10 |
numans | lucasagomes, it actually blocks or times out | 11:10 |
lucasagomes | numans, cool np I will continue to look | 11:10 |
numans | lucasagomes, thanks | 11:10 |
lucasagomes | let's see if the ping solves it | 11:10 |
lucasagomes | if not we can continue to dig into it | 11:10 |
numans | lucasagomes, i just hope, tripleo will pick up the tempest patch and use that | 11:10 |
lucasagomes | it should, with the depends-on in place | 11:10 |
numans | lucasagomes, did you happen to see the code where the ping command is built ? | 11:12 |
lucasagomes | numans, yeah, grep for "def ping_host" | 11:12 |
numans | lucasagomes, ok | 11:12 |
lucasagomes | numans, tempest/lib/common/utils/linux/remote_client.py: | 11:13 |
numans | lucasagomes, got it thanks | 11:13 |
lucasagomes | np | 11:13 |
* lucasagomes fingers crossed | 11:13 | |
numans | lucasagomes, luckily i dont have to submit a patch in tempest - https://github.com/openstack/tempest/blob/263d551edef01eb654c7f32e6821e3caec5f1ea3/tempest/config.py#L707 | 11:15 |
numans | lucasagomes, i can just update this patch - https://review.openstack.org/#/c/500815/ | 11:16 |
lucasagomes | ah nice | 11:16 |
lucasagomes | numans, by always failing the first ping in case the openflow rule is not in place, isn't it a bug in OVN ? | 11:16 |
lucasagomes | or seen as one ? | 11:17 |
numans | lucasagomes, that is a design decision | 11:17 |
*** jchhatbar has joined #openstack-neutron-ovn | 11:17 | |
lucasagomes | ok | 11:17 |
numans | lucasagomes, if you see the commit message | 11:17 |
numans | lucasagomes, what happens is that ovn-controller stores the learnt mac/ip in MAC_Binding table in southbound db | 11:17 |
lucasagomes | yeah i saw that, I was just wondering if we should consider that a bug in ovn or not | 11:17 |
lucasagomes | I understand it's setting up the rule upon the first ping | 11:18 |
numans | lucasagomes, in case for an ipv4 packet, if it doesn't know the mac for the ip, it generates ARP request packet and sends it | 11:18 |
lucasagomes | I see | 11:18 |
numans | lucasagomes, it's tricky to fix right :). you need to kind of enqueue that packet until the ARP reply is seen and then resend it :) | 11:19 |
lucasagomes | yeah, I was just wondering because everyone testing the connectivity by the return code of the ping command will have some trouble | 11:19 |
lucasagomes | yeah it's very tricky | 11:19 |
numans | lucasagomes, yeah even i thought about it :) | 11:19 |
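(The MAC_Binding entries numans mentions can be inspected directly in the southbound db; `MAC_Binding` is the real table name, while the column selection below is just an illustrative example:)

```shell
# Dump the mac/ip bindings that ovn-controller has learnt via ARP
ovn-sbctl list MAC_Binding

# Or show only the interesting columns
ovn-sbctl --columns=logical_port,ip,mac list MAC_Binding
```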
*** janki has quit IRC | 11:20 | |
jamesdenton | russellb - thanks for the help | 11:20 |
lucasagomes | numans, yeah well, fair enough, one step at a time... let's see if removing the pipefail solves the problem | 11:20 |
lucasagomes | numans, I will grab some quick lunch and be right back | 11:20 |
numans | lucasagomes, bon appétit :) | 11:21 |
lucasagomes | cheers :D | 11:21 |
*** lucasagomes is now known as lucas-hungry | 11:21 | |
*** trinaths has left #openstack-neutron-ovn | 11:28 | |
*** yamamoto has joined #openstack-neutron-ovn | 11:45 | |
*** yamamoto has quit IRC | 12:13 | |
numans | lucas-hungry, lets see how this one goes :) - https://review.openstack.org/#/c/500609/ | 12:26 |
*** yamamoto has joined #openstack-neutron-ovn | 12:28 | |
*** lucas-hungry is now known as lucasagomes | 12:34 | |
lucasagomes | numans, cool! I will keep an eye on it | 12:35 |
*** mmichelson has joined #openstack-neutron-ovn | 12:57 | |
*** thegreenhundred has joined #openstack-neutron-ovn | 13:30 | |
*** openstackgerrit has joined #openstack-neutron-ovn | 13:51 | |
openstackgerrit | Merged openstack/networking-ovn master: Delete dummy files https://review.openstack.org/500866 | 13:51 |
*** zefferno has quit IRC | 13:57 | |
numans | russellb, can you please have a look at this patch - https://review.openstack.org/#/c/500367/ | 14:00 |
russellb | done | 14:02 |
numans | russellb, thanks | 14:03 |
numans | russellb, and this one as well - https://review.openstack.org/#/c/500512/ | 14:04 |
*** jchhatbar is now known as janki | 14:11 | |
*** janki has quit IRC | 14:14 | |
openstackgerrit | Merged openstack/networking-ovn master: Small refactor of metadata bits https://review.openstack.org/493038 | 14:14 |
*** yamamoto has quit IRC | 14:17 | |
lucasagomes | numans, the job failed on gate but the error seems to be misconfiguration ? See: http://logs.openstack.org/09/500609/3/check/gate-tripleo-ci-centos-7-scenario007-multinode-oooq/7ed70d9/logs/undercloud/home/jenkins/tempest_output.log.txt.gz#_2017-09-06_13_56_26 | 14:30 |
numans | lucasagomes, yeah i saw that :( | 14:30 |
lucasagomes | :-/ | 14:30 |
dalvarez | russellb, numans i have an issue when starting networking-ovn-metadata-agent in OVN from tripleO. When the agent starts (under neutron user) it tries to connect to OVSDB server through the UNIX socket but it fails because ovsdb-server is started under 'openvswitch' user and it lacks permissions | 14:58 |
dalvarez | russellb, numans the thing is that the same may happen in ML2/OVS, and they set a manager if the connection fails: https://github.com/openstack/neutron/blob/master/neutron/agent/ovsdb/native/connection.py#L35 | 14:59 |
dalvarez | so that they can connect to ovsdb via TCP | 14:59 |
dalvarez | as stated in https://github.com/openstack/ovsdbapp/blob/master/ovsdbapp/schema/open_vswitch/helpers.py#L29 it would be better if tripleO or the deployment tool set this manager | 14:59 |
dalvarez | russellb, numans what do you guys think? should i patch metadata agent to enable it? try in tripleo instead? | 15:00 |
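(Option 1 as the channel later agrees on it — letting the deployment tool open a localhost-only TCP endpoint — boils down to something like the following on the compute node; 6640 is the conventional port, and the agent-side URL is an assumption about how the metadata agent would be pointed at it, not a confirmed config option:)

```shell
# Expose the local ovsdb-server on loopback TCP so processes outside the
# openvswitch user/group (e.g. the neutron metadata agent) can reach it
ovs-vsctl set-manager ptcp:6640:127.0.0.1

# The agent would then connect via tcp:127.0.0.1:6640 instead of the
# default unix:/var/run/openvswitch/db.sock
```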
dalvarez | ajo ^ | 15:00 |
ajo | dalvarez reading | 15:00 |
russellb | or run the agent as openvswitch user | 15:00 |
dalvarez | russellb doesn't it feel weird? | 15:01 |
russellb | if we open tcp, that's fine, but i'd rather do it from tripleo | 15:01 |
ajo | yes we may do it from tripleO | 15:01 |
dalvarez | russellb, yeah i agree although im not quite sure where to do that and how | 15:01 |
dalvarez | i'd go with tripleo approach too | 15:02 |
ajo | numans could may be give you some pointers | 15:02 |
ajo | numans, when's the tripleo/OVN internals deep dive? :) (or did we have something already and I missed it?) | 15:02 |
dalvarez | hah | 15:02 |
jamesdenton | Is it advisable/not-advisable to leave neutron_sync_mode as 'repair' permanently? | 15:03 |
dalvarez | russellb, ajo numans: https://github.com/openstack/puppet-neutron/blob/master/manifests/plugins/ovs/opendaylight.pp#L96 | 15:04 |
dalvarez | they're already doing it in ODL, shall i mimic that or just do it on compute nodes? | 15:05 |
dalvarez | if possible | 15:05 |
russellb | jamesdenton: i wouldn't ... we're working to get rid of the need for it completely ... | 15:05 |
jamesdenton | thanks russellb. Is the goal some sort of self-healing process? | 15:06 |
numans | dalvarez, russellb it would be better to use the local unix socket though right ? | 15:07 |
numans | i understand about the permissions, | 15:07 |
russellb | jamesdenton: more eliminating all known cases where things could get out of sync, let me link to the current proposal | 15:07 |
russellb | numans: yes | 15:07 |
dalvarez | numans, security reasons? | 15:07 |
numans | in my setup, i don't see ovs-vswitchd running as openvswitch user | 15:08 |
numans | it is running as root | 15:08 |
dalvarez | numans, tripleo here: | 15:08 |
dalvarez | [heat-admin@overcloud-novacompute-0 ~]$ ps -ef | grep ovsdb-server | 15:08 |
dalvarez | openvsw+ 7159 1 0 09:17 ? 00:00:03 ovsdb-server /etc/openvswitch/conf.db | 15:08 |
russellb | jamesdenton: https://review.openstack.org/#/c/490834/ | 15:08 |
dalvarez | deployed today running master | 15:08 |
numans | dalvarez, but ovn-controller also connects to unix socket right | 15:08 |
dalvarez | numans, ovn-controller runs as root | 15:08 |
numans | dalvarez, i need to deploy a fresh one i guess. | 15:08 |
numans | dalvarez, ok | 15:08 |
jamesdenton | thanks! | 15:09 |
*** amuller has joined #openstack-neutron-ovn | 15:09 | |
dalvarez | russellb, numans ajo so i guess that unless we run the agent under 'openvswitch' user we need to use TCP | 15:10 |
dalvarez | is it a security concern? it would listen only on localhost | 15:10 |
dalvarez | it's already done by ovs agent in ml2/ovs and ODL sets the manager too | 15:10 |
russellb | seems fine | 15:10 |
ajo | dalvarez I wonder if it could also be done via ACLs or groups | 15:11 |
ajo | via ACLs we could set the domain socket in a context, and the metadata agent into the same context | 15:11 |
ajo | I haven't done that before but I suspect it could be possible | 15:12 |
dalvarez | ajo maybe that works but i'd do that only if we have security concerns | 15:12 |
dalvarez | which i personally can't see now but i'm probably missing something | 15:13 |
ajo | may be there aren't any big ones | 15:13 |
ajo | dalvarez maybe if it's in the openvswitch group it would be able to fiddle with or trash the db files directly if it wanted | 15:13 |
dalvarez | but that won't happen with the TCP connection right? | 15:14 |
dalvarez | so i can see two options now :) | 15:14 |
ajo | it may be hacked from a metadata request somehow, which I highly doubt because the workflow is quite simple | 15:14 |
ajo | not the files | 15:14 |
dalvarez | 1. let tripleo set the manager | 15:14 |
ajo | but I guess you can do many evil things via ovsdb directly :D | 15:14 |
dalvarez | 2. add neutron user to openvswitch group so that it can connect to it | 15:14 |
dalvarez | yeah that's true | 15:15 |
dalvarez | it's kinda exposed since it's accepting requests on a unix socket | 15:15 |
ajo | 3. run metadata agent under openvswitch user ? | 15:15 |
dalvarez | so evil instances could potentially exploit it | 15:15 |
ajo | or do we need the neutron group too ? | 15:15 |
dalvarez | ajo yeah 3 should be fine too but breaks the general philosophy we already have ? | 15:15 |
ajo | rootwrap daemon I guess | 15:15 |
dalvarez | not sure about that one | 15:15 |
ajo | need to AFK for a while, may be I'd go with 1. even if more costly, it's the same thing we do with other things | 15:16 |
ajo | just to be uniform | 15:16 |
*** yamamoto has joined #openstack-neutron-ovn | 15:17 | |
dalvarez | i'd go with 1 as well | 15:18 |
dalvarez | i'll give it a go, will do that the same way as ODL even tho we need it just on compute nodes | 15:18 |
dalvarez | numans, you know of a way to do https://github.com/openstack/puppet-neutron/blob/master/manifests/plugins/ovs/opendaylight.pp#L96 only on compute nodes? | 15:18 |
numans | dalvarez, i will come back on this later :) | 15:19 |
dalvarez | sure! :) | 15:19 |
*** yamamoto has quit IRC | 15:25 | |
otherwiseguy | should be able to add user to the openvswitch group since the socket should be created 0770 according to bind_unix_socket(). | 15:31 |
openstackgerrit | Merged openstack/networking-ovn master: Track router and floatingip quota usage using TrackedResource https://review.openstack.org/500367 | 15:36 |
dalvarez | otherwiseguy++ | 15:44 |
dalvarez | $ stat -c %a /var/run/openvswitch/db.sock | 15:47 |
dalvarez | 750 | 15:47 |
* otherwiseguy sighs | 15:49 | |
dalvarez | :( | 15:50 |
openstackgerrit | Merged openstack/networking-ovn master: Add DNS db mixin in l3 plugin https://review.openstack.org/500512 | 15:54 |
otherwiseguy | dalvarez, with strace I can definitely see it calling fchmod(fd, 0770) then bind(fd, ...), but then the file ends up owned 0750... | 16:06 |
*** mmichelson has quit IRC | 16:07 | |
*** lucasagomes is now known as lucas-afk | 16:07 | |
dalvarez | otherwiseguy, weird! | 16:27 |
dalvarez | otherwiseguy, i think we definitely need to set the manager from tripleo :) i asked beagles to find out how to do it only on compute nodes | 16:28 |
otherwiseguy | dalvarez, it at least seems like a reasonable workaround. i remember tearing my hair out trying to get permissions stuff sorted last time I looked at it. "THIS SHOULD BE WORKING". | 16:30 |
otherwiseguy | Never got it working without changing umask, though that changes the permissions of any files created (log files, etc.) | 16:31 |
dalvarez | yeah :( | 16:31 |
otherwiseguy | dalvarez, https://stackoverflow.com/questions/11781134/change-linux-socket-file-permissions some comment said that fchmod didn't work for them, they had to use chmod. | 16:35 |
dalvarez | otherwiseguy, yeah i see the last comment..."I have tested fchmod() on Linux. None of the combinations (before bind, after listen) worked. In all cases it returned 0 but did not change the file permissions. Only chmod() worked" | 16:47 |
dalvarez | otherwiseguy, shall we file a bug against ovs and use chmod instead (assuming it's using fchmod now) ? | 16:47 |
otherwiseguy | dalvarez, I'm checking to see if that makes any difference right now. :) | 16:48 |
dalvarez | otherwiseguy++ | 16:48 |
* otherwiseguy waits for ovs to build | 16:48 | |
otherwiseguy | the problem with chmod is that if files are moved etc. i think it fails. | 16:49 |
otherwiseguy | as in https://en.wikipedia.org/wiki/Time_of_check_to_time_of_use | 16:53 |
dalvarez | ahh i see | 16:53 |
dalvarez | otherwiseguy, from https://stackoverflow.com/questions/1892501/is-it-better-to-use-fchmod-over-chmod "With chmod you run the risk of someone renaming the file out from under you and chmodding the wrong file. In certain situations (especially if you're root) this can be a huge security hole." | 16:54 |
dalvarez | otherwiseguy, i gotta run now it's been a looooooong first day after PTO but drop a message if you want me to try something tomorrow first thing in the morning and i'll do that! | 16:56 |
dalvarez | thanks for your awesome help :) | 16:56 |
dalvarez | cya tomorrow! | 16:57 |
otherwiseguy | dalvarez, have a nice night | 16:57 |
otherwiseguy | dalvarez, i'm checking to see if it works anyway just out of curiosity. | 16:57 |
*** yamamoto has joined #openstack-neutron-ovn | 17:13 | |
*** yamamoto has quit IRC | 17:17 | |
otherwiseguy | dalvarez, yep, chmod after bind works even if it is a bad idea. :p | 17:39 |
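(A small demonstration of the two mechanisms at play: the mode of a newly created filesystem node is masked by the process umask — unix sockets start from 0777, which is how ovsdb-server's db.sock ends up 0750 under umask 027 — and an explicit chmod afterwards does change it, at the cost of the time-of-check/time-of-use race discussed above. Plain shell can't bind a unix socket, so a regular file, which starts from 0666, stands in here:)

```shell
workdir=$(mktemp -d)

# Creation mode is governed by umask: 0666 & ~027 = 0640 for this file
# (a socket, starting from 0777, would land at 0750 the same way)
(
    umask 027
    : > "$workdir/db.sock"
)
mode_before=$(stat -c %a "$workdir/db.sock")   # 640

# The workaround confirmed above: chmod on the path after creation works,
# unlike fchmod on the fd before bind() in the ovs case
chmod 770 "$workdir/db.sock"
mode_after=$(stat -c %a "$workdir/db.sock")    # 770

echo "$mode_before -> $mode_after"
rm -rf "$workdir"
```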
*** anilvenkata has quit IRC | 18:10 | |
*** lrichard_ has joined #openstack-neutron-ovn | 18:52 | |
*** lrichard_ has quit IRC | 18:53 | |
*** lrichard_ has joined #openstack-neutron-ovn | 18:54 | |
*** lrichard has quit IRC | 18:54 | |
*** pcaruana has quit IRC | 19:34 | |
*** mmichelson has joined #openstack-neutron-ovn | 20:04 | |
*** amuller has quit IRC | 20:13 | |
*** mmichelson has quit IRC | 21:49 | |
*** thegreenhundred has quit IRC | 22:55 | |
*** mmichelson has joined #openstack-neutron-ovn | 23:35 | |
*** mmichelson has quit IRC | 23:40 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!