*** mlavalle has quit IRC | 00:01 | |
*** bvandenh has quit IRC | 00:09 | |
*** reaper has quit IRC | 00:10 | |
*** SumitNaiksatam has quit IRC | 00:17 | |
*** nati_ueno has joined #openstack-neutron | 00:18 | |
*** nati_uen_ has quit IRC | 00:21 | |
openstackgerrit | Dane LeBlanc proposed a change to openstack/neutron: Improve unit test coverage for Cisco plugin model code https://review.openstack.org/58125 | 00:30 |
*** matsuhashi has joined #openstack-neutron | 00:31 | |
*** nati_ueno has quit IRC | 00:33 | |
*** carl_baldwin has quit IRC | 00:35 | |
*** aymenfrikha has left #openstack-neutron | 00:39 | |
*** balar has joined #openstack-neutron | 00:41 | |
*** openstack has joined #openstack-neutron | 00:46 | |
*** salv-orlando has quit IRC | 00:53 | |
openstackgerrit | Salvatore Orlando proposed a change to openstack/neutron: Test commit for testing parallel job in experimental queue https://review.openstack.org/57420 | 00:55 |
*** openstackgerrit has quit IRC | 00:56 | |
*** openstackgerrit has joined #openstack-neutron | 00:56 | |
*** dims has quit IRC | 01:04 | |
*** Abhishek has quit IRC | 01:10 | |
*** unicell has joined #openstack-neutron | 01:12 | |
*** unicell has joined #openstack-neutron | 01:12 | |
*** dims has joined #openstack-neutron | 01:18 | |
*** wcaban has quit IRC | 01:21 | |
*** julim has quit IRC | 01:22 | |
*** nati_ueno has joined #openstack-neutron | 01:26 | |
*** nati_ueno has quit IRC | 01:32 | |
*** nati_ueno has joined #openstack-neutron | 01:32 | |
*** nati_ueno has quit IRC | 01:33 | |
*** nati_ueno has joined #openstack-neutron | 01:33 | |
*** banix has joined #openstack-neutron | 01:43 | |
*** Abhishek has joined #openstack-neutron | 01:51 | |
*** Abhishek has quit IRC | 02:00 | |
*** dzyu has joined #openstack-neutron | 02:00 | |
*** banix has quit IRC | 02:04 | |
*** banix has joined #openstack-neutron | 02:06 | |
*** dzyu has quit IRC | 02:09 | |
*** gdubreui has quit IRC | 02:11 | |
*** Abhishek has joined #openstack-neutron | 02:17 | |
*** Jianyong has joined #openstack-neutron | 02:23 | |
*** dims has quit IRC | 02:23 | |
*** gdubreui has joined #openstack-neutron | 02:31 | |
*** rwsu has quit IRC | 02:35 | |
*** rwsu has joined #openstack-neutron | 02:39 | |
*** marun has joined #openstack-neutron | 02:46 | |
*** Abhishek has quit IRC | 02:52 | |
*** gongysh has joined #openstack-neutron | 02:53 | |
*** yfujioka has joined #openstack-neutron | 03:01 | |
*** enikanorov__ has joined #openstack-neutron | 03:17 | |
*** aveiga has joined #openstack-neutron | 03:17 | |
*** AndreyGrebenniko has quit IRC | 03:17 | |
*** gdubreui has quit IRC | 03:17 | |
*** AndreyGrebenniko has joined #openstack-neutron | 03:19 | |
*** harlowja has quit IRC | 03:19 | |
*** pvo has quit IRC | 03:19 | |
*** enikanorov_ has quit IRC | 03:20 | |
*** gdubreui has joined #openstack-neutron | 03:20 | |
*** pvo has joined #openstack-neutron | 03:22 | |
*** suresh12 has quit IRC | 03:53 | |
*** amotoki has joined #openstack-neutron | 03:55 | |
*** aveiga has quit IRC | 03:55 | |
*** nati_ueno has quit IRC | 04:08 | |
*** netavenger-jr has joined #openstack-neutron | 04:26 | |
*** x86brandon has joined #openstack-neutron | 04:28 | |
*** gongysh has quit IRC | 04:35 | |
*** chandankumar has joined #openstack-neutron | 04:36 | |
*** banix has quit IRC | 04:38 | |
*** jp_at_hp has quit IRC | 04:39 | |
*** banix has joined #openstack-neutron | 04:44 | |
*** suresh12 has joined #openstack-neutron | 05:04 | |
*** Jianyong has quit IRC | 05:08 | |
*** suresh12 has quit IRC | 05:09 | |
*** SumitNaiksatam has joined #openstack-neutron | 05:12 | |
*** SumitNaiksatam_ has joined #openstack-neutron | 05:17 | |
*** amir1 has joined #openstack-neutron | 05:17 | |
*** dcahill1 has joined #openstack-neutron | 05:17 | |
*** aryan_ has joined #openstack-neutron | 05:19 | |
*** gfa_ has joined #openstack-neutron | 05:19 | |
*** marun has quit IRC | 05:22 | |
*** dcahill has quit IRC | 05:22 | |
*** asadoughi has quit IRC | 05:22 | |
*** matrohon has quit IRC | 05:22 | |
*** decede has quit IRC | 05:22 | |
*** SumitNaiksatam has quit IRC | 05:22 | |
*** SumitNaiksatam_ is now known as SumitNaiksatam | 05:22 | |
*** gfa has quit IRC | 05:22 | |
*** aryan has quit IRC | 05:22 | |
*** pvo has quit IRC | 05:22 | |
*** marun has joined #openstack-neutron | 05:22 | |
*** pvo has joined #openstack-neutron | 05:22 | |
*** matrohon has joined #openstack-neutron | 05:23 | |
*** decede has joined #openstack-neutron | 05:23 | |
*** amotoki has quit IRC | 05:30 | |
*** x86brandon has quit IRC | 05:31 | |
*** alex_klimov has joined #openstack-neutron | 05:36 | |
*** yfujioka has quit IRC | 05:48 | |
*** banix has quit IRC | 05:59 | |
openstackgerrit | A change was merged to openstack/neutron: Add vpnaas and debug filters to setup.cfg https://review.openstack.org/59870 | 06:00 |
*** yfried has joined #openstack-neutron | 06:01 | |
*** gongysh has joined #openstack-neutron | 06:05 | |
*** suresh12 has joined #openstack-neutron | 06:05 | |
*** towen27 has joined #openstack-neutron | 06:16 | |
towen27 | I was wondering if anyone here could help me with a multi l3_agent setup. | 06:17 |
*** alex_klimov has quit IRC | 06:18 | |
*** alex_klimov has joined #openstack-neutron | 06:18 | |
*** alex_klimov1 has joined #openstack-neutron | 06:19 | |
towen27 | everyone asleep? | 06:20 |
*** alex_klimov has quit IRC | 06:23 | |
*** towen27 has quit IRC | 06:41 | |
openstackgerrit | Jenkins proposed a change to openstack/neutron: Imported Translations from Transifex https://review.openstack.org/59632 | 06:44 |
*** gdubreui has quit IRC | 06:49 | |
*** amritanshu_RnD has joined #openstack-neutron | 06:49 | |
*** nati_ueno has joined #openstack-neutron | 06:52 | |
*** bashok has joined #openstack-neutron | 06:56 | |
*** marun has quit IRC | 06:57 | |
openstackgerrit | Ann Kamyshnikova proposed a change to openstack/neutron: Fix mistake in usage drop_constraint parameters https://review.openstack.org/59910 | 07:02 |
*** nati_ueno has quit IRC | 07:10 | |
*** towen has joined #openstack-neutron | 07:20 | |
towen | Hello, is anyone up? | 07:20 |
lifeless | towen: hi, try #openstack for support | 07:22 |
towen | Thank you | 07:22 |
*** towen has quit IRC | 07:22 | |
*** jlibosva has joined #openstack-neutron | 07:22 | |
*** rwsu has quit IRC | 07:24 | |
*** yongli has quit IRC | 07:28 | |
*** amotoki has joined #openstack-neutron | 07:33 | |
*** rwsu has joined #openstack-neutron | 07:48 | |
*** suresh12 has quit IRC | 08:04 | |
*** amuller has joined #openstack-neutron | 08:04 | |
*** marun has joined #openstack-neutron | 08:09 | |
*** alagalah has joined #openstack-neutron | 08:17 | |
*** jistr has joined #openstack-neutron | 08:18 | |
*** alagalah has left #openstack-neutron | 08:19 | |
openstackgerrit | Armando Migliaccio proposed a change to openstack/neutron: Handle exceptions on create_dhcp_port https://review.openstack.org/57812 | 08:20 |
openstackgerrit | Roman Podoliaka proposed a change to openstack/neutron: Fix a race condition in agents status update code https://review.openstack.org/58814 | 08:23 |
*** markmcclain has joined #openstack-neutron | 08:33 | |
*** matsuhashi has quit IRC | 08:35 | |
*** ygbo has joined #openstack-neutron | 08:36 | |
*** matsuhashi has joined #openstack-neutron | 08:36 | |
*** netavenger-jr has quit IRC | 08:39 | |
*** networkstatic has joined #openstack-neutron | 08:40 | |
*** fouxm has joined #openstack-neutron | 08:41 | |
*** SumitNaiksatam has quit IRC | 08:42 | |
*** SumitNaiksatam has joined #openstack-neutron | 08:42 | |
*** smcavoy has quit IRC | 08:48 | |
*** zigo has joined #openstack-neutron | 09:05 | |
*** amotoki has quit IRC | 09:08 | |
*** jpich has joined #openstack-neutron | 09:08 | |
openstackgerrit | Evgeny Fedoruk proposed a change to openstack/neutron: Extending quota support for neutron LBaaS entities https://review.openstack.org/58720 | 09:10 |
*** metral_ has joined #openstack-neutron | 09:10 | |
*** gongysh has quit IRC | 09:12 | |
*** zigo_ has quit IRC | 09:12 | |
*** metral has quit IRC | 09:12 | |
*** metral_ is now known as metral | 09:12 | |
*** markmcclain has quit IRC | 09:13 | |
*** suresh12 has joined #openstack-neutron | 09:14 | |
*** yongli has joined #openstack-neutron | 09:17 | |
*** suresh12 has quit IRC | 09:19 | |
marios_ | morning neutron | 09:20 |
*** pbeskow has joined #openstack-neutron | 09:29 | |
*** rossella_s has joined #openstack-neutron | 09:29 | |
pbeskow | Anyone have any resources on how to make neutron-ovs-plugin and docker driver interoperate? | 09:31 |
*** salv-orlando has joined #openstack-neutron | 09:33 | |
*** afazekas has joined #openstack-neutron | 09:36 | |
openstackgerrit | Ann Kamyshnikova proposed a change to openstack/neutron: Sync models with migrations https://review.openstack.org/55411 | 09:38 |
openstackgerrit | A change was merged to openstack/python-neutronclient: Fix i18n messages in neutronclient https://review.openstack.org/57522 | 09:38 |
openstackgerrit | A change was merged to openstack/neutron: Imported Translations from Transifex https://review.openstack.org/59632 | 09:39 |
openstackgerrit | A change was merged to openstack/python-neutronclient: Updates .gitignore https://review.openstack.org/59026 | 09:52 |
marun | salv-orlando: ping | 09:57 |
salv-orlando | hi marun | 09:57 |
marun | hi salvatore | 09:57 |
marun | quite the mess we have on our hands :\ | 09:58 |
marun | Just for kicks I tested with the dhcp notification being sent regardless of agent status. | 09:59 |
salv-orlando | I've seen your emails | 09:59 |
salv-orlando | and I've even read them, which is perhaps more incredible :) | 09:59 |
marun | heh | 09:59 |
marun | Am I making a mountain out of a molehill? | 09:59 |
marun | Things just seem so...broken. | 09:59 |
*** salv-orlando_ has joined #openstack-neutron | 10:01 | |
salv-orlando_ | marun: I'm back what did I lose? | 10:02 |
marun | salv-orlando_: In a test where 75 vms were booted with dhcp notification always being sent, the hosts file ends up with 100 entries, but only 43 VMs actually made it all the way to active. | 10:03 |
salv-orlando_ | yes - all the vms made to active according to nova, but only 43 got an IP thus really becoming active? | 10:03 |
*** salv-orlando has quit IRC | 10:03 | |
*** salv-orlando_ is now known as salv-orlando | 10:03 | |
*** jorisroovers has joined #openstack-neutron | 10:03 | |
marun | salv-orlando: not quite. out of 75 boot attempts, only 43 made it to ACTIVE according to nova. | 10:04 |
marun | salv-orlando: the others ended up in error state due to nova having trouble communicating with neutron. | 10:04 |
salv-orlando | marun: sounds like there is another problem beyond the skipped notification? | 10:05 |
*** steven-weston has joined #openstack-neutron | 10:06 | |
salv-orlando | my thought was the the issue you identified with the dhcp agent being unwisely marked as down did not affect the vm boot workflow? | 10:06 |
marun | salv-orlando: yes. when neutron is overloaded the nova integration fails while trying to boot vm's. | 10:06 |
salv-orlando | we are seeing that in the large_ops job as well. | 10:07 |
marun | salv-orlando: what is the large_ops job exactly? I'm afraid I'm ignorant. | 10:07 |
salv-orlando | Launchpad bug 1250168 | 10:07 |
salv-orlando | https://bugs.launchpad.net/neutron/+bug/1250168 | 10:07 |
salv-orlando | It does something similar to what you're doing (150 vms) but uses a fake virt driver | 10:08 |
marun | fake virt driver, I should enable that! | 10:08 |
marun | *sigh* | 10:08 |
salv-orlando | marun: Our understanding is that the job fails because nova is terribly slow when interacting with neutron, and we should reduce the chatter | 10:09 |
salv-orlando | dims already did a great job in fixing an issue requiring a lot of round-trips to keystone | 10:09 |
salv-orlando | but something in the last week made the issue reappear | 10:09 |
marun | salv-orlando: Is someone actively working on the problem of optimization, then, and I should leave it to them? | 10:09 |
marun | salv-orlando: It sounds like something a profiler would be useful for. | 10:09 |
salv-orlando | however, in your case, you said you had instances going into ERROR, which is not optimisation, but a real bug | 10:10 |
marun | salv-orlando: Find the hotspots and optimize. | 10:10 |
salv-orlando | marun: attaching a profiler is something I have on a post-it on my desk which is not covered in dust and stained with coffee | 10:10 |
salv-orlando | which is *now | 10:10 |
marun | salv-orlando: so do you want me to leave it to you? | 10:10 |
salv-orlando | Nope, because otherwise it will just stay on that postit | 10:11 |
marun | salv-orlando: ok, fair enough | 10:11 |
salv-orlando | marun: but am I right you were saying you saw instance go into ERROR state, which means at some point neutron started throwing 500s | 10:11 |
marun | salv-orlando: the errors I'm seeing in the logs, btw are 'Caught error: Connection to neutron failed: Maximum attempts reached' | 10:11 |
salv-orlando | like bug 1291115 | 10:11 |
salv-orlando | https://bugs.launchpad.net/neutron/+bug/1211915 | 10:12 |
marun | salv-orlando: btw I tried the fix suggested in the QueuePool bug by garyk, to set the pool timeout to something low, and then I started seeing 500s from nova because neutron was throwing queuepool timeout errors. | 10:12 |
marun | salv-orlando: I really don't understand how fiddling with queuepool conf is supposed to work | 10:12 |
marun | salv-orlando: yeah, like that bug | 10:13 |
marun | salv-orlando: except the error was reported by nova api | 10:13 |
salv-orlando | reducing the queue pool timeout will make things better because it allows quicker recycle of db connections into the pool, but does not permanently solve the issue | 10:14 |
marun | salv-orlando: do you have a suggested value? Setting it to '2' made things worse for me. | 10:14 |
salv-orlando | bottom line in my opinion is that if you want to handle X concurrent requests in neutron your pool size should be at least X+1 | 10:14 |
marun | salv-orlando: ah, ok. | 10:14 |
salv-orlando | marun: I'm not talking about the timeout, but the pool size | 10:15 |
marun | salv-orlando: that suggests rate limiting then | 10:15 |
salv-orlando | marun: yeah, that's another post-it on my desk. We had a guy working on it, but then we did not see him anymore. | 10:15 |
salv-orlando | *We I mean neutron, not my people at vmware | 10:15 |
marun | salv-orlando: right | 10:15 |
marun | salv-orlando: so, hypothetically, if we were to rate-limit maybe we could treat a rate limit failure differently than a connection failure and have a longer time between connection attempts in the client? | 10:17 |
marun | salv-orlando: maybe even progressively adjust the time between connection failures? | 10:17 |
salv-orlando | I was going to ask you the same thing. The approach seems reasonable to me. | 10:18 |
salv-orlando | Definitely better than failing a request without retry or doing complex things like queueing requests in the neutron server | 10:18 |
marun | salv-orlando: I'm hoping we can keep the scope on a fix small enough to backport. | 10:19 |
marun | salv-orlando: though I would argue that an ideal solution will involve queueing requests. | 10:19 |
marun | salv-orlando: breaking apart the api from the 'conductor' is probably the way to go, longer term. refusing client requests when we're at capacity does not seem like the best idea. | 10:20 |
marun | salv-orlando: but i digress | 10:20 |
marun | salv-orlando: ok, I guess there are two parallel requirements to fix the scaling issue. rate-limiting (and handling this in the client) and optimizing common paths. | 10:21 |
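The client-side handling marun and salv-orlando sketch above (retry with a progressively growing delay, waiting noticeably longer after a rate-limit refusal than after a plain connection failure) might look roughly like this. Every name here is hypothetical; none of this comes from the real neutronclient code.

```python
import random
import time

class RateLimited(Exception):
    """Server is up but refused the request because it is at capacity."""

class ConnectionFailed(Exception):
    """The request never reached the server."""

def call_with_backoff(request, max_attempts=5, base_delay=0.5):
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return request()
        except (RateLimited, ConnectionFailed) as exc:
            if attempt == max_attempts:
                raise
            # A rate-limited server needs breathing room, so pause
            # longer than for a transient connection failure.
            pause = delay * 4 if isinstance(exc, RateLimited) else delay
        time.sleep(pause + random.uniform(0, delay / 10))
        delay *= 2  # progressively increase the interval between attempts
```

The small random jitter keeps a burst of failed clients from all retrying at the same instant, which would just reproduce the overload.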
*** Sreedhar has joined #openstack-neutron | 10:21 | |
*** matsuhashi has quit IRC | 10:22 | |
*** markmcclain has joined #openstack-neutron | 10:22 | |
marun | salv-orlando: as far as the notification problem, is notifying regardless of agent status an option? | 10:23 |
Sreedhar | Hi All, During the concurrent VM deployment of 30 instances some instances are going into error state due to neutron rpc timeouts and some instances are getting duplicate fixed IPs, once we have around 150 instances already active | 10:26 |
marun | Sreedhar: Yeah, this is a known issue: https://bugs.launchpad.net/bugs/1192381 | 10:28 |
Sreedhar | Have already tuned the sqlalchemy queuepool size and increased the agent_down_time | 10:29 |
Sreedhar | Hi Marun, Thanks, I am following that bug | 10:29 |
Sreedhar | I had the same HW setup and did similar tests in Grizzly. After the tuning of "sqlalchemy queuepool size and increased the agent_down_time", all the instances were active, none went to error state in Grizzly. Also did not have the duplicate fixed IP issues | 10:30 |
salv-orlando | marun: I think it is valuable for fixing the issue in the short term and back porting, as you said. | 10:31 |
salv-orlando | Alternatively the only thing I see is changing the logic for declaring an agent down by allowing for more tolerance for missed notifications | 10:31 |
marun | salv-orlando: maybe both? | 10:31 |
marun | salv-orlando: What do you think of increasing the tolerance for missed notifications and logging when sending to an agent reported as down? | 10:33 |
salv-orlando | marun: I think it's worth doing. | 10:33 |
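The liveness check being discussed reduces to a simple rule: an agent counts as down once its last heartbeat is older than some multiple of the reporting interval, so raising the multiplier tolerates more missed heartbeats from a loaded agent before notifications are skipped. A minimal sketch (constant and function names here are illustrative, not Neutron's actual `report_interval`/`agent_down_time` options):

```python
import datetime

REPORT_INTERVAL = 4   # seconds between agent heartbeats (assumed value)
DOWN_MULTIPLIER = 9   # heartbeats an agent may miss before "down" (assumed)

def is_agent_down(heartbeat_at, now):
    # Compare the age of the last heartbeat against the tolerance window.
    age = (now - heartbeat_at).total_seconds()
    return age > REPORT_INTERVAL * DOWN_MULTIPLIER
```

Under heavy load heartbeats arrive late rather than not at all, which is why a larger window avoids prematurely declaring a busy agent dead.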
marun | salv-orlando: Is there any reason not to send a notification to a down agent? Would the alternative be reporting an exception? | 10:33 |
marun | down agent -> 'agent reported as down' | 10:34 |
salv-orlando | marun: I don't see why it should be bad, but jog0 raised the point in the mailing list, so I'm trying to understand why it would be really bad | 10:34 |
*** markmcclain has quit IRC | 10:35 | |
*** markmcclain has joined #openstack-neutron | 10:35 | |
marun | salv-orlando: do you have a link to jog0's concern? I'm afraid I missed it | 10:35 |
openstackgerrit | Isaku Yamahata proposed a change to openstack/neutron: l3-agent-consolidation(WIP): framework for consolidating l3 agents https://review.openstack.org/57627 | 10:36 |
salv-orlando | in the email thread he said that sending notifications to down agent is, in his opinion, as bad as not sending them | 10:37 |
jog0 | what was my concern? | 10:37 |
*** nati_ueno has joined #openstack-neutron | 10:37 | |
salv-orlando | sorry jog0 I think it was not you :( | 10:37 |
salv-orlando | I'm messing up email threads | 10:37 |
salv-orlando | jog0: I hope I did not wake you up or distract from some other task | 10:38 |
jog0 | salv-orlando: heh, I didn't think I chimed in on that thread. my only concern is icehouse-2 | 10:38 |
jog0 | salv-orlando: heh, no I am UTC+2 this week | 10:38 |
openstackgerrit | Armando Migliaccio proposed a change to openstack/neutron: Handle failures on update_dhcp_port https://review.openstack.org/59664 | 10:38 |
*** nigel_r_davos has joined #openstack-neutron | 10:38 | |
markmcclain | jog0: you already in Israel? | 10:38 |
jog0 | markmcclain: yup | 10:38 |
*** armax has joined #openstack-neutron | 10:38 | |
markmcclain | ah cool… I'll be there Sunday | 10:38 |
jog0 | markmcclain: cool see you sunday | 10:39 |
jog0 | time to go find some lunch | 10:39 |
ygbo | markmcclain: Hi, do you have a second? | 10:40 |
markmcclain | ygbo: sure what's up? | 10:41 |
ygbo | I tried to add some docstring explaining why dnsmasq requires the --addn-hosts parameter: https://review.openstack.org/#/c/52930/ just let me know if anything is unclear. | 10:42 |
marun | armax: ping | 10:43 |
armax | marun: pong | 10:43 |
marun | armax: regarding https://review.openstack.org/#/c/59664, is there a reason that update and create have to separately test all the failure cases? Why not test _port_action directly instead | 10:44 |
marun | ? | 10:44 |
*** matsuhashi has joined #openstack-neutron | 10:45 | |
armax | If you want complete coverage the number of combinations are the same | 10:46 |
*** jistr has quit IRC | 10:46 | |
marun | salv-orlando: uh, not true | 10:46 |
marun | sorry, armax: not true | 10:46 |
armax | all exceptions for each supported action, no? | 10:46 |
marun | armax: no | 10:46 |
armax | k | 10:46 |
marun | that's fallacious | 10:46 |
marun | armax: if it were true, testing would be combinatorial for complete coverage | 10:47 |
salv-orlando | marun: if you want I can slap him in the face for writing untrue statements :) | 10:47 |
armax | indeed it is | 10:47 |
marun | armax: it does not need to be | 10:47 |
marun | armax: and it can't be, if we want our efforts to be useful | 10:47 |
armax | I'm happy to hear how I can improve it | 10:48 |
marun | armax: an alternative is testing the error conditions with just 'create_port'. Then test golden-path (non-error condition) with 'update_port' | 10:48 |
marun | armax: these are whitebox tests - the error is set by mock anyway | 10:48 |
marun | armax: voila, coverage. | 10:49 |
armax | ok | 10:49 |
armax | so are you saying I remove some test methods? sorry I don't follow you | 10:50 |
marun | armax: my suggested strategy, in general, is testing paths at as low a level as possible (i.e. test the error conditions by calling _port_action directly). | 10:50 |
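marun's suggested strategy can be sketched as follows: drive each error condition through the shared low-level helper once, then check only the golden path for each public entry point, instead of re-testing every error for every entry point. The class and method bodies below are made up for illustration; only the `_port_action`/`create_port` naming echoes the review.

```python
from unittest import mock

class FakePlugin:
    def _port_action(self, action, port):
        # Shared helper: both create_port and update_port funnel through
        # here, so error handling only needs to be tested once.
        return action(port)

    def create_port(self, port):
        return self._port_action(lambda p: dict(p, status="ACTIVE"), port)

def port_action_propagates_errors():
    failing = mock.Mock(side_effect=RuntimeError("backend failure"))
    try:
        FakePlugin()._port_action(failing, {"id": "p1"})
    except RuntimeError:
        return True   # the error surfaced from the shared helper
    return False

def create_port_golden_path():
    return FakePlugin().create_port({"id": "p1"})["status"] == "ACTIVE"
```

This keeps the test count linear in (error conditions + entry points) rather than their product, which is the combinatorial explosion marun objects to.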
ygbo | markmcclain: as you can see, dnsmasq does not resolve hosts defined in --dhcp-hostsfile if it did not give a lease for it (it is only a lease mapping and not a list of hosts to resolve). So if you have HA with 2 dnsmasq instances on same subnet (subnet being tunnelled between several network nodes) currently hosts resolve only other hosts on the same network which got their lease from the same dnsmasq instance. | 10:50 |
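The distinction ygbo is describing is visible in the two flags themselves. This fragment is illustrative only, the file paths are assumptions sketching what a DHCP agent might generate, not copied from a real deployment:

```
# --dhcp-hostsfile: MAC/name/IP lease mappings. dnsmasq only answers
# DNS queries for a name in this file if it handed out that lease.
# --addn-hosts: a hosts(5)-style file served over DNS unconditionally,
# whichever dnsmasq instance issued the lease.
dnsmasq --no-hosts \
        --dhcp-hostsfile=/var/lib/neutron/dhcp/<network-id>/host \
        --addn-hosts=/var/lib/neutron/dhcp/<network-id>/addn_hosts
```

With two HA dnsmasq instances on the same tunnelled subnet, only the --addn-hosts file lets each instance resolve names whose leases were issued by the other.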
armax | maybe if you did your review on gerrit I might be able to follow more | 10:50 |
marun | armax: Yes, I'm suggesting removing the new tests that check that update_port handles the error conditions appropriately. | 10:51 |
marun | armax: Ok, I'll add on gerrit. | 10:51 |
armax | tnx | 10:51 |
marun | armax: apologies, I figured a conversation would move things quicker. | 10:51 |
armax | that's okay | 10:51 |
marun | salv-orlando: I think you were talking about Clint Byrum's comments on that mailing list thread. He was concerned that sending notifications blindly would be problematic. | 10:53 |
marun | salv-orlando: I figure logging warnings if agents are not up should alleviate some of that concern, and that the amqp queue will provide some assurance of eventual delivery if an agent is actually down. The alternative would seem much more involved and hard to backport. | 10:54 |
salv-orlando | marun: correct. I don't know how I managed to mix the two of them | 10:54 |
salv-orlando | marun: I agree with you, I though it was worth digging into clint's concerns | 10:54 |
marun | salv-orlando: ok, cool. | 10:55 |
openstackgerrit | Evgeny Fedoruk proposed a change to openstack/python-neutronclient: Extending quota support neutron LBaaS entities https://review.openstack.org/59192 | 10:56 |
pbeskow | am I correct in understanding that if I use the ml2 plugin for a dedicated neutron network node I should be able to use the neutron-linuxbridge plugin on one compute node and the neutron-ovs-plugin on another compute node? | 11:02 |
pbeskow | and then be able to use the docker driver with neutron-linuxbridge to enable network connectivity? | 11:03 |
*** ygbo has quit IRC | 11:04 | |
Sreedhar | Marun: Per this bug https://bugs.launchpad.net/neutron/+bug/1160442, with sqlalchemy queuepool size tuning, did not observe these duplicate fixed IPs in Grizzly, but in Havana even with sqlalchemy queuepool tuning, still see the duplicate fixed IPs. Also per this https://bugs.launchpad.net/bugs/1192381 bug, instances are active but they are not getting IPs, but in my case instances are going to error state due to neutron ti | 11:05 |
*** ygbo has joined #openstack-neutron | 11:07 | |
*** jistr has joined #openstack-neutron | 11:09 | |
*** networkstatic is now known as networkstatic_zZ | 11:09 | |
Sreedhar | marun: In Grizzly, once we have more than 210 instance active during subsequent 30 parallel instance deployment, some instances are not able to get IP address during their first boot. There is a considerable delay (close to 2min) in updating the port status (during security group rule update) due to which instances are not able to get their IP even though the port details are added in hosts file. Till the port status i | 11:11 |
*** bvandenh has joined #openstack-neutron | 11:12 | |
*** jorisroovers has quit IRC | 11:12 | |
salv-orlando | Sreedhar: patch 57420 is rationalizing the ovs agent loop | 11:24 |
salv-orlando | or trying to make it less crappy | 11:24 |
salv-orlando | there is also another race being investigated when a port_create_end arrives before the sync_state routine in the dhcp agent processes the network and adds it to the cache. | 11:25 |
salv-orlando | As a result, the port update is not processed until the next sync_state iteration, and DHCPDISCOVER from vms are not handled by dnsmasq as the entry is not added in the hosts file | 11:25 |
salv-orlando | this might cause the vm to timeout on dhcp requests on boot | 11:26 |
*** pcm_ has joined #openstack-neutron | 11:26 | |
salv-orlando | Sreedhar: ^^ and this is an example of it http://logs.openstack.org/20/57420/35/experimental/check-tempest-dsvm-neutron-isolated-parallel/cdf95ef/logs/ | 11:26 |
*** pcm_ has quit IRC | 11:28 | |
*** pcm_ has joined #openstack-neutron | 11:28 | |
Sreedhar | salv-orlando: Thanks for the info. I see some build failures in patch 57420. Is this complete, can I merge that code? | 11:29 |
salv-orlando | Sreedhar: patch 57420 is a wip - Once the builds become green I will extract several patches out of them and push them for review | 11:31 |
salv-orlando | there is a lot of LOG code in there | 11:31 |
*** jp_at_hp has joined #openstack-neutron | 11:31 | |
Sreedhar | salv-orlando: Thanks. | 11:32 |
*** KA has quit IRC | 11:38 | |
openstackgerrit | Armando Migliaccio proposed a change to openstack/neutron: Handle failures on update_dhcp_port https://review.openstack.org/59664 | 11:39 |
*** armax has quit IRC | 11:40 | |
*** bvandenh has quit IRC | 11:49 | |
*** jorisroovers has joined #openstack-neutron | 12:03 | |
anteaya | mlavalle: awesome job on the API tests gap analysis | 12:08 |
anteaya | I have added a note to the etherpad encouraging anyone to select one item from the list and identify themselves and then create a launchpad bug for the item | 12:09 |
anteaya | the person signing up does not have to file a patch for the bug | 12:09 |
anteaya | once the list is in the bug tracker it is easier to track progress, but Mark didn't want you to have to enter the list into launchpad yourself | 12:10 |
anteaya | https://etherpad.openstack.org/p/icehouse-summit-qa-neutron | 12:10 |
*** bvandenh has joined #openstack-neutron | 12:18 | |
*** dims has joined #openstack-neutron | 12:18 | |
*** salv-orlando_ has joined #openstack-neutron | 12:33 | |
*** enikanorov_ has joined #openstack-neutron | 12:35 | |
*** salv-orlando has quit IRC | 12:36 | |
*** salv-orlando_ is now known as salv-orlando | 12:36 | |
*** enikanorov has quit IRC | 12:38 | |
Sreedhar | salv-orlando: During the concurrent instance creation, getting these errors in the DHCP agent log - TRACE neutron.agent.dhcp_agent Timeout: Timeout while waiting on RPC response - topic: "q-plugin", RPC method: "get_dhcp_port" info: "<unknown>" and Timeout: Timeout while waiting on RPC response - topic: "q-plugin", RPC method: "get_active_networks_info" info: "<unknown>". Any idea why these are coming and any tuning | 12:46 |
marun | Sreedhar: the load is simply too high. did you say you were on grizzly? | 12:49 |
marun | Sreedhar: or can you use havana or trunk? | 12:49 |
Sreedhar | marun: I have ran similar tests on Grizzly but never had issues. I have installed the fresh Havana bits from Ubuntu Cloud and seeing these issues | 12:50 |
marun | There is a patch introduced in icehouse that allows running wsgi workers for the neutron service in a separate process which should allow for more performant rpc handling as a side-effect: https://review.openstack.org/#/c/37131/ | 12:51 |
*** safchain has joined #openstack-neutron | 12:51 | |
Sreedhar | marun: I was using the same HW and same network setup.. Able to deploy 240+ instances with concurrency of 30 in Grizzly without any issue with SQLpool tuning and increasing agent_down_time and report_interval. I have upgraded the setup with Havana (with fresh install) and since then I could not go more than 150 instances. Some of the instances are going into error state and some getting duplicate fixed IPs | 12:51 |
Sreedhar | Marun: I am in the process of implementing those changes - adding more worker threads. But what puzzles me is how come it was working in Grizzly and not in Havana | 12:53 |
marun | Sreedhar: I'm seeing the same results in icehouse. I'm surprised you were able to deploy so many instances in grizzly. I've seen reports that booting lots of instances results in hosts not being configured via dhcp. | 12:53 |
marun | Sreedhar: The duplicate fixed ip and error state may be a new problem, though. | 12:54 |
marun | Sreedhar: What sql pool configuration have you found effective? I posted on the mailing list for assistance with that one and didn't receive a good answer. | 12:54 |
Sreedhar | Marun: I have done the tests more than 10 to 15 times. I was able to get IP address all the times up to 210 instances in Grizzly. Only when crossed 210 instances, 2 or 3 instances gets IP address with a delay of 2-3min. But eventually all instances could get IP address. This behavior was consistent across all times | 12:55 |
beagles | salv-orlando, markmcclain, marun: I've added you guys as reviewers to 59542. In some respects, I consider this an "experimental" patch in that I've not thoroughly examined the implications and code paths, but there were several types of errors that disappeared in my environment after implementing the patch... the rest were NetworkNotFound related with get_dhcp_port() etc, but a fix for some of | 12:56 |
beagles | that was already merged by the time I "got to it" | 12:56 |
marun | Sreedhar: That is extremely surprising, and indicates that your configuration has performance headroom to spare. | 12:56 |
Sreedhar | In Grizzly I set these Quantum values: sqlalchemy_pool_size = 60, sqlalchemy_max_overflow = 120, sqlalchemy_pool_timeout = 2. The same values were set in Havana as well: max_pool_size = 60, max_overflow = 120, pool_timeout = 2 | 12:56 |
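For reference, the Havana-era option names Sreedhar quotes live in the [database] section of neutron.conf, and salv-orlando's sizing rule (at least X+1 pooled connections for X concurrent requests) applies to the pool plus its overflow. The values below are just the ones from this conversation, not a recommendation:

```ini
[database]
# Grizzly spelled these sqlalchemy_pool_size / sqlalchemy_max_overflow /
# sqlalchemy_pool_timeout; Havana renamed them as below.
max_pool_size = 60    # steady-state pooled DB connections
max_overflow = 120    # extra connections allowed under burst load
pool_timeout = 2      # seconds to wait for a free connection
```

Note the trade-off discussed above: a short pool_timeout recycles connections quickly, but under sustained overload it surfaces as QueuePool timeout errors (500s) rather than slow responses.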
*** jprovazn has joined #openstack-neutron | 12:58 | |
marun | Sreedhar: thank you, I will try those settings. | 12:58 |
Sreedhar | Marun: My configuration includes 16 compute nodes (each has 16 cores) - I could even go up to 300 instances with concurrency of 30 parallel instance creations, but once we cross 240, most of the instances won't get IP fast enough (there is a delay of close to 2min) | 12:59 |
*** amuller_ has joined #openstack-neutron | 12:59 | |
marun | beagles: Excuse my ignorance, but is a network only ever configured on a single dhcp agent? | 12:59 |
Sreedhar | Marun: With the performance enhancements in Havana, I was expecting neutron to perform better than Quantum but it's proving otherwise | 12:59 |
marun | Sreedhar: Frankly, I continue to be surprised when people attempt to use the ovs or linuxbridge plugins as more than POC. As we're discovering there are many reasons to choose a better supported solution. | 13:00 |
Sreedhar | Marun: With the above mentioned sql pool configuration, never seen duplicate fixed IP issues in Grizzly | 13:01 |
markmcclain | beagles: looking | 13:01 |
openstackgerrit | Oleg Bondarev proposed a change to openstack/neutron: LBaaS: agent monitoring and instance rescheduling https://review.openstack.org/59743 | 13:01 |
*** amuller has quit IRC | 13:02 | |
*** matsuhashi has quit IRC | 13:03 | |
beagles | marun, mmm... if we are doing distributed dhcp agents then that would be multiple agents per network, wouldn't it? | 13:05 |
*** bvandenh has quit IRC | 13:05 | |
marun | beagles: the reason I ask is that the approach in the patch presumes to know when resyncing is happening based on local state only. | 13:06 |
*** bashok has quit IRC | 13:07 | |
marun | Sreedhar: The likely culprit is a performance regression. You have big enough hardware that you never saw the problem before, but it might have been there for people with slower hosts. Certainly the VMs running the gate jobs and developers like me running in a VM on a laptop seem to replicate the problems with ease. | 13:07 |
beagles | marun, but for the purposes of this patch it is only local state that is relevant... as the purpose is to avoid changes that are in conflict with activities that are "in progress" that affect a particular network config. | 13:08 |
*** yamahata_ has joined #openstack-neutron | 13:09 | |
marun | oh, duh | 13:09 |
marun | beagles: I think I see the problem. | 13:09 |
Sreedhar | marun: how do we proceed further? I feel instances going into error state and getting duplicate fixed IPs are serious problems | 13:10 |
marun | beagles: the utils.synchronized decorator was intended to limit access to the function to a single caller at a time, but the use of spawn_n would fire off greenthreads that could be still running after the function was exited. | 13:11 |
marun | Sreedhar: I agree. I'm working on it. | 13:12 |
Sreedhar | marun: Thanks | 13:12 |
marun | beagles: I'm going to comment on the patch. | 13:13 |
marun | markmcclain: are you seeing the same thing? | 13:13 |
markmcclain | marun: got pulled into another convo, so haven't gotten there yet | 13:14 |
markmcclain | go ahead and add your comment | 13:14 |
marun | markmcclain: Ok, hopefully this will save you the time. | 13:15 |
markmcclain | marun: thanks for beating me to the review | 13:16 |
marun | :) | 13:16 |
beagles | marun, yeah, that's what I was thinkin' | 13:16 |
*** dims has quit IRC | 13:17 | |
*** dims has joined #openstack-neutron | 13:17 | |
beagles | marun, further to that since operations tend to involve external commands they are outside of the eventlet scheduling so it is quite easy to conceive of multiple threads "stacking up" for a given network and causing things to happen in an "insane order" | 13:18 |
beagles | marun, what would be better (maybe) is a queue of updates on network ids so that sync_state operations per-network are always run in order | 13:19 |
beagles | marun, if it were a pre-emptive multi-tasking environment, we'd be driven in that direction anyways because spawning threads per update would lead to scaling issues due to proliferation of threads | 13:20 |
marun | beagles: Is that complexity necessary? The original design used a single lock. | 13:20 |
marun | beagles: remember that we are not using real threads, though. greenthreads are free. | 13:20 |
marun | (basically) | 13:21 |
beagles | marun: relatively yeah... I only bring it up because an idiom that works in one often has applicability to the other | 13:21 |
marun | beagles: I'm going to look at the patch that introduced the greenthreading in the hopes of understanding why the spawn_n was introduced in the first place. | 13:21 |
beagles | marun: yup | 13:21 |
marun | Have you already done the same? If not, maybe do the same in case I miss something | 13:22 |
marun | ? | 13:22 |
beagles | marun: no I didn't | 13:22 |
beagles | marun: although I can see how sync_state calls might timeout or pile up if threads aren't used there | 13:22 |
marun | beagles: right, io blocking | 13:23 |
*** matsuhashi has joined #openstack-neutron | 13:32 | |
markmcclain | marun, salv-orlando: just wanted to make sure you saw this: http://lists.openstack.org/pipermail/openstack-dev/2013-December/021127.html | 13:36 |
markmcclain | looks like we're still #1 | 13:36 |
marun | markmcclain: cripes, that network isolation one is still at #2 :( | 13:37 |
*** markmcclain has quit IRC | 13:38 | |
marun | salv-orlando: is there an exception that i should throw when there are no dhcp agents to notify? | 13:41 |
*** armax has joined #openstack-neutron | 13:47 | |
*** jorisroovers has quit IRC | 13:48 | |
*** jorisroovers has joined #openstack-neutron | 13:48 | |
*** jorisroovers has quit IRC | 13:50 | |
openstackgerrit | Sean M. Collins proposed a change to openstack/neutron: Quality of Service API extension - RPC & Driver support https://review.openstack.org/59970 | 13:50 |
openstackgerrit | Sean M. Collins proposed a change to openstack/neutron: Ml2 QoS API extension support https://review.openstack.org/59971 | 13:50 |
openstackgerrit | Sean M. Collins proposed a change to openstack/neutron: QoS API and DB models https://review.openstack.org/28313 | 13:50 |
*** steven-weston has quit IRC | 13:51 | |
*** jorisroovers has joined #openstack-neutron | 13:52 | |
*** yfried has quit IRC | 13:54 | |
*** safchain has quit IRC | 13:56 | |
*** mengxd has joined #openstack-neutron | 14:00 | |
*** markmcclain has joined #openstack-neutron | 14:02 | |
*** mengxd has quit IRC | 14:02 | |
dkehn | has the ml2 meeting changed | 14:02 |
dkehn | mestery: ^^^^^ | 14:02 |
mestery | dkehn: Yes, it's at 1600UTC | 14:03 |
dkehn | mestery: ok, thx | 14:03 |
mestery | dkehn: And it's on #openstack-meeting-alt, FYI | 14:03 |
*** SushilKM has joined #openstack-neutron | 14:03 | |
dkehn | mestery: yaaaaaa | 14:03 |
pcm_ | mestery: Thanks. i was wondering too. | 14:04 |
*** aymenfrikha has joined #openstack-neutron | 14:04 | |
*** amuller__ has joined #openstack-neutron | 14:04 | |
mestery | No worries, had sent email to openstack-dev, but that list has grown to an almost unimaginable size. | 14:04 |
*** amuller_ has quit IRC | 14:05 | |
*** julim has joined #openstack-neutron | 14:05 | |
*** yamahata_ has quit IRC | 14:07 | |
*** yamahata_ has joined #openstack-neutron | 14:07 | |
dkehn | mestery: true, you might want to update the https://www.google.com/calendar/ical/bj05mroquq28jhud58esggqmh4@group.calendar.google.com/public/basic.ics, just a thought | 14:09 |
mestery | dkehn: The calendar invite isn't updated? I thought somehow that happened automatically, but let me do that myself, thanks! | 14:10 |
*** safchain has joined #openstack-neutron | 14:10 | |
*** heyongli has joined #openstack-neutron | 14:11 | |
dkehn | mestery: how'd you guys fare on the snow? | 14:12 |
mestery | dkehn: About 2-3 inches so far, but mixed with rain, so it's frozen all over. | 14:12 |
mestery | How about you? | 14:12 |
*** markmcclain has quit IRC | 14:12 | |
dkehn | mestery: still snowing here, about 6 inches down here, I hear the mountains got 28 inches, great for the skiing, but cold as hell, which we don't see too often | 14:13 |
*** amuller__ is now known as amuller | 14:13 | |
*** armax has quit IRC | 14:15 | |
*** armax has joined #openstack-neutron | 14:15 | |
anteaya | salv-orlando: this bug had 61 hits in the last 48 hours: https://bugs.launchpad.net/tempest/+bug/1253896 | 14:16 |
anteaya | salv-orlando: any thoughts? | 14:16 |
openstackgerrit | Marios Andreou proposed a change to openstack/neutron: Validate CIDR given as ip-prefix in security-group-rule-create https://review.openstack.org/59212 | 14:16 |
mestery | dkehn: Wow, that sounds cold and snowy :) | 14:18 |
salv-orlando | anteaya: I have to check what merged in the past 48 hours; I would rather avoid reverting if we can find easy fixes | 14:18 |
dkehn | anteaya: responded via HP email about the sprint | 14:19 |
*** bvandenh has joined #openstack-neutron | 14:20 | |
dkehn | mestery: hard to get a snow day when working from home | 14:20 |
anteaya | salv-orlando: sound reasonable to me, thank you | 14:22 |
anteaya | dkehn: great thanks | 14:22 |
*** SushilKM has quit IRC | 14:23 | |
mestery | dkehn: Agree. | 14:23 |
* mestery works from home as well. | 14:24 | |
salv-orlando | anteaya: I think I have skewed the stats | 14:24 |
mestery | dkehn: You planning to come to the Montreal sprint too? | 14:24 |
salv-orlando | on monday I've been launching a lot of parallel jobs which exhibit a failure like bug 1253896 | 14:24 |
salv-orlando | but the stats from the last 24 hours are of "only" 19 hits | 14:24 |
anteaya | hmmmmmm | 14:24 |
dkehn | mestery: yes, trying to plan it | 14:25 |
mestery | dkehn: Cool! | 14:25 |
dkehn | anteaya: is the location confirmed | 14:25 |
anteaya | any suggestions for how we might proceed to get an accurate reflection of what is 1253896? and what might be due to the parallel jobs? | 14:25 |
anteaya | dkehn: the location address I sent in the email is tentatively booked | 14:26 |
anteaya | I am going to pay the deposit when I return home next week | 14:26 |
dkehn | anteaya: ok, so it's not likely to change then, or should one wait until you confirm? | 14:26 |
anteaya | not likely to change | 14:27 |
anteaya | they will contact me if they get an inquiry for the same time | 14:27 |
anteaya | and I will pay the deposit from where I am if that happens | 14:27 |
*** stackKid has joined #openstack-neutron | 14:27 | |
anteaya | for security reasons I would just like to do that from home | 14:28 |
anteaya | I have my airfare booked for it since i am coming from Australia and had to book that | 14:28 |
anteaya | but not my hotel or travel home yet | 14:28 |
anteaya | so that is where I am personally | 14:28 |
*** nigel_r_davos has quit IRC | 14:28 | |
salv-orlando | anteaya: I think it won't be straightforward with the log collection process. On the other hand I might stop randomly running jobs on the gate and use internal infrastructure where possible. | 14:29 |
anteaya | jog0: ping | 14:30 |
anteaya | jog0: any thoughts on salv-orlando's possible direction? | 14:30 |
dkehn | anteaya: ok, thanks for the info, just want to get as close as possible to the venue with hotel | 14:30 |
anteaya | not sure if jog0 is online right now or not | 14:31 |
anteaya | dkehn: understood | 14:31 |
*** ocherka_ has joined #openstack-neutron | 14:31 | |
marun | armax: ping | 14:33 |
armax | marun: pong | 14:33 |
marun | armax: Have you seen beagles' patch? | 14:34 |
armax | not yet, I am bit behind on reviews | 14:34 |
armax | which one is it? | 14:34 |
marun | https://review.openstack.org/#/c/59542/ | 14:35 |
marun | Given your focus on the dhcp issues I think it's important that you look at this. | 14:35 |
armax | it looks like I should look at this one | 14:35 |
armax | thanks for the heads-up | 14:35 |
armax | I'll give it a look | 14:36 |
marun | Did you add the utils.synchronized decorator to sync_state? | 14:36 |
armax | I think I did | 14:36 |
marun | I think there may be a problem with that approach (comment inline documents my concern) | 14:36 |
*** jlibosva1 has joined #openstack-neutron | 14:36 | |
marun | beagles suggested that a queue might be a better solution than his current proposal, maybe you can speak on that issue as well. | 14:37 |
marun | ok, I'm done :) | 14:37 |
armax | I added it because I saw log traces that led me to believe that sometime event handlers were preempted by the periodic sync worker | 14:38 |
marun | armax: your instincts were definitely correct | 14:38 |
*** jlibosva has quit IRC | 14:39 | |
marun | armax: but the use of non-blocking spawn_n() calls to invoke the configuration doesn't allow a function-level lock to work | 14:39 |
armax | that lock is file-based though | 14:39 |
armax | wouldn't that work still? | 14:39 |
marun | armax: I'm afraid not. | 14:39 |
marun | armax: function called -> greenthreads spawned -> function ended and lock released -> (possibly sometime later, greenthreads exit) | 14:40 |
marun | armax: it's the non-blocking nature of spawn_n that is the problem. | 14:40 |
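The race marun describes can be sketched with stdlib threads standing in for eventlet greenthreads. This is an illustrative sketch only: `sync_state` and `configure` here are stand-ins for the agent's functions, not the actual neutron code, and a `threading.Lock` plays the role of the file-based `utils.synchronized` decorator.

```python
import threading

lock = threading.Lock()
events = []
release = threading.Event()

def configure(net):
    # simulates the spawned worker: it is still pending when sync_state returns
    release.wait()
    events.append(("configure", net))

def sync_state():
    # analogue of the @utils.synchronized decorator: the lock guards
    # only this function body, not the work it spawns
    with lock:
        t = threading.Thread(target=configure, args=("net-1",))
        t.start()  # fire-and-forget, like eventlet.spawn_n
        events.append(("sync_state returned, lock released",))
    return t

t = sync_state()
release.set()  # the spawned work runs *after* the lock is already gone
t.join()
assert events[0] == ("sync_state returned, lock released",)
assert events[1] == ("configure", "net-1")
```

The lock is released as soon as the function body exits, while the configuration work it fired off may still be running, so a second caller can enter and race with it.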
*** jlibosva1 has quit IRC | 14:40 | |
armax | my understanding was that the decorator was dropping a cookie (i.e. a file) on the file system | 14:41 |
armax | and the method would yield if the cookie was already there | 14:41 |
marun | armax: that is correct. | 14:41 |
armax | so concurrent calls would still serialize | 14:41 |
*** jlibosva has joined #openstack-neutron | 14:41 | |
armax | I am not overly expert | 14:41 |
marun | armax: concurrent calls to sync_state, yes. | 14:41 |
armax | of that piece of code | 14:41 |
armax | though | 14:41 |
marun | armax: but greenthreads that have been spawned - that call safe_configure - are not guaranteed to have run to completion before the lock is relinquished | 14:42 |
armax | right | 14:42 |
marun | armax: the only way the lock would be effective would be if spawn() was used instead of spawn_n() and wait() was called on every greenthread in the pool. | 14:43 |
*** peristeri has joined #openstack-neutron | 14:43 | |
marun | armax: which, frankly, might be a good idea. | 14:43 |
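marun's spawn()-plus-wait() alternative can be sketched the same way: the lock is not released until every spawned worker has finished. Again a hedged sketch, with stdlib threads in place of an eventlet GreenPool and made-up function names.

```python
import threading

lock = threading.Lock()
results = []

def configure(net):
    # per-network configuration work, done concurrently
    results.append(net)

def sync_state(networks):
    with lock:
        workers = [threading.Thread(target=configure, args=(n,))
                   for n in networks]
        for w in workers:
            w.start()   # like pool.spawn() (not spawn_n)
        for w in workers:
            w.join()    # like calling wait() on every greenthread
    # only here, after all workers completed, is the lock relinquished

sync_state(["net-1", "net-2", "net-3"])
assert sorted(results) == ["net-1", "net-2", "net-3"]
```

Because the function blocks on all of its workers before exiting, the function-level lock now genuinely covers the spawned work.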
armax | so in a nutshell you're saying that the decorator is ineffective | 14:43 |
armax | so far | 14:43 |
marun | armax: As the function is currently written, yes. | 14:44 |
armax | and we need to tweak the body as you suggested | 14:44 |
armax | ? | 14:44 |
* beagles posits that the decorator is still a good thing^TM but it doesn't scope out the way you think | 14:44 | |
marun | armax: If the goal is to only have one copy of sync_state running at a time, then yes, we can tweak the body. | 14:44 |
marun | armax: if the goal is to maximize concurrency of safe_configure, then beagles suggestion of queueing those activities might be a better choice. | 14:45 |
marun | I'm not exactly sure why spawn_n is being used. | 14:45 |
beagles | armax, marun: if the same operations are initiated through other means (other RPCs, etc), there will be other races. These should also be accounted for | 14:46 |
armax | I think it's safer to serialize | 14:46 |
armax | my understanding of the use of spawn was to run the task in 'background' | 14:47 |
beagles | keep in mind that many of these operations end up invoking commands that are of indeterminate duration... serializing everything may be prohibitive from the scalability perspective | 14:48 |
marun | armax: arg, beagles is correct though. enable_dhcp_helper calls configure_dhcp_for_network too, from all the network events | 14:48 |
beagles | considering sync_states are frequent and known and ad-hoc calls are arbitrary in frequency and timing | 14:48 |
armax | my fear is that eventually those end up calling dnsmasq, ovs, and ip commands | 14:49 |
armax | if they end up playing with the same resource | 14:49 |
armax | then there may be troubles | 14:49 |
marun | :\ | 14:49 |
beagles | would it make sense to serialize at those points then? | 14:49 |
marun | armax, beagles: I think the sync point needs to be configuring a given network | 14:50 |
beagles | and allow other concurrency to occur if the resources are not shared/contended for | 14:50 |
armax | frankly I am very bad at visualizing the execution of concurrent code | 14:50 |
armax | in this case my approach would be trial and error | 14:50 |
beagles | armax: anybody who says they are faultless at that is lying :) | 14:50 |
beagles | armax, or deluded | 14:50 |
marun | armax: I don't think it's too complicated. | 14:50 |
armax | beagles: I cannot disagree | 14:50 |
beagles | armax: I've just "stuck my finger in the proverbial socket" quite a lot :) | 14:50 |
marun | armax: the os operations in question - dnsmasq etc - are isolated by network | 14:51 |
jog0 | anteaya: I am about to go offline | 14:51 |
marun | armax: so only a single configuration operation should be done at a time on a given network | 14:51 |
marun | armax: It should be safe to configure multiple networks at a time, though. | 14:51 |
beagles | I think ops vis-a-vis ovs probably won't walk all over each other either since one network's operations are isolated from another's pretty much by definition | 14:52 |
* beagles notices marun's "etc" and facepalms and shuts up | 14:52 | |
armax | marun: true | 14:53 |
armax | but what if you get concurrent events on the same network | 14:53 |
marun | armax: queue | 14:53 |
marun | armax: or ?? | 14:53 |
armax | indeed | 14:53 |
marun | heh | 14:54 |
armax | but a queue is a form of serialization, is it not? | 14:54 |
beagles | queue per network | 14:54 |
armax | correct | 14:54 |
marun | +1 | 14:55 |
armax | bottom line: dhcp agent v2 | 14:55 |
armax | :) | 14:55 |
marun | oy | 14:55 |
marun | so we're stuck for havana? | 14:55 |
marun | or do you think v2 could be backported? | 14:56 |
armax | it depends on the extent of changes required | 14:56 |
marun | the reason I ask is that we're trying to stabilize havana for release as RHOS 4.0 and it's got an awful lot of issues at present. | 14:56 |
beagles | usually for this kind of thing it is best to start really simple and functionally specific | 14:56 |
marun | agreed | 14:57 |
beagles | in this case a map of network.id to updated info and have a set of eventlet threads processing that map | 14:57 |
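The queue-per-network idea beagles and armax converge on might look roughly like this. Everything here is hypothetical (none of these names exist in neutron), with stdlib queues and threads standing in for eventlet: updates to the same network are applied strictly in arrival order, while different networks proceed concurrently.

```python
import queue
import threading

class PerNetworkWorkers:
    """One FIFO queue and one worker per network id."""

    def __init__(self):
        self._queues = {}
        self._lock = threading.Lock()

    def _worker(self, q):
        while True:
            fn = q.get()
            if fn is None:  # shutdown sentinel
                break
            fn()            # run one configuration task
            q.task_done()

    def submit(self, network_id, fn):
        with self._lock:
            if network_id not in self._queues:
                q = queue.Queue()
                threading.Thread(target=self._worker, args=(q,),
                                 daemon=True).start()
                self._queues[network_id] = q
        self._queues[network_id].put(fn)

    def join(self):
        for q in self._queues.values():
            q.join()

applied = []
workers = PerNetworkWorkers()
for i in range(3):
    # three updates for the same network: must apply in order
    workers.submit("net-1", lambda i=i: applied.append(i))
workers.join()
assert applied == [0, 1, 2]
```

The single worker per queue is what gives the in-order guarantee; cross-network concurrency costs nothing extra because each network gets its own worker.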
anteaya | jog0: was just wondering what you thought about salv-orlando's assessment of Gate Bug #1 | 14:57 |
armax | yes, but the bigger picture should still be in sight | 14:57 |
* beagles nods | 14:58 | |
marun | armax: what would the bigger picture look like? | 14:58 |
*** yfried has joined #openstack-neutron | 14:58 | |
armax | you mentioned this queuing system | 14:58 |
beagles | armax, don't get stuck on "queue" | 14:58 |
beagles | armax, think "task list" or "work queue" or "command pattern" | 14:59 |
armax | we also talked about introducing more states to network and subnet resources | 14:59 |
armax | to address some concurrency issues we noticed | 14:59 |
beagles | armax: that's a decent idea too... it is an approach used in nova for instances | 14:59 |
armax | I am all in for a piece-meal approach to improving things | 14:59 |
anteaya | jog0: he feels that the current hits might be due to his inclusion of some patches using parallel testing with a similar fingerprint to bug #1 | 14:59 |
marun | armax: +1 on introducing more states | 15:01 |
beagles | armax, but... it is not a panacea, it just gives you a mechanism of knowing when not to "do something dangerous". It doesn't address the issue of "doing something". The task_states etc. give some sanity to things like get_dhcp_port() after things have been deleted. | 15:01 |
marun | armax: Being able to track success or failure of operations with more granularity is key to improving reliability. | 15:01 |
marun | I don't think that's the immediate solution, though. Limiting concurrency is the obvious stepping stone. | 15:02 |
beagles | I'd rephrase that as "controlling dangerous concurrency" :) | 15:02 |
*** jistr has quit IRC | 15:02 | |
*** networkstatic_zZ has quit IRC | 15:02 | |
beagles | or "maximizing good concurrency" or something | 15:03 |
beagles | and reducing too much coffee intake for beagles | 15:03 |
*** networkstatic has joined #openstack-neutron | 15:03 | |
*** reaper has joined #openstack-neutron | 15:03 | |
marun | beagles: controlling dangerous concurrency => preventing dangerous concurrency, and you've got yourself a deal | 15:03 |
beagles | right on! | 15:04 |
jog0 | anteaya: ahh that is very possible | 15:04 |
*** jistr has joined #openstack-neutron | 15:04 | |
*** jistr is now known as jistr|mtg | 15:04 | |
jog0 | anteaya: thats easy to confirm | 15:04 |
jog0 | anteaya: if we see it in the gate, then he is wrong | 15:04 |
anteaya | jog0: hmmmm | 15:05 |
jog0 | anteaya: which bug just to be clear | 15:05 |
*** sbasam has joined #openstack-neutron | 15:05 | |
anteaya | jog0: do we have a graph for this bug showing up in the gate? | 15:05 |
anteaya | https://bugs.launchpad.net/bugs/1253896 | 15:05 |
anteaya | salv-orlando: ^^ | 15:05 |
jog0 | anteaya: ohh that one, that fails for nova-networking as well | 15:06 |
*** armax_ has joined #openstack-neutron | 15:06 | |
jog0 | although its possible the neutron failures are due to what you just described, checking | 15:06 |
anteaya | thanks | 15:06 |
*** wcaban has joined #openstack-neutron | 15:06 | |
*** thedodd has joined #openstack-neutron | 15:07 | |
marun | armax: dumb question, should it be possible to run more than 1 instance of an agent type (dhcp, l3, metadata) on a given host? | 15:07 |
*** ocherka_ has quit IRC | 15:08 | |
jog0 | http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiU1NIVGltZW91dDogQ29ubmVjdGlvbiB0byB0aGVcIiBBTkQgbWVzc2FnZTpcInZpYSBTU0ggdGltZWQgb3V0LlwiIEFORCBmaWxlbmFtZTpcImNvbnNvbGUuaHRtbFwiIiwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJvZmZzZXQiOjAsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sIm1vZGUiOiJ0ZXJtcyIsImFuYWx5emVfZmllbGQiOiJidWlsZF9uYW1lIiwic3RhbXAiOjEzODYxNjk2NTkxMjh9 | 15:08 |
jog0 | anteaya: the numbers back up that explanation at fist glance | 15:08 |
*** armax has quit IRC | 15:08 | |
*** armax_ is now known as armax | 15:08 | |
anteaya | jog0: so your sense is that salv-orlando's assessment of the situation is accurate? | 15:09 |
jog0 | http://logs.openstack.org/28/51228/11/check/check-tempest-dsvm-neutron/6e8adb4/console.html | 15:09 |
anteaya | jog0: can I get a shorter url for the logstash link? | 15:09 |
armax | marun: cannot speak for l3 and metadata, but I think the dhcp one, configured properly, should be able to | 15:10 |
marun | ah, ok | 15:10 |
jog0 | anteaya: that failure is in a a swift job | 15:10 |
armax | and by configuring properly I mean ensuring that they don't step on each other's toes | 15:11 |
marun | armax: https://review.openstack.org/#/c/58814 is presuming that is not the case | 15:11 |
armax | like host files etc | 15:11 |
anteaya | jog0: looking at the log, what should I be seeing? | 15:11 |
marun | armax: so I'll -1 and ask the question of the submitter | 15:11 |
jog0 | anteaya: not sure .. still digging in logstash | 15:11 |
*** markmcclain has joined #openstack-neutron | 15:11 | |
armax | I quickly looked at that but I needed more time to digest it | 15:11 |
armax | I wasn't convinced by it though | 15:12 |
marun | armax: I'm going to suggest he raise the issue on the mailing list. I don't think that kind of design decision should be made without lots of input. | 15:13 |
armax | seems fair | 15:13 |
openstackgerrit | Brian Haley proposed a change to openstack/neutron: Change l3-agent to periodically check children are running https://review.openstack.org/59997 | 15:13 |
jog0 | ahh here we go | 15:14 |
jog0 | anteaya: https://review.openstack.org/#/c/57420 that is responsible for 26% of the failures | 15:14 |
jog0 | (out of the last 100 neutron failures) | 15:14 |
jog0 | query message:"SSHTimeout: Connection to the" AND message:"via SSH timed out." AND filename:"console.html" AND build_name:*neutron* | 15:14 |
jog0 | the next biggest one is https://review.openstack.org/#/c/57627 | 15:15 |
*** amritanshu_RnD has quit IRC | 15:15 | |
jog0 | anteaya: in short i see no gate failures for neutron for that bug | 15:15 |
*** jecarey has joined #openstack-neutron | 15:15 | |
jog0 | so I think its safe to say salv-orlando is right | 15:15 |
jog0 | I'll remove neutron from that bug | 15:16 |
anteaya | thank you, jog0 | 15:16 |
*** Sreedhar has quit IRC | 15:16 | |
*** edbak has quit IRC | 15:16 | |
*** ywu has quit IRC | 15:16 | |
enikanorov__ | marun: hi | 15:16 |
jog0 | anteaya: thank you, good digging | 15:17 |
marun | enikanorov__: hi! | 15:18 |
marun | enikanorov__: I bet I know what you want to talk about :) | 15:18 |
anteaya | jog0: I'm learning more about logstash everyday | 15:18 |
jog0 | anteaya: its really powerful | 15:19 |
jog0 | and confusing | 15:19 |
*** heyongli has quit IRC | 15:19 | |
anteaya | yes | 15:20 |
enikanorov__ | marun: yep | 15:20 |
enikanorov__ | marun: i'd like to discuss https://review.openstack.org/#/c/58814/ | 15:21 |
marun | enikanorov__: go ahead | 15:21 |
enikanorov__ | i've posted general comment | 15:22 |
enikanorov__ | so the fix is not trying to allow several agents of the same type on 1 host | 15:22 |
*** jlibosva has quit IRC | 15:22 | |
*** matsuhashi has quit IRC | 15:23 | |
marun | enikanorov__: the proposed fix makes it impossible for more than 1 instance of a given agent type to run on a host | 15:23 |
marun | enikanorov__: I'm not against that, but that decision needs to have broad consensus that I don't think is possible on a review. | 15:23 |
enikanorov__ | ah, I misread your comment actually | 15:23 |
anteaya | marun: can I get a status update on: https://bugs.launchpad.net/neutron/+bug/1251448 ? | 15:24 |
enikanorov__ | but isn't that how things have worked up to the moment? | 15:24 |
enikanorov__ | it's just that with a certain deployment sequence it leads to problems | 15:24 |
marun | enikanorov__: It is not hard-coded as the fix proposes. | 15:24 |
marun | enikanorov__: I'm not saying 'this can't be merged'. I'm saying 'ask the community if this is an acceptable restriction' | 15:25 |
enikanorov__ | yeah, sure | 15:25 |
*** clev has joined #openstack-neutron | 15:25 | |
marun | enikanorov__: If it is not, then agents are going to have to have unique ids generated for them instead of using agent type+host | 15:25 |
enikanorov__ | that makes sense | 15:25 |
marun | anteaya: I'm afraid I have nothing to report beyond what is already commented on the bug report. | 15:25 |
marun | anteaya: I've been sidelined by trying to improve the dhcp agent's reliability, which has been contributing to a host of other bugs | 15:26 |
marun | anteaya: if you can find a volunteer to take it over, I think that would be best. | 15:27 |
*** networkstatic has quit IRC | 15:27 | |
*** fouxm_ has joined #openstack-neutron | 15:31 | |
*** wcaban is now known as Mr_W | 15:31 | |
*** fouxm_ has quit IRC | 15:32 | |
*** fouxm_ has joined #openstack-neutron | 15:33 | |
*** fouxm has quit IRC | 15:34 | |
anteaya | marun: ack | 15:34 |
anteaya | okay folks we need someone to take over https://bugs.launchpad.net/neutron/+bug/1251448 | 15:35 |
*** rpodolyaka has joined #openstack-neutron | 15:36 | |
openstackgerrit | stephen-ma proposed a change to openstack/neutron: Delete duplicate internal devices in router namespace https://review.openstack.org/57954 | 15:36 |
markmcclain | 1251448 looks a combo of a tempest and a neutron bug right? | 15:37 |
*** amuller has quit IRC | 15:37 | |
rpodolyaka | Hey all! marun raised an interesting question about allowing of running multiple agents of the same type on one host in this review https://review.openstack.org/#/c/58814/ . I thought, it might be interesting for you, guys | 15:37 |
jog0 | https://jenkins02.openstack.org/job/check-tempest-dsvm-neutron/buildTimeTrend | 15:37 |
jog0 | that job just dumped out a bunch of fails | 15:38 |
marun | rpodolyaka: Please make sure to ask on mailing list too, not everyone will see the question on irc. | 15:38 |
*** SushilKM has joined #openstack-neutron | 15:38 | |
jog0 | https://jenkins01.openstack.org/job/check-tempest-dsvm-neutron/buildTimeTrend | 15:38 |
jog0 | same here | 15:38 |
jog0 | anteaya: ^ | 15:38 |
rpodolyaka | marun: ok, just wanted you to look at my comment first :) | 15:39 |
jog0 | neutron check-tempest-dsvm-neutron just crapped out | 15:39 |
*** safchain has quit IRC | 15:39 | |
anteaya | jog0: :( | 15:39 |
jog0 | hmm its horizon | 15:39 |
anteaya | jog0: what bug is that? | 15:39 |
jog0 | anteaya: none yet | 15:40 |
jog0 | oh wait | 15:40 |
anteaya | or does the bug exist | 15:40 |
* anteaya waits | 15:40 | |
jog0 | its everything | 15:40 |
jog0 | https://jenkins01.openstack.org/job/check-tempest-dsvm-full/buildTimeTrend | 15:40 |
anteaya | everything? | 15:40 |
marun | rpodolyaka: hmmm, good point | 15:41 |
*** stackKid is now known as dingye | 15:41 | |
marun | markmcclain: ping | 15:41 |
*** otherwiseguy has joined #openstack-neutron | 15:41 | |
rpodolyaka | marun: so that's the only reason, why I implemented it like this | 15:42 |
markmcclain | marun: pong | 15:42 |
marun | markmcclain: So, only 1 instance of a given agent type allowed per host? | 15:42 |
dingye | hello, i am a newcomer, i'd like to contribute on QA/tempest for neutron. where to start? could anyone give me some hints? | 15:42 |
markmcclain | marun: yes because most share the same state files | 15:43 |
markmcclain | otherwise they will interfere with each other | 15:43 |
anteaya | jog0: had never seen that view for jenkins before | 15:43 |
anteaya | exciting | 15:43 |
marun | markmcclain: ok, fair enough. For some reason I thought it was possible for more than 1 to exist, maybe because the l3 agent without namespaces being enabled would require an agent per router. | 15:43 |
*** safchain has joined #openstack-neutron | 15:44 | |
anteaya | welcome dingye | 15:44 |
*** safchain has quit IRC | 15:44 | |
anteaya | dingye: take a look at this etherpad: https://etherpad.openstack.org/p/icehouse-summit-qa-neutron | 15:44 |
marun | rpodolyaka: ok, I'll update my review | 15:44 |
rpodolyaka | marun: thanks! | 15:44 |
markmcclain | l3 without namespaces can | 15:45 |
markmcclain | marun: ^ | 15:45 |
jog0 | anteaya: heh, moving the conversation to -qa since its universal | 15:45 |
anteaya | dingye: find this heading: API tests gap analysis (as of 2013-12-1) | 15:45 |
anteaya | you will see some instructions below it, to select one of the api gap items and create a launchpad bug for it | 15:45 |
anteaya | dingye: does that sound like something you are comfortable doing? | 15:45 |
marun | markmcclain: in that case, what do you think of adding a unique index on (agent_type,host)? | 15:46 |
anteaya | jog0: very good, moving to -qa | 15:46 |
marun | markmcclain: this restriction is already encoded in _get_agent_by_type_and_host, which throws an exception if multiple results are returned | 15:47 |
markmcclain | marun: that works for most cases but fails for l3 agents that serve different upstream nets | 15:48 |
*** carl_baldwin has joined #openstack-neutron | 15:49 | |
marun | markmcclain: the only way that could work (as armax suggested), would be to use a different hostname (for same ip) for each l3 agent | 15:49 |
dingye | anteaya: i understand the use cases of that section and i have played with neutron with the ovs plugin. i'd like to start by looking at some test examples. is this the right approach? | 15:49 |
markmcclain | yeah | 15:49 |
rkukura | markmcclain, marun: Should be no (or less) reason to run multiple l3 agents on same node once https://review.openstack.org/#/c/59359/ merges | 15:49 |
marun | markmcclain: https://github.com/openstack/neutron/blob/master/neutron/db/agents_db.py#L131 | 15:49 |
markmcclain | right | 15:50 |
markmcclain | db should enforce :) | 15:50 |
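The unique index marun proposes on (agent_type, host) can be sketched against an in-memory SQLite table. The real model lives in neutron/db/agents_db.py; the DDL below is illustrative only, showing how the database itself rejects a second agent row of the same type on the same host.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE agents (
        id INTEGER PRIMARY KEY,
        agent_type TEXT NOT NULL,
        host TEXT NOT NULL
    )
""")
# the constraint under discussion: at most one agent of a type per host
conn.execute(
    "CREATE UNIQUE INDEX uniq_agents_agent_type_host "
    "ON agents (agent_type, host)")

conn.execute(
    "INSERT INTO agents (agent_type, host) VALUES ('DHCP agent', 'host-a')")
try:
    # a duplicate registration now fails at the DB layer,
    # instead of silently creating a second row
    conn.execute(
        "INSERT INTO agents (agent_type, host) VALUES ('DHCP agent', 'host-a')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
assert duplicate_rejected
```

With the index in place, `_get_agent_by_type_and_host` can never see multiple results, so its multiple-results exception path becomes dead code in practice.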
*** amir1 is now known as asadoughi | 15:50 | |
marun | markmcclain: ok, cool. | 15:50 |
marun | rpodolyaka: question, is there a reason not to retry, say, 5 times instead of 2? And maybe make it configurable? | 15:52 |
*** jistr|mtg has quit IRC | 15:53 | |
rpodolyaka | marun: I think, no. The only case this error can happen is two concurrent transactions trying to insert the same entry and commit. Whatever happens, one will be committed, and the second will fail. And when it's retried, it will UPDATE the existing entry, rather than insert a new one | 15:54 |
marun | rpodolyaka: what makes you think there would ever only be 2 concurrent transactions? | 15:54 |
rpodolyaka | marun: ok. But what changes if we have more than 2? only one succeeds, other ones are retried and issue UPDATEs | 15:55 |
markmcclain | dingye: here's a sample scenario test https://review.openstack.org/#/c/56242/ | 15:55 |
*** SushilKM has quit IRC | 15:55 | |
markmcclain | dingye: and another inflight for API tests | 15:56 |
markmcclain | https://review.openstack.org/#/c/56680/ | 15:56 |
*** jroovers has joined #openstack-neutron | 15:58 | |
rpodolyaka | marun: I mean, if this call is retried, then we already have an (agent_type, host) entry committed. There won't be new INSERTs, only UPDATEs | 15:58 |
rpodolyaka | marun: and we guarantee we retry once | 15:58 |
*** jorisroovers has quit IRC | 15:58 | |
marun | rpodolyaka: I guess it will be fine until it isn't. At some point in the near future any exceptions being logged will result in gate failures. | 15:59 |
dingye | markmcclain: thanks, i will start to learn from these examples. | 15:59 |
*** jistr has joined #openstack-neutron | 15:59 | |
*** jistr is now known as jistr|mtg | 16:00 | |
rpodolyaka | marun: if you prove me wrong, I'll be happy to update the patch :) but I can't see how we could receive the error more than once | 16:02 |
roaet | rkukura: howdy, when was the ml2 meeting rescheduled for (just to make sure)? | 16:02 |
marun | rpodolyaka: concurrency is a wonderful thing | 16:02 |
*** jroovers has quit IRC | 16:02 | |
*** Abhishek_ has joined #openstack-neutron | 16:02 | |
*** dingye has quit IRC | 16:03 | |
rkukura | roaet: Is now on #openstack-meeting-alt | 16:03 |
rpodolyaka | marun: sure it is :) but only one transaction can be committed at a time | 16:03 |
roaet | ah yes, so it is | 16:04 |
marun | rpodolyaka: wow, that's a lot of sleep() calls in that method | 16:04 |
rpodolyaka | marun: so one commits, all others will fail, be rolled back and retried, on retry we'll update the existing entry (multiple times, yes) | 16:04 |
*** Abhishe__ has joined #openstack-neutron | 16:05 | |
*** alagalah has joined #openstack-neutron | 16:05 | |
rpodolyaka | marun: yeah, that must be the reason this error with multiple agent entries started happening more often | 16:05 |
marun | rpodolyaka: yes, I think you're correct | 16:05 |
*** alagalah has left #openstack-neutron | 16:05 | |
*** safchain has joined #openstack-neutron | 16:07 | |
*** networkstatic has joined #openstack-neutron | 16:07 | |
*** Abhishek_ has quit IRC | 16:08 | |
marun | rpodolyaka: I suggest updating the comment (without a NOTE so it is more visible) to better explain that 'DBDuplicateEntry' exceptions can only happen when two or more creates contend, and that subsequent operations will always be updates and more or less guaranteed to succeed. | 16:09 |
*** banix has joined #openstack-neutron | 16:09 | |
rpodolyaka | marun: got it, thanks! | 16:09 |
*** otherwiseguy has quit IRC | 16:11 | |
*** lori has joined #openstack-neutron | 16:11 | |
*** Abhishe__ has quit IRC | 16:11 | |
*** x86brandon has joined #openstack-neutron | 16:13 | |
*** armax has quit IRC | 16:17 | |
anteaya | roaet can you take over this bug? https://bugs.launchpad.net/neutron/+bug/1251448 | 16:17 |
roaet | looking | 16:17 |
anteaya | thanks | 16:17 |
anteaya | this is our worst gate blocking bug right now | 16:18 |
anteaya | and I am trying to find someone to take it over for marun | 16:18 |
anteaya | you can talk to him about it | 16:18 |
anteaya | I just don't want to leave it hanging without someone championing it | 16:18 |
roaet | anteaya: not sure if 'taking over' is a good idea, but I am definitely looking at it with mlavalle already | 16:18 |
anteaya | and I am at the end of my day | 16:19 |
anteaya | awesome | 16:19 |
*** markmcclain has quit IRC | 16:19 | |
roaet | thanks for the directive. it was blocking my changes so I have been trying to fit it into my schedule | 16:19 |
anteaya | can you post status at the end of the day so when I look at it again, I get a sense of where you are? | 16:19 |
anteaya | roaet: awesome | 16:19 |
anteaya | thanks | 16:19 |
anteaya | I am afk for the night | 16:19 |
anteaya | see you tomorrow, I hope | 16:20 |
roaet | ok. will try to remember, nighto | 16:20 |
*** rudrarugge has joined #openstack-neutron | 16:20 | |
openstackgerrit | Sylvain Afchain proposed a change to openstack/neutron: Fix Metering doesn't respect the l3 agent binding https://review.openstack.org/60016 | 16:22 |
*** bashok has joined #openstack-neutron | 16:24 | |
*** alex_klimov1 has quit IRC | 16:25 | |
*** Sreedhar has joined #openstack-neutron | 16:26 | |
sbasam | mark, we spoke at the last summit about being able to support preserving neutron ports on an instance terminate. The bug report is at https://bugs.launchpad.net/neutron/+bug/1161015 | 16:29 |
*** reaper has quit IRC | 16:30 | |
*** SushilKM has joined #openstack-neutron | 16:35 | |
*** sbasam has quit IRC | 16:35 | |
*** rudrarugge has quit IRC | 16:36 | |
*** jgrimm has joined #openstack-neutron | 16:41 | |
*** mlavalle has joined #openstack-neutron | 16:45 | |
mlavalle | amir: ping | 16:45 |
*** SumitNaiksatam has quit IRC | 16:48 | |
*** marun has quit IRC | 16:50 | |
*** armax has joined #openstack-neutron | 16:50 | |
openstackgerrit | Jon Grimm proposed a change to openstack/neutron: Openvswitch update_port should return updated port info https://review.openstack.org/58847 | 16:50 |
*** sbasam has joined #openstack-neutron | 16:51 | |
*** otherwiseguy has joined #openstack-neutron | 16:52 | |
*** jistr|mtg is now known as jistr | 16:54 | |
*** armax has quit IRC | 16:55 | |
*** armax has joined #openstack-neutron | 16:55 | |
*** clev has quit IRC | 16:58 | |
*** SushilKM__ has joined #openstack-neutron | 16:59 | |
*** SushilKM has quit IRC | 17:01 | |
asadoughi | rkukura: no official follow up discussion. i was thinking of adding agenda item for next week ml2 meeting | 17:02 |
rkukura | asadoughi: ok | 17:03 |
mestery | asadoughi: I think ML2 is the right place to discuss that, please add an item. | 17:03 |
asadoughi | mestery: oh, do i just edit https://wiki.openstack.org/wiki/Meetings/ML2 ? | 17:04 |
asadoughi | wasn't sure if there was a more official way of asking for an agenda item | 17:05 |
mestery | asadoughi: Yes, please feel free to. | 17:06 |
mestery | Just add a new section at the top with the date for next week's meeting | 17:07 |
mestery | And go from there. | 17:07 |
asadoughi | mestery: done | 17:07 |
*** yamahata_ has quit IRC | 17:08 | |
mestery | asadoughi: Awesome, thanks! | 17:08 |
*** jpich has quit IRC | 17:10 | |
*** SumitNaiksatam has joined #openstack-neutron | 17:12 | |
openstackgerrit | Roman Podoliaka proposed a change to openstack/neutron: Fix a race condition in agents status update code https://review.openstack.org/58814 | 17:16 |
*** armax has quit IRC | 17:17 | |
pete5 | dear neutronians: I have a nova review for token sharing in nova.network.neutronv2 that helps a lot with instance creation time. Would appreciate some eyes on :-) https://review.openstack.org/#/c/58854/ | 17:18 |
*** pcm_ has quit IRC | 17:19 | |
*** pcm_ has joined #openstack-neutron | 17:19 | |
*** jistr has quit IRC | 17:19 | |
*** ywu has joined #openstack-neutron | 17:23 | |
*** pcm_ has quit IRC | 17:24 | |
*** safchain has quit IRC | 17:25 | |
mestery | pete5: Looking. | 17:26 |
*** armax has joined #openstack-neutron | 17:30 | |
*** mlavalle has quit IRC | 17:32 | |
*** chandankumar has quit IRC | 17:33 | |
*** networkstatic has quit IRC | 17:38 | |
openstackgerrit | Yves-Gwenael Bourhis proposed a change to openstack/neutron: Make dnsmasq aware of all names https://review.openstack.org/52930 | 17:41 |
*** garyk has joined #openstack-neutron | 17:45 | |
garyk | rkukura: ping | 17:45 |
*** fouxm_ has quit IRC | 17:48 | |
*** rossella_s has quit IRC | 17:52 | |
garyk | salv-orlando: ping | 17:53 |
salv-orlando | hi garyk | 17:53 |
garyk | salv-orlando: i am having some connectivity issues - can you look at the private message i sent you | 17:54 |
*** ygbo has quit IRC | 17:59 | |
garyk | rkukura: you around? | 18:06 |
*** yfried has quit IRC | 18:09 | |
*** armax has quit IRC | 18:14 | |
*** yfried has joined #openstack-neutron | 18:20 | |
*** armax has joined #openstack-neutron | 18:23 | |
*** armax has quit IRC | 18:24 | |
*** Abhishek_ has joined #openstack-neutron | 18:34 | |
*** Abhishek_ has quit IRC | 18:36 | |
*** harlowja has joined #openstack-neutron | 18:37 | |
*** Abhishek_ has joined #openstack-neutron | 18:37 | |
openstackgerrit | Sean M. Collins proposed a change to openstack/neutron: Create a new attribute for subnets, to store v6 dhcp options https://review.openstack.org/52983 | 18:41 |
*** jprovazn has quit IRC | 18:42 | |
*** pcm_ has joined #openstack-neutron | 18:43 | |
*** jprovazn has joined #openstack-neutron | 18:44 | |
*** pcm_ has quit IRC | 18:44 | |
*** pcm_ has joined #openstack-neutron | 18:45 | |
*** nati_ueno has quit IRC | 18:45 | |
*** alagalah has joined #openstack-neutron | 18:46 | |
*** alagalah has left #openstack-neutron | 18:46 | |
otherwiseguy | Hmm, so whatever generates the tarballs for releases messes up the whitespace in setup.cfg so it no longer matches what exists in the git tag (in addition to adding the [egg_info] section at the end). | 18:46 |
rkukura | garyk: pong | 18:46 |
otherwiseguy | This is currently screwing up my tools for packaging a backport of a setup.cfg change. :p | 18:47 |
* otherwiseguy waves at garyk | 18:47 | |
garyk | rkukura: sent you a mail :) | 18:47 |
garyk | hey otherwiseguy - hope all is well. | 18:47 |
rkukura | garyk: will reply to that | 18:47 |
garyk | rkukura: thanks | 18:48 |
otherwiseguy | garyk: things are at least better. ;) | 18:49 |
otherwiseguy | I hope all is well with you too! | 18:49 |
*** otherwiseguy has quit IRC | 18:55 | |
sc68cal | One of these days I won't wince when I push something that's a work in progress into gerrit.... | 18:55 |
*** alagalah has joined #openstack-neutron | 19:00 | |
*** pete5 has quit IRC | 19:02 | |
*** pete5 has joined #openstack-neutron | 19:02 | |
*** pete5 has quit IRC | 19:02 | |
*** pete5 has joined #openstack-neutron | 19:02 | |
*** sbasam has quit IRC | 19:06 | |
*** alagalah has left #openstack-neutron | 19:17 | |
*** yfried has quit IRC | 19:18 | |
*** sbasam has joined #openstack-neutron | 19:24 | |
*** yfried has joined #openstack-neutron | 19:27 | |
*** alex_klimov has joined #openstack-neutron | 19:31 | |
*** shashank_ has joined #openstack-neutron | 19:34 | |
*** clev has joined #openstack-neutron | 19:51 | |
*** rudrarugge has joined #openstack-neutron | 19:55 | |
*** alagalah_ has joined #openstack-neutron | 19:55 | |
openstackgerrit | Jon Grimm proposed a change to openstack/neutron: Openvswitch update_port should return updated port info https://review.openstack.org/58847 | 20:04 |
*** alagalah_ has left #openstack-neutron | 20:04 | |
*** armax has joined #openstack-neutron | 20:12 | |
*** Abhishek_ has quit IRC | 20:12 | |
*** armax has quit IRC | 20:15 | |
*** Abhishek_ has joined #openstack-neutron | 20:24 | |
*** Sreedhar has quit IRC | 20:24 | |
*** clev_ has joined #openstack-neutron | 20:32 | |
*** SushilKM__ has quit IRC | 20:34 | |
*** clev has quit IRC | 20:34 | |
carl_baldwin | anteaya: ping | 20:35 |
*** Abhishek_ has quit IRC | 20:40 | |
*** otherwiseguy has joined #openstack-neutron | 20:47 | |
anteaya | hello carl_baldwin | 20:48 |
carl_baldwin | anteaya: Was wondering if you received my email from yesterday about the sprint. | 20:48 |
roaet | anteaya: i have confirmed that I will be able to, and have time to, work on https://bugs.launchpad.net/neutron/+bug/1251448 and will keep you apprised as I go | 20:49 |
anteaya | carl_baldwin: I did | 20:49 |
anteaya | spent the day meeting with markmcclain | 20:49 |
anteaya | looking to send out an email to you and a few others shortly | 20:50 |
anteaya | roaet: dude you rock | 20:50 |
anteaya | roaet: are you coming to the code sprint? | 20:50 |
roaet | what is that? | 20:50 |
* roaet is super disconnected from reality :( | 20:50 | |
carl_baldwin | anteaya: thanks. | 20:51 |
anteaya | roaet: http://lists.openstack.org/pipermail/openstack-dev/2013-November/018907.html | 20:51 |
anteaya | carl_baldwin: thanks for the ping | 20:51 |
roaet | footware heh | 20:53 |
*** networkstatic has joined #openstack-neutron | 20:54 | |
roaet | anteaya: I would like to go, my employer will not fund it. I will need to figure out all the prices to see if I can work it into my budget. | 20:56 |
anteaya | roaet: do your figuring and let me know how it goes | 20:58 |
anteaya | we are not above some public shaming if folks who are willing to do the work to shore up the gaps are not given the support they deserve from their employer | 20:59 |
anteaya | it isn't our first choice, but it is an option | 20:59 |
roaet | YMQ right? | 21:00 |
anteaya | YMQ? | 21:00 |
roaet | the airport | 21:00 |
roaet | in quebec | 21:00 |
anteaya | any airport you like | 21:00 |
roaet | anteaya: any idea if it is weekend/weekday? | 21:00 |
anteaya | http://en.wikipedia.org/wiki/List_of_airports_in_the_Montreal_area | 21:01 |
anteaya | Wednesday, Thursday, Friday | 21:01 |
anteaya | Dec 15, 16 and 17 | 21:01 |
anteaya | YUL | 21:02 |
roaet | wait.. it said 2nd week of january | 21:02 |
anteaya | yes | 21:02 |
anteaya | second full week of January | 21:02 |
roaet | Not Dec 15, 16 and 17? | 21:02 |
roaet | ^^ | 21:02 |
roaet | Jan 15 - 17 | 21:02 |
anteaya | http://lists.openstack.org/pipermail/openstack-dev/2013-November/019973.html | 21:02 |
anteaya | sorry yes | 21:02 |
anteaya | Jan 15, 16, 17 | 21:03 |
roaet | hrm. it isn't bad. | 21:03 |
anteaya | sorry I am tired | 21:03 |
anteaya | good | 21:03 |
anteaya | work your magic and tell me what support you need | 21:03 |
anteaya | I will do my best to provide it | 21:03 |
roaet | do you know where the conference thing is happening? | 21:03 |
roaet | sorry | 21:03 |
roaet | prbably in the list | 21:03 |
* roaet reads | 21:04 | |
anteaya | Salle du Parc, New Residence | 21:04 |
anteaya | McGill University, 3625 Parc Avenue | 21:04 |
anteaya | np | 21:04 |
*** harlowja has quit IRC | 21:09 | |
*** harlowja has joined #openstack-neutron | 21:09 | |
roaet | anteaya: pretty much just need to have my passport eh? Seems ok. | 21:11 |
roaet | I'll talk to my manager and see how it goes :D | 21:11 |
anteaya | cool | 21:15 |
anteaya | thanks | 21:15 |
anteaya | keep me posted | 21:15 |
anteaya | yeah, US citizens to Canada is just a passport | 21:15 |
*** suresh12 has joined #openstack-neutron | 21:15 | |
*** jprovazn has quit IRC | 21:17 | |
*** julim has quit IRC | 21:25 | |
*** Abhishek_ has joined #openstack-neutron | 21:27 | |
*** bashok has quit IRC | 21:28 | |
*** rudrarugge has quit IRC | 21:50 | |
*** Abhishek_ has quit IRC | 21:51 | |
*** Abhishek_ has joined #openstack-neutron | 21:51 | |
*** Abhishe__ has joined #openstack-neutron | 21:54 | |
*** Abhishek_ has quit IRC | 21:55 | |
*** harlowja has quit IRC | 22:02 | |
openstackgerrit | dekehn proposed a change to openstack/neutron: extra_dhcp_opt add checks for empty strings https://review.openstack.org/59858 | 22:08 |
*** dims has quit IRC | 22:10 | |
*** otherwiseguy has quit IRC | 22:11 | |
*** mlavalle has joined #openstack-neutron | 22:12 | |
*** clev_ has quit IRC | 22:13 | |
*** sbasam has quit IRC | 22:15 | |
*** bvandenh has quit IRC | 22:16 | |
*** aymenfrikha has quit IRC | 22:16 | |
*** clev has joined #openstack-neutron | 22:22 | |
*** peristeri has quit IRC | 22:24 | |
*** harlowja has joined #openstack-neutron | 22:27 | |
*** otherwiseguy has joined #openstack-neutron | 22:27 | |
*** sbasam has joined #openstack-neutron | 22:30 | |
*** sbasam has quit IRC | 22:34 | |
*** x86brandon has quit IRC | 22:35 | |
*** clev has quit IRC | 22:42 | |
*** clev has joined #openstack-neutron | 22:43 | |
*** rossella_s has joined #openstack-neutron | 22:49 | |
*** clev has quit IRC | 22:54 | |
*** alex_klimov has quit IRC | 22:55 | |
*** nati_ueno has joined #openstack-neutron | 23:01 | |
*** jecarey_ has joined #openstack-neutron | 23:06 | |
*** jecarey has quit IRC | 23:07 | |
*** networkstatic has quit IRC | 23:11 | |
openstackgerrit | dekehn proposed a change to openstack/neutron: extra_dhcp_opt add checks for empty strings https://review.openstack.org/59858 | 23:11 |
*** Mr_W has quit IRC | 23:13 | |
*** SumitNaiksatam has quit IRC | 23:13 | |
*** SumitNaiksatam has joined #openstack-neutron | 23:14 | |
*** jecarey_ has quit IRC | 23:15 | |
*** SumitNaiksatam has quit IRC | 23:15 | |
*** pcm_ has quit IRC | 23:16 | |
*** gdubreui has joined #openstack-neutron | 23:18 | |
*** rossella_s has quit IRC | 23:29 | |
*** banix has quit IRC | 23:31 | |
*** aymenfrikha has joined #openstack-neutron | 23:41 | |
*** salv-orlando has quit IRC | 23:49 | |
*** yamahata_ has joined #openstack-neutron | 23:50 | |
*** gdubreui has quit IRC | 23:51 | |
*** gdubreui has joined #openstack-neutron | 23:51 | |
*** thedodd has quit IRC | 23:52 | |
*** bgorski has joined #openstack-neutron | 23:57 | |
bgorski | Hi folks | 23:57 |
bgorski | I have a question. Assume I have a router with added gateway. What is the best way to get IP address of this gateway through API? | 23:58 |
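The log ends before bgorski's question gets an answer, so here is a hedged sketch of one common approach: in the Neutron v2 API the gateway address lives on the router's gateway port, so you can list ports filtered by `device_id=<router_id>` and `device_owner=network:router_gateway` and read `fixed_ips`. The response body below is fabricated for illustration (the port and subnet IDs are placeholders), though the field names follow the Neutron v2 port schema.

```python
# Hedged sketch: extracting a router's gateway IP from a Neutron port
# listing, i.e. the parsed body of something like
#   GET /v2.0/ports?device_id=<router_id>&device_owner=network:router_gateway
# The sample response here is fabricated for the example.
import json

sample_response = json.dumps({
    "ports": [{
        "id": "placeholder-port-id",           # hypothetical
        "device_owner": "network:router_gateway",
        "fixed_ips": [{"subnet_id": "placeholder-subnet-id",
                       "ip_address": "172.24.4.226"}],
    }]
})

def gateway_ips(response_body):
    """Return every IP bound to the router's gateway port(s)."""
    ports = json.loads(response_body)["ports"]
    return [ip["ip_address"]
            for port in ports
            if port["device_owner"] == "network:router_gateway"
            for ip in port["fixed_ips"]]

print(gateway_ips(sample_response))  # -> ['172.24.4.226']
```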
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!