*** bana_k has joined #openstack-lbaas | 00:17 | |
*** chlong_ has joined #openstack-lbaas | 00:22 | |
*** chlong_ has quit IRC | 00:22 | |
*** chlong_ has joined #openstack-lbaas | 00:27 | |
*** chlong_ has quit IRC | 00:28 | |
*** AnothrLundquist has quit IRC | 00:32 | |
*** reedip__ has joined #openstack-lbaas | 00:35 | |
*** AnothrLundquist has joined #openstack-lbaas | 00:35 | |
*** AnothrLundquist has quit IRC | 00:38 | |
*** madhu_ak has quit IRC | 00:56 | |
*** bana_k has quit IRC | 00:59 | |
*** yamamot__ has joined #openstack-lbaas | 01:04 | |
*** ducttape_ has joined #openstack-lbaas | 01:04 | |
*** ducttape_ has quit IRC | 01:06 | |
*** kevo has quit IRC | 01:20 | |
*** mixos has joined #openstack-lbaas | 01:20 | |
*** piet has joined #openstack-lbaas | 01:28 | |
*** amotoki has quit IRC | 01:30 | |
*** reedip__ has quit IRC | 01:30 | |
*** mjblack has joined #openstack-lbaas | 01:36 | |
*** yamamot__ has quit IRC | 01:40 | |
*** amotoki has joined #openstack-lbaas | 01:41 | |
*** yamamot__ has joined #openstack-lbaas | 01:41 | |
*** AnothrLundquist has joined #openstack-lbaas | 01:45 | |
*** AnothrLundquist has quit IRC | 01:47 | |
*** amotoki has quit IRC | 01:50 | |
*** piet has quit IRC | 01:54 | |
*** AnothrLundquist has joined #openstack-lbaas | 01:55 | |
*** amotoki has joined #openstack-lbaas | 01:56 | |
*** AnothrLundquist has quit IRC | 01:57 | |
*** fnaval_ has joined #openstack-lbaas | 01:58 | |
*** fnaval has quit IRC | 02:00 | |
*** amotoki has quit IRC | 02:19 | |
*** ducttape_ has joined #openstack-lbaas | 02:25 | |
*** AnothrLundquist has joined #openstack-lbaas | 02:26 | |
*** fnaval_ has quit IRC | 02:27 | |
*** AnothrLundquist has quit IRC | 02:31 | |
*** cody-somerville has joined #openstack-lbaas | 02:31 | |
*** cody-somerville has quit IRC | 02:31 | |
*** cody-somerville has joined #openstack-lbaas | 02:31 | |
*** amotoki has joined #openstack-lbaas | 02:32 | |
*** fnaval has joined #openstack-lbaas | 02:36 | |
*** yamamot__ has quit IRC | 02:38 | |
*** piet has joined #openstack-lbaas | 02:45 | |
*** AnothrLundquist has joined #openstack-lbaas | 03:00 | |
*** AnothrLundquist has quit IRC | 03:01 | |
openstackgerrit | Lingxian Kong proposed openstack/octavia: Add timestamp to octavia resources https://review.openstack.org/309253 | 03:01 |
*** AnothrLundquist has joined #openstack-lbaas | 03:04 | |
*** AnothrLundquist has quit IRC | 03:05 | |
*** piet has quit IRC | 03:07 | |
*** AnothrLundquist has joined #openstack-lbaas | 03:10 | |
*** ducttape_ has quit IRC | 03:14 | |
*** prabampm has joined #openstack-lbaas | 03:18 | |
*** yamamot__ has joined #openstack-lbaas | 03:19 | |
*** ducttape_ has joined #openstack-lbaas | 03:19 | |
*** links has joined #openstack-lbaas | 03:21 | |
*** yamamot__ has quit IRC | 03:22 | |
*** ducttape_ has quit IRC | 03:23 | |
*** yamamot__ has joined #openstack-lbaas | 03:23 | |
*** mjblack has quit IRC | 03:24 | |
*** mjblack has joined #openstack-lbaas | 03:24 | |
*** yamamot__ has quit IRC | 03:25 | |
*** armax has quit IRC | 03:27 | |
*** yamamot__ has joined #openstack-lbaas | 03:27 | |
*** ducttape_ has joined #openstack-lbaas | 03:32 | |
*** ducttape_ has quit IRC | 03:47 | |
*** reedip__ has joined #openstack-lbaas | 03:58 | |
openstackgerrit | Phillip Toohill proposed openstack/octavia: Whitespace bug in sysvinit jinja template https://review.openstack.org/314853 | 03:58 |
*** mjblack has quit IRC | 04:01 | |
*** prabampm has quit IRC | 04:02 | |
*** pcaruana has joined #openstack-lbaas | 04:02 | |
*** cody-somerville has quit IRC | 04:04 | |
*** mjblack has joined #openstack-lbaas | 04:16 | |
*** reedip__ has quit IRC | 04:19 | |
*** reedip__ has joined #openstack-lbaas | 04:31 | |
*** yuanying has quit IRC | 04:44 | |
*** amotoki_ has joined #openstack-lbaas | 04:46 | |
*** amotoki has quit IRC | 04:48 | |
*** rcernin has joined #openstack-lbaas | 04:58 | |
*** jaff_cheng has joined #openstack-lbaas | 05:02 | |
*** fawadkhaliq has joined #openstack-lbaas | 05:05 | |
*** yamamot__ has quit IRC | 05:08 | |
*** yamamot__ has joined #openstack-lbaas | 05:09 | |
*** gomarivera has joined #openstack-lbaas | 05:09 | |
*** yamamot__ has quit IRC | 05:12 | |
*** yamamot__ has joined #openstack-lbaas | 05:13 | |
*** fnaval has quit IRC | 05:16 | |
*** fnaval has joined #openstack-lbaas | 05:19 | |
*** gomarivera has quit IRC | 05:24 | |
*** kobis has joined #openstack-lbaas | 05:28 | |
*** yamamot__ has quit IRC | 05:50 | |
*** yamamot__ has joined #openstack-lbaas | 05:52 | |
*** nmagnezi has joined #openstack-lbaas | 05:52 | |
*** amotoki_ has quit IRC | 05:56 | |
*** AnothrLundquist has quit IRC | 05:59 | |
*** yuanying has joined #openstack-lbaas | 06:01 | |
*** numans has joined #openstack-lbaas | 06:07 | |
*** jschwarz has joined #openstack-lbaas | 06:14 | |
*** piet has joined #openstack-lbaas | 06:16 | |
*** piet has quit IRC | 06:25 | |
*** nmagnezi_ has joined #openstack-lbaas | 06:28 | |
*** nmagnezi_ has quit IRC | 06:35 | |
*** anilvenkata has joined #openstack-lbaas | 06:36 | |
*** woodster_ has quit IRC | 06:38 | |
*** fawadkhaliq has quit IRC | 06:44 | |
*** fawadkhaliq has joined #openstack-lbaas | 06:44 | |
*** mixos has quit IRC | 06:49 | |
*** amotoki has joined #openstack-lbaas | 07:04 | |
*** rcernin has quit IRC | 07:09 | |
*** rcernin has joined #openstack-lbaas | 07:10 | |
*** yuanying has quit IRC | 07:18 | |
*** amotoki has quit IRC | 07:18 | |
*** amotoki has joined #openstack-lbaas | 07:25 | |
*** nmagnezi has quit IRC | 07:30 | |
*** nmagnezi has joined #openstack-lbaas | 07:34 | |
*** chlong has quit IRC | 07:49 | |
*** amotoki has quit IRC | 07:54 | |
*** amotoki has joined #openstack-lbaas | 07:55 | |
*** amotoki has quit IRC | 08:04 | |
*** bana_k has joined #openstack-lbaas | 08:05 | |
*** dmk0202 has joined #openstack-lbaas | 08:08 | |
*** fawadkhaliq has quit IRC | 08:14 | |
*** fawadkhaliq has joined #openstack-lbaas | 08:15 | |
*** amotoki has joined #openstack-lbaas | 08:21 | |
*** amotoki has quit IRC | 08:22 | |
*** fawadkhaliq has quit IRC | 08:25 | |
*** fawadkhaliq has joined #openstack-lbaas | 08:25 | |
*** amotoki has joined #openstack-lbaas | 08:26 | |
openstackgerrit | hang cheng proposed openstack/neutron-lbaas: Wrote correct comment for function _create_health_monitor in: neutron_lbaas/tests/tempest/v2/scenario/base.py https://review.openstack.org/314911 | 08:28 |
openstackgerrit | hang cheng proposed openstack/neutron-lbaas: Correct the comment https://review.openstack.org/314911 | 08:30 |
*** yuanying has joined #openstack-lbaas | 08:34 | |
*** bana_k has quit IRC | 08:37 | |
*** yuanying has quit IRC | 08:37 | |
*** yuanying has joined #openstack-lbaas | 08:38 | |
*** bogdan has joined #openstack-lbaas | 08:41 | |
*** yuanying has quit IRC | 08:41 | |
*** yuanying has joined #openstack-lbaas | 08:43 | |
bogdan | I create an LB and the amphora gets created successfully; in the Octavia database I see the LB is ACTIVE and ONLINE, but the Neutron tables don't seem to be updated - neutron lb show still reports the LB as OFFLINE and in ERROR provisioning_status. How should I fix this? | 08:46 |
*** yuanying has quit IRC | 08:47 | |
*** amotoki has quit IRC | 08:49 | |
*** amotoki has joined #openstack-lbaas | 08:50 | |
*** yuanying has joined #openstack-lbaas | 08:52 | |
*** jschwarz_ has joined #openstack-lbaas | 08:55 | |
*** jschwarz has quit IRC | 08:55 | |
*** yuanying has quit IRC | 08:57 | |
*** tesseract has joined #openstack-lbaas | 09:20 | |
*** fawadkhaliq has quit IRC | 09:32 | |
*** fawadkhaliq has joined #openstack-lbaas | 09:32 | |
*** amotoki has quit IRC | 09:35 | |
*** reedip__ has quit IRC | 09:44 | |
*** jschwarz_ has quit IRC | 09:46 | |
*** yamamot__ has quit IRC | 09:54 | |
*** yamamot__ has joined #openstack-lbaas | 09:55 | |
*** reedip__ has joined #openstack-lbaas | 09:57 | |
*** jaff_cheng has quit IRC | 10:09 | |
*** jaff_cheng has joined #openstack-lbaas | 10:16 | |
*** jaff_cheng has quit IRC | 10:36 | |
bogdan | any octavia experts? | 10:46 |
*** ri0 has joined #openstack-lbaas | 10:50 | |
*** yamamot__ has quit IRC | 10:53 | |
*** yamamot__ has joined #openstack-lbaas | 10:53 | |
bogdan | I think I've configured everything to get the tutorial running - my amphora is up and running, the LB in neutron is ACTIVE and OFFLINE, and the floating IP assigned to the VIP port cannot be pinged... what am I missing, and how do I troubleshoot the VIP port? | 10:55 |
*** ri0 has quit IRC | 10:55 | |
bogdan | no errors in Octavia or Neutron logs | 10:56 |
*** ri0 has joined #openstack-lbaas | 10:58 | |
*** reedip__ has quit IRC | 11:06 | |
*** amotoki has joined #openstack-lbaas | 11:07 | |
*** prabampm has joined #openstack-lbaas | 11:22 | |
*** ducttape_ has joined #openstack-lbaas | 11:37 | |
bogdan | how can I troubleshoot the VIP port? | 11:38 |
*** rtheis has joined #openstack-lbaas | 11:42 | |
*** fawadkhaliq has quit IRC | 11:46 | |
*** fawadkhaliq has joined #openstack-lbaas | 11:47 | |
*** yamamot__ has quit IRC | 11:56 | |
*** yamamot__ has joined #openstack-lbaas | 11:56 | |
*** yamamot__ has quit IRC | 12:01 | |
*** yamamot__ has joined #openstack-lbaas | 12:01 | |
*** yamamot__ has quit IRC | 12:01 | |
*** yamamot__ has joined #openstack-lbaas | 12:02 | |
*** jaff_cheng has joined #openstack-lbaas | 12:22 | |
*** ducttape_ has quit IRC | 12:22 | |
*** openstackgerrit has quit IRC | 12:33 | |
*** openstackgerrit has joined #openstack-lbaas | 12:34 | |
*** yamamot__ has quit IRC | 12:40 | |
*** yamamoto has joined #openstack-lbaas | 12:40 | |
*** bana_k has joined #openstack-lbaas | 12:42 | |
*** links has quit IRC | 12:52 | |
*** bana_k has quit IRC | 13:02 | |
*** amotoki has quit IRC | 13:08 | |
*** matt-borland has joined #openstack-lbaas | 13:10 | |
*** bana_k has joined #openstack-lbaas | 13:16 | |
*** skape has joined #openstack-lbaas | 13:17 | |
skape | Hi guys! It's me again. Now I'm trying to enable the load balancer panel in Horizon. I've set enable_lb to True in local_settings but nothing changes | 13:20 |
*** ri0 has quit IRC | 13:20 | |
skape | I'm using load balancer v2 with haproxy | 13:24 |
*** St3F_A13x has joined #openstack-lbaas | 13:29 | |
skape | if I try to access dashboard/project/loadbalancer, in the Horizon logs I receive the error "The resource could not be found." | 13:30 |
*** Alex_Stef has quit IRC | 13:33 | |
*** Alex_Stef has joined #openstack-lbaas | 13:34 | |
*** St3F_A13x has quit IRC | 13:35 | |
*** chlong has joined #openstack-lbaas | 13:38 | |
*** fawadkhaliq has quit IRC | 13:50 | |
*** fawadkhaliq has joined #openstack-lbaas | 13:50 | |
*** links has joined #openstack-lbaas | 14:01 | |
*** ducttape_ has joined #openstack-lbaas | 14:08 | |
*** fawadkhaliq has quit IRC | 14:10 | |
*** fawadkhaliq has joined #openstack-lbaas | 14:10 | |
*** mixos has joined #openstack-lbaas | 14:15 | |
*** fawadkhaliq has quit IRC | 14:15 | |
*** mixos_ has joined #openstack-lbaas | 14:16 | |
*** mixos has quit IRC | 14:20 | |
*** bana_k has quit IRC | 14:27 | |
*** mixos_ has quit IRC | 14:28 | |
*** prabampm has quit IRC | 14:29 | |
*** jaff_cheng has quit IRC | 14:29 | |
*** amotoki has joined #openstack-lbaas | 14:30 | |
*** bana_k has joined #openstack-lbaas | 14:30 | |
johnsom | skape Did you follow these steps? https://pypi.python.org/pypi/neutron-lbaas-dashboard | 14:34 |
johnsom | bogdan If you have DVR enabled, there is a bug that can cause floating IPs to not work correctly. Does going direct to the VIP work? | 14:35 |
johnsom | The DVR folks said it might be a while before they can fix the floating IP issue | 14:36 |
*** ajmiller has quit IRC | 14:36 | |
skape | johnsom no I didn't, but the guys in the Horizon channel already helped me | 14:38 |
skape | thx | 14:38 |
skape | but this information should be included here http://docs.openstack.org/mitaka/networking-guide/adv-config-lbaas.html | 14:40 |
*** mixos has joined #openstack-lbaas | 14:41 | |
*** ajmiller has joined #openstack-lbaas | 14:42 | |
bogdan | johnsom how can I tell if I am using DVR or not? (sorry new to that one) | 14:42 |
bogdan | I cannot ping the VIP interface via the qdhcp-namespace | 14:43 |
bogdan | when I try to set admin_state_up = True I get the following warning in the Neutron log: WARNING neutron.plugins.ml2.plugin [...] In _notify_port_updated(), no bound segment for port ... on network ... | 14:43 |
johnsom | bogdan Do a neutron router-show <routerID> and see if "distributed" is True | 14:44 |
johnsom | If it is True, you have DVR enabled and may be hitting the DVR bug | 14:45 |
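For reference, a minimal way to run that check (the neutron CLI is assumed; <router-id> is a placeholder):
    # print only the "distributed" field; True means DVR is enabled
    neutron router-show <router-id> -F distributed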
*** TrevorV has joined #openstack-lbaas | 14:45 | |
bogdan | johnsom it is false for all my routers | 14:45 |
johnsom | Ok, then it isn't the DVR bug | 14:46 |
bogdan | I believe it is the ml2 thing but I cannot find any explanation about it... | 14:46 |
johnsom | skape Yes, we have had trouble getting stuff into that guide. We have some open bugs to move the docs into our tree. Feel free to update or work on one of those: https://bugs.launchpad.net/octavia/+bugs?field.tag=docs | 14:47 |
*** ducttape_ has quit IRC | 14:48 | |
*** woodster_ has joined #openstack-lbaas | 14:50 | |
bogdan | johnsom, I saw a bug about this via Google but it seems long fixed https://bugs.launchpad.net/neutron/+bug/1227091 | 14:51 |
openstack | Launchpad bug 1227091 in neutron "ml2 fails to bind lbaas VIP" [Critical,Fix released] - Assigned to yong sheng gong (gongysh) | 14:51 |
bogdan | fixed long ago | 14:52 |
johnsom | Yeah, there are no current bugs with ML2 and LBaaSv2 that I know of | 14:53 |
*** links has quit IRC | 14:54 | |
bogdan | johnsom, any idea where to debug? maybe my environment is strange and invalid parameters are passed to that step of notify_port_update? I mean which piece of code in Octavia should I look at? | 14:55 |
johnsom | Well, if you think it is an Octavia issue, you want to look in the o-cw (devstack) log | 14:56 |
johnsom | It would contain any error that it might have had working with neutron to plug ports. | 14:57 |
johnsom | Is this a devstack install? | 14:57 |
*** anilvenkata has quit IRC | 14:57 | |
bogdan | johnsom, it is not devstack | 14:59 |
bogdan | I see no errors in the octavia logs | 15:00 |
bogdan | btw, another strange issue (maybe related): | 15:00 |
bogdan | curl -s -H "X-Auth-Token: $TOK" http://localhost:9696/v2.0/lbaas/loadbalancers/2c9a9124-87db-47e9-8d55-944a87440386/statuses | python -mjson.tool | 15:00 |
bogdan | says : | 15:00 |
bogdan | "operating_status": "DEGRADED", for the LB, while all internals (listener, pool, members) are ONLINE/ACTIVE | 15:01 |
bogdan | I could not find any explanation of the DEGRADED value across the net | 15:02 |
bogdan | did not look at the code yet | 15:02 |
bogdan | what is this DEGRADED state and how can I reset it? | 15:02 |
johnsom | DEGRADED means that one or more of the members are not reachable | 15:06 |
bogdan | how come they are not reachable but have status ONLINE/ACTIVE - see full status at http://paste.openstack.org/show/496753/ | 15:08 |
bogdan | it is true that at some point in time I had one more pool whose members were OFFLINE, but then I deleted that pool, could that be a reason? | 15:08 |
bogdan | is there a way to reset this degraded state? | 15:08 |
bogdan | btw, what is this "segment" in the ml2 context? I do not see any documentation about it anywhere | 15:10 |
*** fnaval has quit IRC | 15:13 | |
*** diogogmt has joined #openstack-lbaas | 15:14 | |
*** numans has quit IRC | 15:15 | |
*** yamamoto has quit IRC | 15:17 | |
*** yamamoto has joined #openstack-lbaas | 15:18 | |
*** fawadkhaliq has joined #openstack-lbaas | 15:22 | |
*** bana_k has quit IRC | 15:23 | |
openstackgerrit | Nir Magnezi proposed openstack/neutron-lbaas: (WIP) Auto reschedule loadbalancers from dead agents https://review.openstack.org/299998 | 15:25 |
*** fnaval has joined #openstack-lbaas | 15:27 | |
*** skape has left #openstack-lbaas | 15:30 | |
*** armax has joined #openstack-lbaas | 15:32 | |
*** madhu_ak has joined #openstack-lbaas | 15:33 | |
johnsom | bogdan I'm not sure I can answer the details of a segment in ml2. I have not needed to get into much detail on the ML2 side yet. | 15:33 |
*** diogogmt has quit IRC | 15:33 | |
johnsom | bogdan DEGRADED should automatically go back into ONLINE once the members are restored. | 15:33 |
johnsom | Are you checking in the Octavia database or the neutron database? There may be some issues with syncing the neutron database depending on how you deployed | 15:34 |
bogdan | I am using the lbaas v2 API curl -s -H "X-Auth-Token: $TOK" http://localhost:9696/v2.0/lbaas/loadbalancers/2c9a9124-87db-47e9-8d55-944a87440386/statuses | python -mjson.tool | 15:38 |
johnsom | Ok, so via LBaaS API | 15:40 |
*** armax has quit IRC | 15:40 | |
bogdan | btw, there seems to be some difference between the databases http://paste.openstack.org/show/496764/ | 15:40 |
bogdan | octavia db says ONLINE, neutron db says OFFLINE, lbaas api says DEGRADED :) wtf | 15:41 |
bogdan | how should I get the neutron and octavia dbs in sync? | 15:41 |
bogdan | from what you say about degraded it seems to be a bug at least in the API because it reports the degraded state but all of the members are ONLINE/ACTIVE, right? | 15:42 |
*** pck has quit IRC | 15:42 | |
*** pck has joined #openstack-lbaas | 15:44 | |
*** pck is now known as pckizer | 15:45 | |
bogdan | johnsom, it seems that after some hacking via the horizon interface I got this port detached, how can I manually attach it back to the loadbalancer? what should I fill in for device Owner, device ID, Binding Host? | 15:46 |
*** rcernin has quit IRC | 15:47 | |
johnsom | bogdan Yeah, you might have a DB sync issue. There is the "event streamer" that might help you here. It's an Octavia configuration setting, http://docs.openstack.org/developer/octavia/config-reference/octavia-config-table.html event_streamer_driver | 15:48 |
johnsom | There is an open bug about the databases getting out of sync. | 15:48 |
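For reference, a sketch of the setting bogdan enables later; the [health_manager] section name and the no-op default are assumptions here, so check the config reference linked above:
    # octavia.conf (section name assumed)
    [health_manager]
    # queue_event_streamer pushes status updates back toward the neutron-lbaas tables
    event_streamer_driver = queue_event_streamer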
johnsom | bogdan As for the port settings, I'm not sure the right answer for all of that, we let Octavia handle that for us | 15:49 |
johnsom | It varies based on the port and topology you selected | 15:49 |
*** bana_k has joined #openstack-lbaas | 15:49 | |
*** armax has joined #openstack-lbaas | 15:53 | |
*** armax has quit IRC | 15:57 | |
*** diogogmt has joined #openstack-lbaas | 15:59 | |
*** fawadkhaliq has quit IRC | 16:02 | |
bogdan | johnsom, could you paste me some "neutron port-show" for a valid VIP port? I would like to see what is missing in my port | 16:03 |
*** dmk0202 has quit IRC | 16:07 | |
*** nmagnezi has quit IRC | 16:07 | |
*** mjblack has quit IRC | 16:08 | |
*** armax has joined #openstack-lbaas | 16:21 | |
*** mixos has quit IRC | 16:22 | |
*** crc32 has joined #openstack-lbaas | 16:23 | |
*** ducttape_ has joined #openstack-lbaas | 16:34 | |
*** bogdan has quit IRC | 16:36 | |
*** bana_k has quit IRC | 16:40 | |
*** mjblack has joined #openstack-lbaas | 16:44 | |
*** AnothrLundquist has joined #openstack-lbaas | 16:46 | |
*** Lundquist_ has joined #openstack-lbaas | 16:49 | |
*** AnothrLundquist has quit IRC | 16:51 | |
*** amotoki has quit IRC | 16:52 | |
*** fawadkhaliq has joined #openstack-lbaas | 16:53 | |
*** bana_k has joined #openstack-lbaas | 16:56 | |
*** crc32 has quit IRC | 17:02 | |
*** BjoernT has joined #openstack-lbaas | 17:10 | |
*** fawadkhaliq has quit IRC | 17:12 | |
*** anilvenkata has joined #openstack-lbaas | 17:15 | |
*** bogdan has joined #openstack-lbaas | 17:19 | |
*** Lundquist_ has quit IRC | 17:19 | |
ptoohill | gates being funky again, or is this just me >< | 17:25 |
*** kevo has joined #openstack-lbaas | 17:27 | |
*** Lundquist_ has joined #openstack-lbaas | 17:29 | |
*** openstackgerrit has quit IRC | 17:33 | |
*** openstackgerrit has joined #openstack-lbaas | 17:34 | |
*** matt-borland has quit IRC | 17:42 | |
*** Lundquist_ has quit IRC | 17:54 | |
*** anilvenkata has quit IRC | 17:56 | |
*** kobis has quit IRC | 18:01 | |
bogdan | johnsom, I have enabled event_streamer_driver = queue_event_streamer, restarted the octavia services, and recreated the LB, but I still end up with non-synchronized DBs http://paste.openstack.org/show/496790/, am I missing something? | 18:02 |
*** numans has joined #openstack-lbaas | 18:03 | |
bogdan | could the non-synchronized state of the Neutron lbaas tables lead to the VIP port state DOWN? | 18:16 |
*** Purandar has joined #openstack-lbaas | 18:18 | |
*** mixos has joined #openstack-lbaas | 18:21 | |
bogdan | these are the details of the ports of the LB, do you see issues there? http://paste.openstack.org/show/496795/ | 18:24 |
*** evgenyf has joined #openstack-lbaas | 18:27 | |
*** krotscheck_ has joined #openstack-lbaas | 18:30 | |
bogdan | strangely enough the LB is in state DEGRADED even before I create any listeners, pools, or members... http://paste.openstack.org/show/496796/ | 18:31 |
*** krotscheck has quit IRC | 18:31 | |
*** krotscheck_ is now known as krotscheck | 18:33 | |
johnsom | bogdan The sync should not impact the VIP functioning. If the LB thinks the members are all down, when you curl the VIP you would get an HTTP 503 error. | 18:41 |
bogdan | my VIP port is down, so no curl to it is possible | 18:44 |
bogdan | I have no listeners/pools/members, so I'm wondering what the logic behind this DEGRADED is | 18:45 |
bogdan | this should be somewhere in Neutron right? | 18:45 |
johnsom | DEGRADED would make sense if there is no pool and members. | 18:48 |
bogdan | ah, ok | 18:48 |
johnsom | bogdan also note, there are two ports for a VIP in octavia. | 18:48 |
johnsom | One is an allowed address pairs port | 18:49 |
bogdan | do you know at what step the VIP port is completely configured? could it be that the port only gets enabled once the state is out of DEGRADED? | 18:49 |
johnsom | You should be able to curl the VIP port and get back a 503 with just the LB and listener configured. | 18:49 |
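For reference, a quick check along those lines, using the VIP address bogdan lists later (192.168.1.60) as an example:
    # from a host or namespace that can reach the VIP subnet; with an LB and a
    # listener on port 80 but no pool members, haproxy should answer with a 503
    curl -v http://192.168.1.60:80/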
bogdan | ok let me configure the listener now | 18:50 |
*** tesseract has quit IRC | 18:51 | |
bogdan | is there a diagram or explanation about all the different ports created by octavia per LB? | 18:55 |
johnsom | Hmm, actually no. We should add that to the docs bugs. It's basically a service VM with a management port, then it creates a VIP which is a main port and an allowed address pairs port, then as necessary it will create plug ports for the member networks. | 18:56 |
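For reference, a rough way to see those ports with the standard CLIs (IDs are placeholders):
    # the VIP port is recorded on the load balancer itself
    neutron lbaas-loadbalancer-show <lb-id>      # note vip_address and vip_port_id
    neutron port-show <vip_port_id>
    # ports attached to the amphora VM: the management port plus the port that
    # carries the VIP as an allowed address pair
    nova interface-list <amphora-vm-id>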
johnsom | bogdan Are you running this in devstack or something else? The network plugging is pretty stable in our use. The only known issue, which isn't LBaaS/Octavia related, is that neutron floating IPs don't work when DVR is enabled if the port has allowed address pairs enabled | 18:57 |
bogdan | I am using plain openstack, no devstack | 18:59 |
*** numans has quit IRC | 18:59 | |
johnsom | Ah, ok, so maybe there was a setup step missed. Check out the script we use to configure devstack: https://github.com/openstack/octavia/blob/master/devstack/plugin.sh | 18:59 |
johnsom | It has everything we do to configure Octavia | 19:00 |
bogdan | from your list I can recognize - I have the management port 10.1.0.14, the VIP 192.168.1.60 and I guess the plug port for the member network 192.168.1.61 (listed here http://paste.openstack.org/show/496795/), so I am missing the allowed address pairs port | 19:01 |
*** AnothrLundquist has joined #openstack-lbaas | 19:02 | |
bogdan | in which log should I see details about it being created, or which lines of code in octavia should I debug to see why I do not get it? | 19:02 |
sbalukoff | Meeting today? | 19:03 |
sbalukoff | Or is that at 1:00pm? | 19:03 |
sbalukoff | Er... an hour from now? | 19:03 |
bogdan | I was following plugin.sh all the way; the only thing I did not do is the steps in "create_mgmt_network_interface" because I did not know how to translate the ovs commands to linuxbridge ones | 19:05 |
bogdan | could that be the issue? I thought it was not the problem, as the Octavia db says ACTIVE/ONLINE for the LB - only Neutron says ERROR/DEGRADED, etc. | 19:06 |
*** AnothrLundquist has quit IRC | 19:12 | |
*** AnothrLundquist has joined #openstack-lbaas | 19:17 | |
bogdan | johnsom, you can see what happens right after the amphora is up http://paste.openstack.org/show/496805/ - from what I see all the VIPs and ports should be created, but you can see the VIP port is down ... I am wondering how to troubleshoot | 19:28 |
bogdan | do you see anything not right in the log? | 19:28 |
xgerman | log looks ok to me | 19:30 |
xgerman | Neutron goes to Octavia to check - in Octavia I would look in the members table to see which one of the members is down. Also a common problem is users not specifying health checks | 19:31 |
bogdan | aaah, I for sure did not configure health checks | 19:34 |
bogdan | but how about the VIP port, why would it be in such an ugly state, status = DOWN, binding:vif_type = unbound, etc. | 19:36 |
bogdan | I have no members yet; johnsom said I should be able to curl the VIP port but it is DOWN, and I do not want to proceed with pool, member, and health check creation before I get this VIP port up and running | 19:37 |
*** dmk0202 has joined #openstack-lbaas | 19:38 | |
xgerman | yeah, it will return some error page without members, so that will work | 19:46 |
xgerman | the VIP port being weird - I have no idea, but maybe blogan can help | 19:46 |
*** AnothrLundquist has quit IRC | 19:50 | |
johnsom | bogdan We would not plug another port for the members if it is the same network as your VIP. 192.168.1.60 and 192.168.1.61 are your two VIP ports, one will be the allowed address pairs port. The other being down is ok and should not impact your VIP functionality | 19:55 |
johnsom | No matter what the port says, you can't curl your vip? | 19:55 |
bogdan | nope, I was just reviewing the security group rules to see if they block port 80, but they seem ok | 19:56 |
bogdan | and curl to VIP port fails with no route to host | 19:56 |
johnsom | Ah, ok, so your host can't get to the VIP network | 19:56 |
bogdan | I am trying to ping the VIP via the qdhcp-namespace of the subnet where these IPs live | 19:56 |
johnsom | sbalukoff Yes, I plan to have a short meeting today | 19:57 |
bogdan | when using the namespace I can ping the router port in the network, e.g. 192.168.1.2 ...but not the .60 or .61 IPs... | 19:57 |
*** mixos has quit IRC | 19:57 | |
johnsom | Ping is blocked by the security group I think | 19:58 |
*** minwang2 has joined #openstack-lbaas | 19:58 | |
johnsom | Only 80 should be open | 19:58 |
bogdan | I added ICMP rules to all SGs | 19:58 |
bharathm | bogdan: do you have an allow-ICMP rule for the port? | 19:58 |
bogdan | yes | 19:58 |
bharathm | As johnsom said, one of those is the VIP and the other is the address-pair port.. From a Neutron perspective, it's ok for the VIP port to be DOWN and unbound, FYI | 19:59 |
johnsom | Octavia meeting starting soon on #openstack-meeting-alt | 19:59 |
bogdan | wow! that is a relief ...I was afraid I had a severe problem with Neutron | 19:59 |
bogdan | ok then I am missing something else | 20:00 |
bharathm | bogdan: see if you can reach any other VMs with a port plugged from this subnet | 20:00 |
bharathm | from qdhcp-* ns | 20:00 |
bogdan | so which of the two ports 60 or 61 is supposed to be my VIP that I can curl? | 20:00 |
bharathm | The one from neutron lbaas-loadbalancer-list is the VIP port IP address | 20:01 |
rm_work | whatever you put as the listener port when you created the LB | 20:01 |
bharathm | The one from the nova list output is the one that owns the address pair.. | 20:01 |
rm_work | or, maybe i am missing context | 20:01 |
bharathm | rm_work: he's referring to neutron ports | 20:01 |
rm_work | ah | 20:01 |
bogdan | I can ping 192.168.1.1 and 2 which are the router and dhcp ports in that subnet | 20:03 |
bogdan | but cannot ping 60 or 61 | 20:03 |
bogdan | ping to 60 or 61 says Destination Host Unreachable | 20:03 |
bharathm | bogdan: that means you received no response from that interface.. So either the VM is down OR the interface on the VM is corrupted, since you have already added the SG rules.. | 20:04 |
bharathm | If you can, ssh into the amphora and do "$ip a" | 20:04 |
bogdan | yes, I can ssh to the amphora via the amphora net (in my case 10.1.0.14 is the IP of the amphora) here is what ip a says: http://paste.openstack.org/show/496813/ | 20:09 |
bogdan | I only see the amphora network interface there... I did not know I should have more interfaces there.... who controls them? when and how are these created? cloud-init? | 20:10 |
bharathm | bogdan: Sorry.. the interfaces are moved into the namespace.. Try "$sudo ip netns exec amphora-haproxy ip a" | 20:15 |
bogdan | inside the amphora vm? | 20:16 |
bogdan | here it is http://paste.openstack.org/show/496815/ | 20:17 |
bogdan | just saw eth1 is for some reason not created - see updated http://paste.openstack.org/show/496815/ | 20:23 |
*** AnothrLundquist has joined #openstack-lbaas | 20:26 | |
*** mjblack has quit IRC | 20:27 | |
*** kobis has joined #openstack-lbaas | 20:29 | |
bharathm | bogdan: as you can see, eth1 is down and has no IP address.. A functioning amphora should have eth1 and eth1:0 with the VM port and VIP port IP addresses respectively | 20:31 |
bogdan | I got that, but now I'm wondering what the cause is; I built the amphora image using the tutorial: ./diskimage-create.sh -o /home/I030730/openstack/octavia/amphora-ubuntu-x64-haproxy -s 2 -t qcow2 | 20:33 |
bogdan | what am I missing? | 20:33 |
bharathm | the plug-vip task may have failed.. check the amphora agent log | 20:35 |
bogdan | but the very first lines of cloud init say that eth1 is not up...shouldn't it be up already there? (see last lines in the paste above) | 20:36 |
bogdan | quick question - what is haproxy_template in [haproxy_amphora] in octavia.conf? mine is commented out | 20:37 |
*** mjblack has joined #openstack-lbaas | 20:37 | |
rm_work | bogdan: the jinja template used to generate the haproxy config | 20:37 |
rm_work | you can customize it and use one for your own deployment if you have specific requirements about ram usage or core affinity or whatever | 20:38 |
rm_work | by default it uses the simple one we ship | 20:38 |
rm_work | i forget the path | 20:38 |
bogdan | ok then this is not the problem | 20:39 |
bogdan | where is the amphora log? just /var/log/syslog or some dedicated file? | 20:40 |
rm_work | you mean on the actual amphora itself? | 20:41 |
bogdan | yes | 20:41 |
rm_work | I actually ... am not sure, lol | 20:41 |
bogdan | in syslog I see IPv6: ADDRCONF(NETDEV_UP): eth1: link is not ready | 20:41 |
bogdan | how are eth1 and eth1:0 supposed to be configured on the amphora? who does it, and when? | 20:42 |
rm_work | johnsom might know, he was working on pulling the amp logs during our gate tests so i'm guessing he at least knows where the files are :P | 20:42 |
rm_work | bogdan: the REST Agent does it | 20:42 |
rm_work | well, some of it | 20:42 |
rm_work | some of it is nova doing it on boot | 20:42 |
*** pcaruana has quit IRC | 20:42 | |
johnsom | bogdan Yes, eth1 and eth1:0 are configured. They are part of the amphora-haproxy namespace though | 20:43 |
johnsom | They are for the VIP | 20:43 |
*** rcernin has joined #openstack-lbaas | 20:44 | |
bogdan | ok, but in my amphora they fail to get configured | 20:44 |
bogdan | I also see dhclient: Error getting hardware address for "eth1": No such device in the amphora's /var/log/syslog... | 20:44 |
bogdan | and tons of failures in the sense of: | 20:45 |
bogdan | May 11 18:04:37 amphora-c8697146-42dc-446b-8f55-2f2188eaad88 os-collect-config: 2016-05-11 18:04:37.609 1155 WARNING os_collect_config.local [-] /var/lib/os-collect-config/local-data not found. Skipping May 11 18:04:37 amphora-c8697146-42dc-446b-8f55-2f2188eaad88 os-collect-config: 2016-05-11 18:04:37.613 1155 WARNING os_collect_config.local [-] No local metadata found (['/var/lib/os-collect-config/local-data']) May 11 18:04: | 20:45 |
bogdan | one generic question - could the amphora vm creation process be impacted by the fact that I am behind a proxy and there is no proxy configured inside the amphora? | 20:46 |
bogdan | johnsom, where are eth1 and eth1:0 configured? the OS for some reason cannot use them | 20:47 |
johnsom | Well, note, they are in a network namespace, so you have to interact with them differently than with a normal interface | 20:47 |
johnsom | ip netns exec amphora-haproxy ifconfig | 20:48 |
bogdan | yes, bharathm already explained this | 20:48 |
bogdan | see above | 20:48 |
*** mixos has joined #openstack-lbaas | 20:49 | |
bogdan | and ip netns exec amphora-haproxy ifconfig says: http://paste.openstack.org/show/496815/ | 20:49 |
bogdan | so the interfaces are for some reason not configured at a very basic level - the very start of cloud init reports eth1 down - look at http://paste.openstack.org/show/496818/ | 20:50 |
johnsom | Yeah, cloud init will not see the eth1 in the namespace, only the eth1 outside the namespace | 20:51 |
bogdan | hm, how to troubleshoot then? :) | 20:52 |
bogdan | who configures those interfaces in the namespace, and when? is this part of the image? | 20:52 |
*** minwang2 has quit IRC | 20:53 | |
bogdan | johnsom, here is what I have as config for eth1 on the vm http://paste.openstack.org/show/496822/ | 20:54 |
*** mjblack has quit IRC | 21:01 | |
johnsom | bogdan That is the management interface outside of the namespace. What is in /etc/netns/amphora-haproxy/network/interfaces.d | 21:02 |
*** evgenyf has quit IRC | 21:02 | |
johnsom | ? | 21:02 |
johnsom | note, there will be multiple eth1's, some inside namespace, some outside | 21:03 |
johnsom | Assuming you are running Master code and not liberty/mitaka | 21:03 |
johnsom | bogdan the amphora agent inside the amphora creates those interfaces when the VIP is plugged into the amphora. | 21:04 |
*** mjblack has joined #openstack-lbaas | 21:06 | |
bogdan | I am running openstack Liberty | 21:09 |
*** rtheis has quit IRC | 21:09 | |
bogdan | and did git clone https://github.com/openstack/octavia.git | 21:09 |
bogdan | so octavia is from master | 21:10 |
bogdan | the rest is from liberty | 21:10 |
johnsom | Ahhh, ok, way different | 21:11 |
johnsom | There are no namespaces in the liberty version | 21:12 |
johnsom | Oh, sorry, mis-read | 21:12 |
bogdan | I see an interesting file where I guess everything about namespaces and interfaces happens, but I cannot tell how it was executed: /etc/init/haproxy-8d30f0a1-3901-4e97-8cae-b70324ae3586.conf | 21:15 |
bogdan | I see /var/log/upstart/haproxy-8d30f0a1-3901-4e97-8cae-b70324ae3586.log which only contains: | 21:16 |
bogdan | two lines: | 21:16 |
bogdan | Cannot not create namespace file "/var/run/netns/amphora-haproxy": File exists Wed May 11 18:52:02 UTC 2016 Starting HAProxy | 21:16 |
bogdan | according to this conf file interfaces should be created but they are somehow not configured | 21:17 |
johnsom | That is fine, that code only runs if the amp is rebooted | 21:18 |
kong | ooh, again I missed the irc meeting... | 21:19 |
johnsom | bogdan The code that does the plug is here: https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/agent/api_server/plug.py#L45 | 21:19 |
kong | could anyone take a look at https://review.openstack.org/#/c/314410/? I need more feedback to keep working on it | 21:20 |
kong | it's pretty simple, it just needs consensus | 21:20 |
bogdan | johnsom, what is the relation between plug.py and the conf file on the amphora? | 21:22 |
johnsom | Which conf? | 21:22 |
bogdan | /etc/init/haproxy-8d30f0a1-3901-4e97-8cae-b70324ae3586.conf on the amphora vm | 21:23 |
*** TrevorV has quit IRC | 21:23 | |
johnsom | Ah, another part of the amphora-agent code generates that haproxy conf | 21:23 |
johnsom | So, virtually no relation | 21:24 |
bogdan | so, the plug.py is executed inside the amphora vm? how can I debug it - when does this piece of code run? | 21:25 |
bogdan | I see 3 instances of /usr/bin/python /usr/local/bin/amphora-agent --config-file /etc/octavia/amphora-agent.conf already running in the amphora vm, where are they logging to? | 21:29 |
johnsom | It logs to /var/log/upstart/amphora-agent | 21:33 |
bogdan | nothing indicates that the agent failed to create those interfaces http://paste.openstack.org/show/496827/ | 21:35 |
bogdan | so no errors but also no interfaces configured :) how do I troubleshoot this? | 21:36 |
*** crc32 has joined #openstack-lbaas | 21:43 | |
bogdan | johnsom, when is the plug_vip code executed, from the amphora lifecycle point of view? | 22:07 |
johnsom | load balancer create | 22:07 |
bogdan | how can I debug that? | 22:08 |
bogdan | I tried to shut down the amphora-agent manually but could not, as it kept respawning, then decided to reboot the VM, but then the health manager decided to revert it and deleted the whole VM :) | 22:09 |
johnsom | Yeah, these are cattle, the controller will nuke it and start up a replacement (which has a bug I'm working on right now with the namespaces). | 22:11 |
johnsom | If you want to monkey with the amphora-agent (I don't recommend it; also, if there are no logs for the agent, it succeeded at its role) | 22:12 |
johnsom | you have to stop the health manager | 22:12 |
bogdan | but if it succeeded, then why are these interfaces down on the amphora vm? | 22:13 |
*** mixos has quit IRC | 22:17 | |
*** ducttape_ has quit IRC | 22:24 | |
*** yamamoto has quit IRC | 22:28 | |
*** yamamoto has joined #openstack-lbaas | 22:28 | |
bogdan | johnsom, I manually set up the eth1 interface in the amphora-haproxy namespace and I could finally ping it from the octavia host, but the question remains why the setup during plug is not happening | 22:46 |
bogdan | how is plug_vip executed - is it initiated remotely from the octavia server? or is it run automatically by the agent on first run or something? | 22:47 |
bogdan | I want to check what properties are used by the plug_vip code | 22:47 |
*** dmk0202 has quit IRC | 22:47 | |
bogdan | e.g. where is it getting the IPs from | 22:48 |
johnsom | You set up the files under /etc/netns/amphora-haproxy/network/interfaces.d? | 23:00 |
johnsom | Those are what plug.py sets up for the VIP interfaces | 23:01 |
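For reference, a rough sketch of what such a generated file can look like, reusing the addresses from bogdan's paste (.61 as the port address on eth1, .60 as the VIP alias on eth1:0); the exact contents come from plug.py's template and the /24 netmask is an assumption:
    # /etc/netns/amphora-haproxy/network/interfaces.d/eth1.cfg (illustrative only)
    auto eth1 eth1:0
    iface eth1 inet static
        address 192.168.1.61
        netmask 255.255.255.0
    iface eth1:0 inet static
        address 192.168.1.60
        netmask 255.255.255.0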
bogdan | I did not touch those, I just did: ip netns exec amphora-haproxy ifconfig eth1 192.168.1.63 | 23:06 |
bogdan | and that was all - I could ping the interface | 23:06 |
bogdan | so, it seems the plug_vip did not execute something properly | 23:07 |
bogdan | now I added a breakpoint at line 220 of /usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py and I saw that the arguments are sent ok | 23:07 |
johnsom | Very unusual | 23:07 |
*** yuanying has joined #openstack-lbaas | 23:10 | |
*** zigo has quit IRC | 23:14 | |
*** zigo has joined #openstack-lbaas | 23:15 | |
bogdan | johnsom, i think I have found something - shouldn't this folder contain interface configs? /etc/netns/amphora-haproxy/network/interfaces.d | 23:23 |
johnsom | Yes, as the VIP is plugged, plug.py will create that folder and the required contents | 23:23 |
bogdan | I do not have the file there but I have it in /etc/network/interfaces.d/eth1.cfg | 23:24 |
johnsom | That file is for the management interface, the eth1 outside the namespace | 23:24 |
bogdan | and that is because in /etc/octavia/amphora-agent.conf [amphora_agent] I have agent_server_network_dir = /etc/network/interfaces.d/ | 23:24 |
johnsom | Yep | 23:24 |
bogdan | nope, my management interface is eth0 | 23:25 |
bogdan | from what I got this should be my VIP http://paste.openstack.org/show/496830/ | 23:26 |
johnsom | Ah, right. Then that file should not be there if it's the right version of Octavia. can you go to /opt/amphora-agent and do a "git log" | 23:26 |
johnsom | ? | 23:26 |
bogdan | but looking at the code, eth1 is expected to be in a different location | 23:27 |
bogdan | here is git log http://paste.openstack.org/show/496831/ | 23:28 |
johnsom | That should be right | 23:29 |
bogdan | just wondering why the code assumes /etc/netns/... when util.get_network_interface_file is using CONF settings | 23:29 |
bogdan | should I just change my agent_server_network_dir to be /etc/netns/amphora-haproxy/network/interfaces.d, would that fix it? | 23:30 |
johnsom | the /etc/netns path is an overlay directory. To applications running in the namespace, or that are namespace aware, it looks like /etc/network | 23:31 |
bogdan | ok, but in order to make plug_vip work properly I should change my agent_server_network_dir to be /etc/netns/amphora-haproxy/network/interfaces.d, right? | 23:33 |
johnsom | No, that would break the overlay | 23:33 |
*** diogogmt_ has joined #openstack-lbaas | 23:34 | |
bogdan | ok then I am lost ;) | 23:34 |
*** diogogmt_ has quit IRC | 23:34 | |
bogdan | how is the eth1.cfg file supposed to get to the /etc/netns/ location? | 23:35 |
johnsom | To code running in the namespace /etc/netns/* does not exist, as linux maps /etc/netns/* onto /etc/network | 23:35 |
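For reference, a small illustration of that mapping (assuming the file has been written under /etc/netns):
    # outside the namespace the file lives under the overlay path
    cat /etc/netns/amphora-haproxy/network/interfaces.d/eth1.cfg
    # ip netns exec bind-mounts /etc/netns/amphora-haproxy/* over /etc/*, so inside
    # the namespace the same file appears at the normal path
    sudo ip netns exec amphora-haproxy cat /etc/network/interfaces.d/eth1.cfg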
bogdan | I got that but did not quite get where eth1.cfg file should be | 23:36 |
*** diogogmt has quit IRC | 23:37 | |
*** cody-somerville has joined #openstack-lbaas | 23:42 | |
*** ajo has quit IRC | 23:45 | |
bogdan | johnsom, if eth1.cfg is intended for programs that run in the amphora-haproxy namespace, then shouldn't this file be copied to /etc/netns/amphora-haproxy/network/interfaces.d so that they would see it as /etc/network/interfaces.d/eth1.cfg? | 23:47 |
johnsom | This code creates the eth1.cfg in /etc/netns/... https://github.com/openstack/octavia/blob/master/octavia/amphorae/backends/agent/api_server/plug.py#L85 | 23:48 |
*** ajo has joined #openstack-lbaas | 23:49 | |
*** BjoernT has quit IRC | 23:50 | |
bogdan | well, if I read the code correctly this line uses the interface_file_path variable, which is taken from util.get_network_interface_file, which uses CONF.amphora_agent.agent_server_network_file or CONF.amphora_agent.agent_server_network_dir, right? | 23:54 |
johnsom | Yeah, I'm seeing that too | 23:54 |
johnsom | Ah, ok, so the new default is: # agent_server_network_dir = /etc/netns/amphora-haproxy/network/interfaces.d/ | 23:55 |
johnsom | So it is putting them in that path | 23:55 |
bogdan | in my conf agent_server_network_dir = /etc/network/interfaces.d/, and agent_server_network_file is missing, so in my case plug_vip does not create the file in /etc/netns.... | 23:55 |
johnsom | Ahh, ok, you set that to /etc/network/interfaces.d? | 23:56 |
johnsom | https://github.com/openstack/octavia/blob/master/etc/octavia.conf#L217 | 23:56 |
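For reference, the setting as it appears in that default config, which is best left untouched:
    [amphora_agent]
    # default on master; pointing it at /etc/network/interfaces.d/ (as in bogdan's
    # amphora-agent.conf) makes plug.py write the VIP config outside the namespace overlay
    agent_server_network_dir = /etc/netns/amphora-haproxy/network/interfaces.d/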
bogdan | yes, it seems I was looking at older examples | 23:56 |
johnsom | This is the default config file | 23:56 |
bogdan | indeed :) | 23:56 |
johnsom | Ah, ok | 23:56 |
bogdan | so I need to update it :) | 23:56 |
johnsom | It becomes clearer now | 23:56 |
johnsom | Yeah, those amphora_agent settings really shouldn't be in the config file as they are for internal use by the amphora. They got added back in because people kept seeing them and thinking they were "missing" from the config | 23:57 |
johnsom | They really shouldn't be changed | 23:58 |
bogdan | oh, it was even worse - I had it commented out in my config, so I guess it was taking the old default from somewhere | 23:58 |
johnsom | If it is commented out, it should use: https://github.com/openstack/octavia/blob/master/octavia/common/config.py#L69 | 23:58 |
johnsom | To double check what it is using, you can enable debug in octavia.conf and restart the worker process. That will log the "actual" configuration settings used as it starts up | 23:59 |
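For reference, a minimal sketch of that (debug is a standard oslo.config option in [DEFAULT]; the worker service name may differ per deployment):
    # octavia.conf
    [DEFAULT]
    debug = True
    # then restart the worker so it logs its effective configuration at startup, e.g.:
    # systemctl restart octavia-worker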