Thursday, 2015-02-26

16:17 <georgem1> do you have any recommendations for the partitioning scheme of the compute nodes? (no shared storage, KVM, Ubuntu)
16:17 <georgem1> I'm thinking /boot 500 MB, swap 128 GB (50% of RAM), / 50 GB, and the rest for /var; there will be no memory overcommit, so I don't want to waste more space on swap
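georgem1's proposed layout can be sketched as a table (device names and partition order are assumptions for illustration, not from the log):

```text
# Hypothetical layout on a single RAID container:
/dev/sda1   500 MB      ext2   /boot
/dev/sda2   128 GB      swap          (50% of RAM, no overcommit)
/dev/sda3    50 GB      xfs    /
/dev/sda4   remainder   xfs    /var
```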
16:24 <jlk> georgem1: do you have a reason for splitting off /boot?
16:24 <klindgren> VW_, got some good news on the Neutron errors we have been seeing. I configured the memcached support in neutron that was added in Juno, and the metadata errors have dropped off completely. Now the only errors I see are _heal_instance_info_cache timing out its connection to neutron.
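klindgren doesn't say which option he set; one plausible reading is the Juno-era keystone token caching in neutron's config. A hedged sketch, with the file path and memcached addresses as illustrative assumptions:

```ini
# /etc/neutron/neutron.conf (fragment) -- hedged sketch; the exact
# option klindgren configured isn't named in the log, and the
# host:port values here are illustrative only
[keystone_authtoken]
memcached_servers = 192.0.2.10:11211,192.0.2.11:11211
```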
16:25 <jlk> klindgren: do you get any problems with instance boot and neutron vif plug timeouts?
16:25 <klindgren> We have always had a separate boot partition. However, ours used to be 200 MB, which is now problematic with newer kernels and larger ram disks
16:25 <klindgren> jlk, not really
16:26 <jlk> yeah, tradition is a separate /boot, but that was due to lilo and grub issues with large disks or with LVM setups
16:26 <klindgren> I did see 1-2 errors about vif creation failing due to neutron timeouts
16:26 <georgem1> jlk: /boot usually has to be ext2, and I want to use xfs for everything else, so I carve out a partition for it
16:26 <jlk> which largely isn't a problem these days, so unless you're using a huge disk (more than 2 TB) or a filesystem grub can't understand, a separate partition may be wasted effort
16:26 <klindgren> however, 99% of the errors are either metadata related or instance_info_cache
16:27 <jlk> klindgren: grub supports XFS
16:27 <georgem1> jlk: we are going with 6x2 TB drives, so the RAID container will be quite large
16:27 <klindgren> jlk - I get that - it's just that our build system already partitions it that way by default, so *shrug*
16:27 <VW_> nice, klindgren
16:28 <jlk> georgem1: well, that's a good reason to split off /boot then :)
16:28 <klindgren> I thought grub2 could do gpt partition tables as well
16:30 <jlk> grub2, perhaps.
16:30 <jlk> lots has changed there for secure boot too
16:30 <jlk> and uefi
16:31 <klindgren> VW_, still not fixed though. Thinking this points to some issue with how neutron does database work with multiple servers/workers, causing some sort of lock contention.
16:32 <VW_> I would vote yes
16:32 <VW_> :)
16:33 <VW_> we've had some db contention problems too
16:40 <georgem1> anybody running 2x10 Gb bonded and trunked with openvswitch and Ubuntu 14.04? I need some feedback about performance
16:46 <klindgren> georgem1, active-active or active-passive?
16:49 <georgem1> active/active
16:51 <georgem1> the plan is to have 2x10 Gb going to a ToR, LACP, trunked, and to create openvswitch bridges for management, GRE, storage, monitoring, etc. over different VLANs
16:52 <georgem1> but I heard there were performance issues with openvswitch and bonded links; I'm not sure about the newest version
16:55 <klindgren> eh - the majority of the performance issues I have had with OVS were the result of the flow stuff being single threaded
16:56 <klindgren> however, I don't have 10gig links in active/active
16:56 <klindgren> and I use real vlans instead of gre
16:58 <klindgren> I can tell you that we are looking at removing ovs altogether and going to linux bridging. Mainly because we are using shared networks with real vlans, so ovs buys us exactly nothing. In fact, all packets have to be filtered through both linux bridge and OVS (due to security groups), so we figured it would be simpler/faster to just have linuxbridge.
17:00 <georgem1> ok, but what's the performance like now on a 10 Gb link?
17:02 <georgem1> I don't have control over the number of neutron networks my tenants will create, so the idea of having 3-4000 VLANs across all the switches is not ideal for me; hence I have to go with GRE
18:06 <Ctina> anyone in here using ceilometer? I'm configuring it for the first time and had some dumb questions..
18:07 <klindgren> We used to use it - stopped due to terrible performance and overhead
18:08 <Ctina> I booted an instance and then did a ceilometer meter-list, and I'm not seeing the cpu meters I would expect based on some of the quick guides I've seen
18:09 <Ctina> I see instance:<flavor>, instance.scheduled, etc. along with disk.root.size meters, just not cpu or cpu_util
18:10 <klindgren> some of them are based upon calculations
18:10 <klindgren> I think cpu_util was based upon the average cpu usage over 10 minutes or something like that
18:10 <klindgren> it depended on when the sample was taken
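klindgren's description of cpu_util as an average over the polling window can be sketched as a rate-of-change calculation between two cumulative cpu-time samples. This is a hedged illustration of the idea, not ceilometer's actual code; the function name and numbers are made up:

```python
def cpu_util(cpu_ns_prev, cpu_ns_cur, t_prev, t_cur, vcpus):
    """Percent CPU utilisation between two polling samples.

    cpu_ns_* are cumulative nanoseconds of CPU time consumed by the
    instance; t_* are wall-clock sample times in seconds.
    """
    used = cpu_ns_cur - cpu_ns_prev             # CPU time consumed (ns)
    elapsed = (t_cur - t_prev) * 1e9 * vcpus    # wall time available (ns)
    return 100.0 * used / elapsed

# one vCPU fully busy across a 600 s (10 min) polling interval
print(cpu_util(0, 600e9, 0, 600, vcpus=1))   # 100.0
```

This is why the value "depends on when the sample was taken": a burst of CPU use gets averaged over however much of the interval has elapsed between the two polls.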
18:10 <Ctina> oh okay, that makes sense
18:11 <klindgren> so we updated some of them to put data in faster
18:11 <klindgren> I forget which file we did that in
18:11 <Ctina> ceilometer/pipeline.yaml probably
18:13 <Ctina> I'll mess with some of those intervals and see what I can get, thank you :)
18:13 <klindgren> yeah, pretty sure that was it
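The kind of edit Ctina is describing looks roughly like this: a hedged sketch of a pipeline.yaml source entry with a shorter polling interval (the 60 s value and source/sink names are illustrative; defaults vary by release):

```yaml
# /etc/ceilometer/pipeline.yaml (fragment)
sources:
    - name: cpu_source
      interval: 60          # e.g. poll every minute instead of every 600 s
      meters:
          - "cpu"
      sinks:
          - cpu_sink
```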
18:26 <jlk> any of you have an easy way for end users to see what cinder volume capacity there is?
18:32 <georgem1> jlk: what do you mean by "cinder volume capacity"?
18:32 <jlk> as in how much space is left to make volumes
18:32 <jlk> a customer tried to make 20 volumes at 20 GB each and eventually ran out of space, but there wasn't a way to know that ahead of time
18:33 <georgem1> no, the available capacity is only visible to the cloud admin, unless you change the policy.json
18:33 <georgem1> the user can see what his quota is
18:37 <jlk> how can the cloud admin see it? (because in this case, the user /was/ an admin)
19:45 <georgem1> jlk: I'm afraid the information is not available to the admin either :(
19:45 <jlk> that was the conclusion I came to as well. Seems like a missing bit of usefulness there
19:46 <jlk> as a provider, I'd love to be able to monitor available capacity and pre-plan capacity increases
19:46 <klindgren> jlk, but the cloud is infinite
19:46 <jlk> I can cobble something together by looking at all the backend storage devices, but that requires touching operating system or NAS/SAN bits, rather than poking openstack APIs
19:47 <georgem1> jlk: https://blueprints.launchpad.net/cinder/+spec/list-backends-and-capabilities
19:48 <jlk> registered in 2013, not touched since :(
19:48 <georgem1> jlk: as admin, you could run "cinder list --all-tenants" and add up the volumes
19:49 <georgem1> jlk: as admin, you should also know the total capacity of your storage (ceph pool, NAS, local filesystem, etc.)
19:49 <jlk> "should", but I'm dealing with 10s of clouds
19:50 <jlk> someday 100s
19:50 <jlk> programming that into an alert or metric takes... effort
19:57 <georgem1> jlk: query the cinder db maybe
19:58 <georgem1> https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/zabbix/files/scripts/monitoring.conf
21:37 <jlk> Can anybody tell me what the difference is between a server "image" create and a server "snapshot" create?
23:39 <klindgren> jlk - aside from glance needing special flags to see them, I can't
23:39 <jlk> huh.
23:39 <klindgren> one thing I did notice is that snapshots have a serverid component - not really sure what that means though
23:39 <klindgren> [14:36]  <jlk> Can anybody tell me what the difference is between a server "image" create and a server "snapshot" create?
23:40 <klindgren> iirc when doing a nova image-list you get different results than glance image-list
23:40 <klindgren> you have to pass another flag to glance to see snapshots
23:40 <jlk> yeah, the "huh" was a "yeah, that's kind of what I thought too."
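The flag klindgren is thinking of isn't named in the log. A hedged sketch of how one might compare the two listings, assuming the glance v1 CLI and the image properties nova sets on snapshots:

```shell
nova image-list      # includes this tenant's server snapshots
glance image-list    # may not show snapshots by default

# snapshots are glance images carrying extra nova-set properties,
# which is the "serverid component" klindgren noticed; filtering on
# image_type here is an assumption about those property names:
glance image-list --property-filter image_type=snapshot
```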

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!