Thursday, 2015-12-10

00:06 *** dimtruck is now known as zz_dimtruck
00:08 *** rook has joined #openstack-tailgate
00:27 *** rook has quit IRC
00:27 *** fnaval has joined #openstack-tailgate
00:28 *** fnaval has quit IRC
03:33 *** nicholasgoracke has joined #openstack-tailgate
04:11 *** jasonsb has joined #openstack-tailgate
05:21 *** rook-desktop has quit IRC
05:42 *** nicholasgoracke has quit IRC
07:54 *** nicholasgoracke has joined #openstack-tailgate
07:58 *** nicholasgoracke has quit IRC
09:26 *** hogepodge has quit IRC
09:28 *** hogepodge has joined #openstack-tailgate
10:47 *** rook has joined #openstack-tailgate
14:06 *** chris_hunt has joined #openstack-tailgate
14:25 *** rook has quit IRC
14:39 *** dkranz has joined #openstack-tailgate
14:46 *** nicholasgoracke has joined #openstack-tailgate
14:50 *** rook has joined #openstack-tailgate
15:12 *** malini has joined #openstack-tailgate
15:22 *** zz_dimtruck is now known as dimtruck
15:46 *** Leom has joined #openstack-tailgate
15:46 *** Leom_ has joined #openstack-tailgate
15:50 *** Leom has quit IRC
16:28 *** rook has quit IRC
16:53 *** sabeen has joined #openstack-tailgate
16:54 *** rook has joined #openstack-tailgate
16:55 *** sabeen3 has joined #openstack-tailgate
16:58 *** sabeen has quit IRC
16:59 *** gema_ is now known as gema
17:00 *** gema is now known as Guest68041
17:01 *** Guest68041 is now known as gema_
17:01 <hockeynut> I will be a few mins late
17:01 <gema_> hockeynut: ack
17:01 <gema_> having problems with my nick
17:02 <gema_> whatever, I will start
17:02 <gema_> #startmeeting tailgate
17:02 <openstack> Meeting started Thu Dec 10 17:02:29 2015 UTC and is due to finish in 60 minutes. The chair is gema_. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:02 <openstack> The meeting name has been set to 'tailgate'
17:02 <clee> o/
17:02 <gema_> #topic rollcall
17:02 <gema_> clee: hello!
17:02 <clee> hey gema_ :)
17:03 <gema_> hi!
17:03 <gema_> I was emailing spyderdyne, he seems to be waiting for us in a hangout
17:04 <gema_> malini: o/
17:04 <malini> o/
17:04 <malini> I was confused about the time again - but that's me :D
17:04 <gema_> don't worry
17:05 <gema_> so, I haven't prepared any particular agenda because it has been so long that I thought we should recap
17:05 <gema_> #topic where are we, where are we going
17:06 <malini> is someone grabbing spyderdyne from the hangout?
17:06 <gema_> I sent him an email saying we were here
17:06 <malini> cool
17:06 <gema_> he is downloading an IRC client and will be here soon
17:07 <gema_> maybe we should give him and hockeynut a couple of mins
17:07 <malini> yes
17:07 *** spyderdyne has joined #openstack-tailgate
17:07 <gema_> meanwhile, how's life?
17:08 <spyderdyne> in the words of the great Wu-Tang Clan, life is hectic :)
17:08 <gema_> spyderdyne: lol, welcome :)
17:08 <spyderdyne> thanks
17:08 <gema_> spyderdyne: we were going to do some recap and see where we are and where we are going
17:08 <gema_> since it's been so long since the last meeting
17:09 <spyderdyne> I had sent out a weekly hangouts invite, at the very least to have a weekly meeting reminder go out
17:09 <gema_> spyderdyne: the reminder works great for me
17:09 <spyderdyne> and had great difficulty with the simple arithmetic of setting the timezone
17:09 <gema_> but I didn't see the hangout
17:09 <malini> that is NOT simple arithmetic
17:09 <gema_> spyderdyne: I use a website for that
17:09 <spyderdyne> there is a UTC trick that almost works, but not reliably
17:09 <spyderdyne> actually, I googled it
17:09 <spyderdyne> lol
17:09 <gema_> haha
17:10 <malini> timezones are bad enough, and then they change time every 6 months
17:10 <gema_> yep
17:10 <spyderdyne> right
17:10 <malini> my brain is not complex enough to handle this :/
17:10 <spyderdyne> to save whale oil reserves
17:10 <spyderdyne> the problem is that we work on complex patterns all day, so something simple like what time it will be next Thursday is suddenly complicated
17:11 *** ig0r_ has joined #openstack-tailgate
17:11 <gema_> absolutely
17:11 <gema_> so maybe I will send an email 1 hour prior to the meeting every Thursday
17:11 <gema_> that way everyone knows it's coming
17:11 <malini> now that we have figured it out, we should be ok
17:12 <spyderdyne> that might help actually
17:12 <gema_> because adding the meeting to the calendar, if you are not invited directly, didn't prove easy for me either
17:12 <gema_> ok, will do that
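For anyone doing the same arithmetic, delegating the timezone math to a library avoids both the DST traps and the manual UTC offsets lamented above. A minimal Python sketch (assuming Python 3.9+ for the stdlib zoneinfo module; the meeting slot and attendee zones are illustrative):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# Pin the meeting to 17:00 UTC (illustrative; matches this log's slot).
meeting_utc = datetime(2015, 12, 17, 17, 0, tzinfo=ZoneInfo("UTC"))

# Convert once per attendee zone; the library handles DST transitions.
for tz in ("America/Chicago", "Europe/London", "Asia/Kolkata"):
    local = meeting_utc.astimezone(ZoneInfo(tz))
    print(f"{tz:20s} {local:%a %Y-%m-%d %H:%M %Z}")
```

Pinning the canonical time to UTC and converting outward, rather than the reverse, is what makes "what time is it next Thursday" a one-liner even across a DST change.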
17:12 <gema_> now, regarding our purpose and what we wanted to do originally
17:12 <hockeynut> done
17:13 <hockeynut> this is the meeting now, in this IRC channel?
17:13 <gema_> has anyone checked what the openstack performance team are doing?
17:13 <clee> hockeynut: yes
17:13 <gema_> hockeynut: we've always been on IRC except when we were trying to rush a presentation together
17:13 <gema_> I'd say IRC is better in the sense that there is a written log
17:13 <gema_> for the people that cannot make it
17:13 <gema_> and taking notes is not necessary :)
17:14 <hockeynut> ok, just wanted to be sure I'm in the right place :-D
17:14 <gema_> and more importantly, you are at the right time ;)
17:14 <malini> +1 for IRC
17:14 <malini> I would have missed this meeting otherwise
17:14 <gema_> alright
17:14 <spyderdyne> I believe we are required to have IRC content just to let the rest of the community stay up to date with our meetings, but I will keep the hangout alive each week in case someone needs to present or anyone needs to discuss offline
17:15 <gema_> spyderdyne: sounds good
17:15 <spyderdyne> so, I missed the last 2 or 3 meetings due to workload
17:15 <spyderdyne> but I have some updates
17:15 <gema_> spyderdyne: we are all ears
17:15 <gema_> (eyes)
17:16 <spyderdyne> I know some of you shied away when I presented a scale-centric project for the summit presentation, which is fine
17:16 <spyderdyne> we are currently working with Red Hat, Intel, and others to provide scale testing components and methodologies to the OpenStack governance board
17:17 <gema_> that sounds really good
17:17 <spyderdyne> our current efforts are to kick the tires on the mythos components, which I am happy to say are working for the most part
17:17 <spyderdyne> we decided to abandon the idea of scale testing instances that don't do anything
17:17 <gema_> spyderdyne: is that tied in with the new OpenStack performance testing group?
17:17 <spyderdyne> i.e. cirros images, micro micros, etc.
17:18 <spyderdyne> I have had no contact with the new OpenStack perf testing group,
17:18 <spyderdyne> but our partners may be tied into it
17:18 <gema_> ack
17:18 <spyderdyne> I am using off-the-shelf linuxbenchmarking.org components, and some open source projects
17:19 <spyderdyne> our next phase will be to enter the arena of a 1,000-hypervisor Intel OpenStack data center
17:19 <spyderdyne> we will do testing with the other interested parties there, and after the 1st of the year it will double in size
17:19 <hockeynut> spyderdyne: that's the OSIC (Rackspace + Intel)?
17:19 <spyderdyne> yes
17:19 <hockeynut> sweet
17:20 <gema_> spyderdyne: what will double in size?
17:20 <gema_> the cloud?
17:20 <spyderdyne> the goal is to find common ground, and provide some sanity to the scale testing and performance testing methodologies
17:20 <spyderdyne> the 1,000 hypervisors become 2,000
17:20 <spyderdyne> :)
17:20 <gema_> wow, awesome
17:21 <malini> this is really cool!
17:21 <spyderdyne> each group has things they are using internally, so it looks like it might be an arms race to see who has the most useful weapons for attacking a cloud and measuring the results
17:21 <gema_> spyderdyne: is rally anywhere in the mix?
17:21 <spyderdyne> the mythos project is my contribution, and looks to be a nuclear device
17:22 <spyderdyne> rally is being used in heavily modified forms by Cisco and Red Hat/IBM
17:22 <spyderdyne> we have the odin wrapper to chain load tests, and Red Hat has IBMCB which does something similar, but addresses some of rally's shortcomings
17:23 <spyderdyne> it also behaves differently in that it spins up things like a normal rally test,
17:23 <spyderdyne> but leaves them there to run multiple tests against, and then tears them down as a separate step
17:23 <spyderdyne> currently rally performs build, test, and teardown (not so good at the teardown part…) for every test run
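The difference spyderdyne describes comes down to fixture lifetime: rally couples build/test/teardown into every run, while the IBMCB-style flow provisions once, runs many tests against the same resources, and tears down as a separate final step. A toy Python sketch of the two shapes (the build/teardown helpers are hypothetical, not code from either tool):

```python
import contextlib

def build():            # hypothetical: provision instances/networks once
    print("provisioning fixtures")
    return ["vm-1", "vm-2"]

def teardown(fx):       # hypothetical: explicit cleanup step
    print(f"tearing down {fx}")

@contextlib.contextmanager
def fixtures():
    fx = build()
    try:
        yield fx
    finally:
        teardown(fx)    # cleanup always runs, as its own step

def run_test(name, fx):
    print(f"running {name} against {fx}")

# rally-style: build + test + teardown repeated for every test run
for test in ("boot-storm", "net-saturation"):
    with fixtures() as fx:
        run_test(test, fx)

# IBMCB-style (as described above): build once, run many, tear down once
with fixtures() as fx:
    for test in ("boot-storm", "net-saturation"):
        run_test(test, fx)
```

The second shape also makes leaked resources easier to spot, since there is exactly one teardown to audit per session instead of one per test run.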
17:24 <spyderdyne> my team abandoned rally b/c we didn't feel like we could trust the results, and there is a magic wizard inside that does things we can't track or account for
17:24 <spyderdyne> :)
17:24 <gema_> yeah, sounds sensible
17:25 <spyderdyne> we are at 1,000 instances now and working out some minor bugs with our data center
17:26 <gema_> spyderdyne: and what metrics are you interested in?
17:26 <gema_> time to spin up 1,000 instances?
17:26 <spyderdyne> then we will push to as many instances as we have vCPUs for and see if we can shut down neutron
17:26 <gema_> spyderdyne: you don't overcommit vCPUs?
17:26 <gema_> (for the testing, I mean)
17:27 <spyderdyne> 1. if neutron can be overloaded with traffic, what level of traffic breaks it, and which component breaks first
17:27 <spyderdyne> (SLA)
17:28 <gema_> ack
17:28 <spyderdyne> 2. what linuxbenchmarking.org measurements do we get on our platform for each flavor?
17:29 <spyderdyne> 3. how many instances that are actually doing work of some kind can a hypervisor support
17:30 <gema_> spyderdyne: and you determine what instances are doing work via ceilometer?
17:30 <spyderdyne> to this end we are setting instances on private subnets, having them discover all the other similar hosts on their private subnets, and having them run siege tests against each other using bombardment scale tests, pushing the results to our head node for reporting
17:30 <gema_> or on the load of the host
17:30 <spyderdyne> we don't use ceilometer at all for this
17:30 <gema_> ok
17:30 <spyderdyne> the head node has a web server that clients check in with every 10 minutes via cron
17:31 <spyderdyne> we write a script named .dosomething.sh and place it in the monitored directory
17:31 <gema_> ok
17:31 <gema_> ack, I understand how it works, thx
17:31 <spyderdyne> we then move it from .dosomething.sh to dosomething.sh, and the clients check in on a 10-minute offset to see if there is something new for them to run.
17:31 <spyderdyne> if there is, they run it
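The dot-file rename described above is a simple publish gate: a script written as .dosomething.sh is invisible to clients until the rename to dosomething.sh publishes it whole, so no client ever fetches a half-written job. A minimal client-side check-in along those lines (assuming Python 3; the head-node URL, job listing, and state file are illustrative, not mythos's actual layout):

```python
#!/usr/bin/env python3
"""Toy check-in client; run from cron, e.g. */10 * * * * /usr/local/bin/checkin.py"""
import os
import subprocess
import urllib.request

HEAD_NODE = "http://head-node.example/jobs/"   # hypothetical head-node web server
SEEN_FILE = os.path.expanduser("~/.last_job")  # remembers what we already ran

def fetch(name: str) -> bytes:
    with urllib.request.urlopen(HEAD_NODE + name, timeout=30) as resp:
        return resp.read()

def main() -> None:
    # Hypothetical listing: the head node publishes the current job name;
    # dot-prefixed (unpublished) scripts never appear here.
    latest = fetch("current.txt").decode().strip()  # e.g. "dosomething.sh"
    seen = open(SEEN_FILE).read().strip() if os.path.exists(SEEN_FILE) else ""
    if latest and latest != seen:
        script = "/tmp/" + latest
        with open(script, "wb") as f:
            f.write(fetch(latest))
        os.chmod(script, 0o755)
        subprocess.run([script], check=False)       # run it; results reported separately
        with open(SEEN_FILE, "w") as f:
            f.write(latest)

if __name__ == "__main__":
    main()
```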
17:32 <spyderdyne> in our scale tests, MongoDB has been unable to keep up with even a few thousand instances
17:33 <spyderdyne> so it hasn't been useful to us
17:33 <gema_> spyderdyne: and what are you using mongo for?
17:33 <spyderdyne> ceilodb
17:33 <gema_> ack
17:33 <gema_> there are scalability issues with ceilometer + mongo
17:33 <gema_> we've also been unable to use it with hundreds, not thousands
17:33 <gema_> use it as in not only storing data but also mining it later
17:34 <gema_> but that's an issue for another day, I guess
17:34 <gema_> spyderdyne: everything you guys are doing is very interesting
17:34 <spyderdyne> I was hoping someone would put more support behind the gnocchi project, but in our case, with Ceph for all our storage, using that may cause other issues as well
17:35 <spyderdyne> the issue is all the writes needed
17:36 <spyderdyne> it makes sense to distribute them, and swift makes sense to use, but if you are using your object store as a block store as well, it just shifts the issue from adding another db platform to scale out in the control plane to making sure the storage can keep up
17:37 <gema_> spyderdyne: are results from this testing going to be available publicly?
17:38 <spyderdyne> I wanted to ask, since it wasn't very well received: should I remove all the mythos stuff from our team repo and just keep it in my github from now on?
17:38 <spyderdyne> the results from the Intel testing will be made public
17:38 <gema_> spyderdyne: you can do whatever you prefer
17:38 <malini> spyderdyne: 'well received' where?
17:38 <gema_> spyderdyne: I don't think anyone is against your project
17:39 <gema_> spyderdyne: the only issue was making us present this when we knew nothing about it, without you there
17:39 <spyderdyne> the results from our internal testing will be made public once we prove that our hardware + architecture blows the doors off of, or at least provides a high-water mark compared to, other platforms
17:39 <spyderdyne> :)
17:39 <malini> spyderdyne: my only concern was presenting this as how everybody tests their installation of openstack, when you were the only one who knew what it was
17:40 <malini> spyderdyne: it has nothing to do with how good mythos is - which I am sure is pretty cool
17:40 <gema_> absolutely
17:40 <gema_> spyderdyne: you scared us
17:40 <malini> & I can't even handle timezones :D
17:40 <spyderdyne> lol
17:41 <malini> spyderdyne: but if there is anything we can do to make mythos better, I would love to help :)
17:41 <gema_> yep, agreed, unfortunately my 8 hypervisors are already overloaded x)
17:41 <gema_> and it doesn't look like they'd make much of a difference
17:41 <malini> :D
17:42 <gema_> spyderdyne: I think the work you are doing is keeping this group alive
17:42 <gema_> spyderdyne: and hopefully we'll find a way to chip in
17:42 <malini> +1
17:43 <gema_> I will present what I have been working on next week
17:43 <gema_> and explain it in detail like you've been doing
17:43 <spyderdyne> I would ask that any of you who are able to spin up an Ubuntu VM with VT support check out the code and give it a spin
17:43 <gema_> it's a Jenkins plugin + a REST API for OpenStack-on-OpenStack functional testing
17:44 <spyderdyne> I could use the feedback, and my docs definitely need improvement
17:44 <gema_> spyderdyne: VT support?
17:44 <spyderdyne> Vanderpool
17:44 <spyderdyne> hardware virtualization
17:44 <gema_> yep, I can do that
17:45 <gema_> how many VMs do I need to install/use mythos?
17:45 <gema_> anyway, don't say anything
17:45 <gema_> I will try and ask questions
17:45 <gema_> you can improve the documentation based on that
17:46 <gema_> spyderdyne: although I am not sure it'll happen before Xmas
17:47 *** Leom_ has quit IRC
17:47 <gema_> #action gema to try mythos on ubuntu with vt support
17:47 <malini> I will try to find an Ubuntu with VT support
17:47 <malini> If I can get hold of one, I'll do it too
17:47 <spyderdyne> 1 Ubuntu 15.04 VM, 2048 MB RAM, 100 GB HDD (until I get packer to work and build a smaller source image)
17:48 <gema_> spyderdyne: so you install it from an image?
17:48 <spyderdyne> as long as your openstack instances support hardware virt in libvirt (they are supposed to), then it should work
17:48 <gema_> spyderdyne: and how do we check that it works?
17:48 <spyderdyne> it's using VirtualBox (boo), and VirtualBox needs VT to run 64-bit guests
17:49 <spyderdyne> http://askubuntu.com/questions/292217/how-to-enable-intel-vt-x
17:49 <gema_> #link http://askubuntu.com/questions/292217/how-to-enable-intel-vt-x
17:49 <spyderdyne> the script has a check built in and will fail gracefully if it isn't supported
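For reference, the standard probe behind that kind of check (and behind the askubuntu link above) is the vmx/svm CPU flag in /proc/cpuinfo on Linux: vmx means Intel VT-x, svm means AMD-V. A stand-alone sketch of such a check, not mythos's actual script:

```python
#!/usr/bin/env python3
"""Fail gracefully if the CPU lacks hardware virtualization support."""
import sys

def has_hw_virt() -> bool:
    # vmx = Intel VT-x, svm = AMD-V; either appears in the "flags" line on Linux.
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    return bool({"vmx", "svm"} & flags)
    except OSError:
        pass
    return False

if __name__ == "__main__":
    if not has_hw_virt():
        sys.exit("hardware virtualization (VT-x/AMD-V) not available; "
                 "enable it in the BIOS or pick a VT-capable instance")
    print("VT support detected, ok to continue")
```

Note that in a cloud instance the flag must also be exposed by the hypervisor (nested virtualization), which is why spyderdyne qualifies it with "they are supposed to".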
17:49 <gema_> ok
17:50 <spyderdyne> it's getting very friendly
17:50 <spyderdyne> :)
17:50 <spyderdyne> I am close to 1,000 commits now
17:50 <gema_> perfect
17:50 <gema_> you've been busy! :D
17:51 <gema_> alright, thanks so much for explaining it, and we'll start helping with compatibility at least
17:51 <gema_> and documentation reviews
17:51 <gema_> we are 10 mins from the end of the meeting
17:51 <gema_> malini: anything from you?
17:52 <gema_> clee: ?
17:52 <gema_> hockeynut: ?
17:52 <hockeynut> I'm good
17:54 <gema_> alright
17:54 <gema_> spyderdyne: do you have anything else?
17:54 <malini> sorry - had to step away for a call
17:54 <gema_> malini: no worries
17:55 <gema_> alright, calling it a day then
17:55 <malini> thanks spyderdyne - this is really cool!
17:55 <gema_> #endmeeting
17:55 <openstack> Meeting ended Thu Dec 10 17:55:29 2015 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
17:55 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/tailgate/2015/tailgate.2015-12-10-17.02.html
17:55 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/tailgate/2015/tailgate.2015-12-10-17.02.txt
17:55 <openstack> Log:            http://eavesdrop.openstack.org/meetings/tailgate/2015/tailgate.2015-12-10-17.02.log.html
17:55 <gema_> thank you all for coming!
17:55 <gema_> spyderdyne: awesome work, thanks!
17:55 <clee> gema_: yeah, nothing new from me, sorry :/
17:55 <gema_> clee: no worries :D
17:56 <gema_> :D thanks for making it
17:56 <clee> trying to figure out why refstack is only running 63 of the 128 tests in the defcore suite on our cluster right now :/
17:56 <gema_> clee: look at your conf
17:56 <gema_> tempest.conf
17:56 <gema_> if there is still one xD
17:56 <gema_> (they were talking in tokyo about doing something to the conf file)
17:56 <clee> I've got a tempest.conf, and it seems to be properly configured: valid image/flavor IDs, valid user accounts, etc.
17:57 <gema_> clee: good luck then :D
17:57 <clee> I mean, if my tempest.conf was broken, wouldn't I get 0 tests running?
17:57 <clee> 63 is just weird.
17:57 <gema_> clee: are they skipped?
17:57 <clee> anyway. yeah.
17:57 <clee> this isn't really the place to debug that :)
17:57 <clee> but thanks!
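For what it's worth, gema_'s "are they skipped?" question points at the usual cause of a partial run: config-driven skips remove tests without failing anything, so a suite can report success while executing only a subset. A toy unittest illustration of the pattern (not tempest's actual code; SERVICE_AVAILABLE stands in for a tempest.conf flag):

```python
import unittest

# Stand-in for a config flag like tempest's service-availability options;
# flip it to False and the test is reported as skipped, not failed.
SERVICE_AVAILABLE = False

class NetworkingTests(unittest.TestCase):
    @unittest.skipUnless(SERVICE_AVAILABLE, "service not available in conf")
    def test_create_network(self):
        self.assertTrue(True)  # placeholder for a real API call

    def test_list_flavors(self):
        self.assertTrue(True)  # always runs

if __name__ == "__main__":
    # With SERVICE_AVAILABLE=False this prints "OK (skipped=1)": the run still
    # passes, but the executed-test count drops - the 63-of-128 symptom.
    unittest.main(verbosity=2)
```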
17:57 <gema_> lol, ok
17:58 <gema_> night all
17:58 *** gema_ is now known as gema
17:59 <hockeynut> have a good day/evening/morning/afternoon y'all!
18:15 *** spyderdyne has quit IRC
18:17 *** sabeen3 has quit IRC
18:28 *** rook has quit IRC
18:33 *** malini has quit IRC
18:55 *** rook has joined #openstack-tailgate
18:59 *** malini has joined #openstack-tailgate
20:04 *** jasonsb has quit IRC
20:24 *** sabeen1 has joined #openstack-tailgate
20:31 *** rook has quit IRC
20:39 *** sabeen3 has joined #openstack-tailgate
20:39 *** sabeen1 has quit IRC
20:43 *** jasonsb has joined #openstack-tailgate
20:58 *** rook has joined #openstack-tailgate
20:58 *** chris_hu_ has joined #openstack-tailgate
21:02 *** chris_hunt has quit IRC
21:03 *** chris_hu_ has quit IRC
21:12 *** malini has quit IRC
21:52 *** dimtruck is now known as zz_dimtruck
22:10 *** hogepodge has quit IRC
22:13 *** hogepodge has joined #openstack-tailgate
22:18 *** moravec has quit IRC
22:19 *** moravec has joined #openstack-tailgate
22:19 *** moravec has quit IRC
22:20 *** moravec has joined #openstack-tailgate
22:21 *** dkranz has quit IRC
22:24 *** moravec has quit IRC
22:33 *** rook has quit IRC
22:59 *** rook has joined #openstack-tailgate
23:21 *** sabeen3 has quit IRC
23:47 *** sabeen1 has joined #openstack-tailgate
