Tuesday, 2016-10-04

*** jmlowe has quit IRC00:04
*** ccneill has joined #openstack-operators00:09
*** ducttape_ has joined #openstack-operators00:13
*** verdurin_ has joined #openstack-operators00:14
*** verdurin has quit IRC00:17
*** Apoorva has quit IRC00:23
*** markvoelker has joined #openstack-operators00:29
*** saneax is now known as saneax-_-|AFK00:30
*** markvoelker has quit IRC00:34
*** ducttape_ has quit IRC00:43
*** ccneill has quit IRC00:43
*** Rockyg has quit IRC00:44
*** ducttape_ has joined #openstack-operators00:52
*** markvoelker has joined #openstack-operators00:54
*** purp has quit IRC01:00
*** ducttape_ has quit IRC01:10
*** cdelatte has quit IRC01:18
*** ducttape_ has joined #openstack-operators01:52
*** ducttape_ has quit IRC02:10
*** bdeschenes has quit IRC02:11
*** ducttape_ has joined #openstack-operators02:14
*** ducttape_ has quit IRC02:26
*** sudipto has joined #openstack-operators03:08
*** sudipto has quit IRC03:08
*** sudipto has joined #openstack-operators03:09
*** sudipto_ has joined #openstack-operators03:09
*** ducttape_ has joined #openstack-operators03:21
*** sudipto has quit IRC03:23
*** sudipto_ has quit IRC03:23
*** ducttape_ has quit IRC03:37
*** hpe-hj has quit IRC03:38
*** sudipto_ has joined #openstack-operators04:07
*** sudipto has joined #openstack-operators04:07
*** saneax-_-|AFK is now known as saneax04:33
*** ducttape_ has joined #openstack-operators04:38
*** ducttape_ has quit IRC04:43
*** admin0 has joined #openstack-operators05:36
*** ducttape_ has joined #openstack-operators05:39
*** admin0 has quit IRC05:42
*** ducttape_ has quit IRC05:44
*** armax has quit IRC05:49
*** Hosam_ has joined #openstack-operators05:54
*** simon-AS559 has joined #openstack-operators05:55
*** armax has joined #openstack-operators06:09
*** armax has quit IRC06:12
*** armax has joined #openstack-operators06:12
*** armax has quit IRC06:13
*** simon-AS559 has quit IRC06:23
*** boogibugs has quit IRC06:33
*** ducttape_ has joined #openstack-operators06:40
*** rcernin has joined #openstack-operators06:43
*** jsheeren has joined #openstack-operators06:44
*** ducttape_ has quit IRC06:45
*** jsheeren has quit IRC06:48
*** pilgrimstack has quit IRC06:50
*** rarcea has joined #openstack-operators06:51
*** tesseract- has joined #openstack-operators06:52
*** pilgrimstack has joined #openstack-operators06:53
*** hieulq has joined #openstack-operators06:54
*** rarcea has quit IRC06:58
*** rarcea has joined #openstack-operators06:58
*** jsheeren has joined #openstack-operators06:59
*** admin0 has joined #openstack-operators07:05
*** matrohon has joined #openstack-operators07:14
*** markvoelker has quit IRC07:24
*** simon-AS559 has joined #openstack-operators07:31
*** pilgrimstack1 has joined #openstack-operators07:34
*** pilgrimstack has quit IRC07:34
*** ahonda has joined #openstack-operators07:41
*** ahonda has quit IRC07:41
*** ducttape_ has joined #openstack-operators07:41
*** ducttape_ has quit IRC07:47
*** hieulq has quit IRC07:47
*** dbecker has quit IRC07:54
*** dbecker has joined #openstack-operators07:56
*** pilgrimstack has joined #openstack-operators07:57
*** pilgrimstack1 has quit IRC07:57
*** hieulq has joined #openstack-operators08:03
*** Hosam has joined #openstack-operators08:13
*** Hosam_ has quit IRC08:15
*** racedo has joined #openstack-operators08:17
*** markvoelker has joined #openstack-operators08:24
*** hieulq has quit IRC08:28
*** markvoelker has quit IRC08:29
*** hieulq has joined #openstack-operators08:33
*** simon-AS559 has quit IRC08:41
*** simon-AS559 has joined #openstack-operators08:55
*** simon-AS5591 has joined #openstack-operators08:58
*** simon-AS559 has quit IRC08:59
*** MrDan has joined #openstack-operators08:59
*** electrofelix has joined #openstack-operators09:00
*** Hosam has quit IRC09:03
*** Hosam has joined #openstack-operators09:03
*** derekh has joined #openstack-operators09:04
*** paramite has joined #openstack-operators09:04
*** verdurin_ is now known as verdurin09:07
*** Hosam has quit IRC09:08
*** bdeschenes has joined #openstack-operators09:56
*** simon-AS5591 has quit IRC09:58
*** zigo has quit IRC10:01
*** zigo has joined #openstack-operators10:04
*** zigo is now known as Guest8478010:05
*** simon-AS559 has joined #openstack-operators10:07
*** simon-AS559 has quit IRC10:08
*** simon-AS559 has joined #openstack-operators10:09
*** simon-AS5591 has joined #openstack-operators10:10
*** simon-AS559 has quit IRC10:13
*** Guest84780 has quit IRC10:14
*** zigo_ has joined #openstack-operators10:16
*** markvoelker has joined #openstack-operators10:26
*** dc_mattj has joined #openstack-operators10:30
*** markvoelker has quit IRC10:31
*** Hosam has joined #openstack-operators10:39
*** ducttape_ has joined #openstack-operators10:45
*** ducttape_ has quit IRC10:49
*** rcernin has quit IRC10:57
*** rcernin has joined #openstack-operators10:58
*** dc_mattj has quit IRC11:02
*** bjolo_ is now known as bjolo11:10
*** sudipto has quit IRC11:12
*** sudipto has joined #openstack-operators11:13
*** Hosam_ has joined #openstack-operators11:20
*** Hosam has quit IRC11:20
*** Hosam_ has quit IRC11:22
*** Hosam has joined #openstack-operators11:22
*** zigo_ is now known as zigo11:23
*** markvoelker has joined #openstack-operators11:27
*** markvoelker has quit IRC11:32
*** georgem1 has joined #openstack-operators11:37
georgem1How do you guys do neutron security group rules auditing? I'm interested in a script to find the security groups allowing ingress access from the world, and any instances with those rules applied11:42
georgem1the "nova list --fields security_groups" output shows the name of the security group applied, which might not be unique ("default" certainly isn't), instead of the security group UUID11:45
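The check georgem1 describes can be sketched as a small filter over Neutron API rule objects, keyed by security group UUID rather than by the ambiguous names. This is a minimal sketch, assuming the standard Neutron security-group-rule fields (`direction`, `remote_ip_prefix`, `remote_group_id`); the helper names are made up for illustration:

```python
# Hypothetical sketch of the audit logic: flag security group rules
# that allow ingress from the whole Internet. A Neutron ingress rule
# with no remote group and no remote_ip_prefix (or a world CIDR)
# matches any source address.
WORLD_PREFIXES = {None, "0.0.0.0/0", "::/0"}

def is_world_open(rule):
    """Return True if a Neutron security group rule allows ingress
    from anywhere (no remote group, world or absent CIDR)."""
    return (
        rule.get("direction") == "ingress"
        and rule.get("remote_group_id") is None
        and rule.get("remote_ip_prefix") in WORLD_PREFIXES
    )

def world_open_groups(security_groups):
    """Map security group UUID -> offending rules, so instances can
    later be matched by group ID instead of by (non-unique) name."""
    flagged = {}
    for sg in security_groups:
        bad = [r for r in sg.get("security_group_rules", [])
               if is_world_open(r)]
        if bad:
            flagged[sg["id"]] = bad
    return flagged
```

With openstacksdk, the input could come from `conn.network.security_groups()`, and instances could then be matched against the flagged UUIDs via their ports' `security_group_ids`, sidestepping the name ambiguity that `nova list --fields security_groups` has.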
*** bdeschenes has quit IRC11:45
*** bdeschenes has joined #openstack-operators11:46
*** bdeschenes has quit IRC11:46
*** georgem1 has quit IRC12:03
*** ducttape_ has joined #openstack-operators12:09
*** VW has joined #openstack-operators12:15
*** furlongm has quit IRC12:15
*** furlongm has joined #openstack-operators12:16
*** maticue has joined #openstack-operators12:16
*** georgem1 has joined #openstack-operators12:22
*** bdeschenes has joined #openstack-operators12:28
*** mriedem has joined #openstack-operators12:30
*** VW has quit IRC12:30
*** cdelatte has joined #openstack-operators12:33
*** ducttape_ has quit IRC12:33
*** markvoelker has joined #openstack-operators12:33
*** gaudenz has joined #openstack-operators12:34
*** jmlowe has joined #openstack-operators12:36
*** bdeschenes has quit IRC12:36
*** simon-AS5591 has quit IRC12:45
*** ducttape_ has joined #openstack-operators12:50
*** sudipto_ has quit IRC13:00
*** sudipto has quit IRC13:00
*** Hosam has quit IRC13:02
*** paramite has quit IRC13:03
*** ducttape_ has quit IRC13:03
*** simon-AS559 has joined #openstack-operators13:04
*** simon-AS5591 has joined #openstack-operators13:05
*** jmlowe has quit IRC13:06
*** sudipto has joined #openstack-operators13:06
*** simon-AS559 has quit IRC13:09
*** sudipto_ has joined #openstack-operators13:09
*** mriedem has quit IRC13:18
*** VW has joined #openstack-operators13:24
*** VW has quit IRC13:26
*** VW has joined #openstack-operators13:27
*** jmlowe has joined #openstack-operators13:28
*** VW has quit IRC13:33
*** VW has joined #openstack-operators13:34
*** ducttape_ has joined #openstack-operators13:35
*** pilgrimstack has quit IRC13:36
*** MrDanDan has joined #openstack-operators13:42
*** fifieldt has joined #openstack-operators13:46
*** MrDan has quit IRC13:46
*** jamesdenton has joined #openstack-operators13:49
*** pilgrimstack1 has joined #openstack-operators13:49
*** mihalis68 has joined #openstack-operators13:50
*** ducttape_ has quit IRC13:53
*** shintaro has joined #openstack-operators13:57
*** mperazol has joined #openstack-operators13:59
fifieldthello!14:00
fifieldt#startmeeting Ops Meetups Team14:00
openstackMeeting started Tue Oct  4 14:00:26 2016 UTC and is due to finish in 60 minutes.  The chair is fifieldt. Information about MeetBot at http://wiki.debian.org/MeetBot.14:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:00
med_\o14:00
openstackThe meeting name has been set to 'ops_meetups_team'14:00
shintarohi14:00
fifieldtanyone here for the meeting?14:00
med_mdorman, yes, end of 0 for real v214:00
med_and likely P for Neutron to catch up14:00
med_\o is14:00
mihalis68I am here14:00
fifieldtNB: If you're new, or just idling in the channel, be sure to have read:14:00
fifieldt#link https://wiki.openstack.org/wiki/Ops_Meetups_Team14:00
fifieldtfor background.14:00
fifieldtCheck out our agenda items at:14:00
fifieldt#link https://etherpad.openstack.org/p/ops-meetups-team14:00
fifieldtAs always: please write your name down on the agenda etherpad as a way of introduction, since we're a new crew.14:00
fifieldtSecondly, if there's something else burning to put on the agenda, please add it to that same etherpad.14:01
fifieldthow is everyone?14:01
* VW is lurking, but must stay focused on the meeting he's in and not type much14:01
serverascodehowdy14:01
claytono/14:01
mihalis68I am in great shape and that feels good to say!14:01
fifieldtwohoo!14:01
fifieldtno miserable incidents for mihalis68 is a very good thing14:01
med_+1 mihalis6814:01
* med_ remembered to join thanks VWs blastogram14:02
med_and I added a calendar invite to myself...14:02
fifieldtSo, a quick runthrough of last week14:02
fifieldt#topic review of last week's action items14:02
fifieldtmassive progress on the barcelona agenda - we'll talk about that soon14:02
fifieldtlooks like nova is on14:03
med_cool14:03
fifieldta call for expressions of interest went out to the ML thanks to mihalis6814:03
fifieldtfor hosting midcycles14:03
fifieldtand a call for more lightning talks at BCN14:03
mihalis68yes, but it appears quite buried... no followups at all. Perhaps it wasn't a good time14:03
fifieldtindeed14:03
fifieldtwell let's add that to the agenda14:04
fifieldtotherwise, no hanging items14:04
fifieldt#topic     Check progress on agenda14:04
mihalis68should we tweet the link to that post?14:04
fifieldtthis is for Barcelona14:04
med_so some confusion on Lightning Talks that we should clarify herein.14:04
mihalis68or copy it to a permanent web page?14:04
fifieldt#link https://docs.google.com/spreadsheets/d/1EUSYMs3GfglnD8yfFaAXWhLe0F5y9hCUKqCYe0Vp1oA/edit#gid=80351347714:04
fifieldtWe had 3 questions from the etherpad:14:05
fifieldt1)     confirm that the NFV sessions in this calendar are duplicates now that the WG has had sessions scheduled through the separate WG process.14:05
fifieldt2)     There are 3 or so empty slots that we can still look at, but the team wondered if leaving the ones late in the day open might be good for overflow/follow up from earlier sessions14:05
fifieldt3)     #action - VW to email WG leaders and finalize the intro session14:05
fifieldtoh, and14:05
fifieldt4)     UC sessions at the summit - Edgar14:05
fifieldtand adding14:05
fifieldt5) what to do about lightning talks14:05
med_kk thanks14:05
fifieldtis curtis here?14:05
VW6) confirm Nova schedule14:05
serverascodeyup I'm here14:06
* med_ doesn't grok (6)14:06
fifieldtserverascode: did you get NFV slots in the working group sessions?14:06
*** matrohon has quit IRC14:06
fifieldtI see them on the agenda14:06
fifieldtlet me link14:06
serverascodeI haven't received any word about my submission14:06
serverascodebut maybe they just haven't sent it out yet?14:06
fifieldt#link https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16768/openstack-operators-telecomnfv-functional-team14:07
fifieldtIt's on there14:07
fifieldtand has your name on it14:07
serverascodeha, ok then :)14:07
fifieldtso, are the sessions in the ops meetup still required?14:08
serverascodecertainly not both14:08
fifieldtok, so kill one and you can use the other?14:09
fifieldtgiving you two slots overall14:09
serverascodethat would be awesome14:09
fifieldtone tuesday one wednesday14:09
fifieldtok14:09
fifieldtpreference for killing 11:25 or 12:15 start?14:09
fifieldtor actually, we may need to move around14:09
fifieldtto make use of the double14:09
*** ducttape_ has joined #openstack-operators14:09
serverascodesure, whatever you feel best14:09
fifieldtlet's revisit after we talk more about agenda14:09
serverascodek14:09
fifieldt#agreed 1 - NFV only needs one slot in ops14:10
fifieldtok on to number 214:10
fifieldtlooking at the draft agenda14:10
fifieldtdid we miss anything?14:10
fifieldtor shall we re-jig so the ones later in the day are empty14:10
*** mcunietti has joined #openstack-operators14:10
fifieldtkeeping in mind that "Cross Project" sessions are happening in the afternoon (after the green finishes)14:10
fifieldtand "Cross Project" sessions are the design summit sessions likely to be useful to ops14:11
fifieldtthoughts?14:11
*** ducttape_ has quit IRC14:11
fifieldtI believe shintaro left this comment on the spreadsheet :)14:11
fifieldt"Maybe we can leave these empty sessions open for teams who need more time to discuss."14:11
med_stupid question: what is "Baremetal Deploy"?14:11
shintaroyes, but I wasn't aware of the cross project14:11
*** ducttape_ has joined #openstack-operators14:12
fifieldtmed_: excellent question, let me add:14:12
fifieldt7) Figuring out session names and descriptions14:12
med_:-)14:12
fifieldtok, so, maybe aim to leave the later slots blank?14:13
fifieldtif there's nothing people want to add?14:13
fifieldtdo people know which slots we're talking about?14:14
fifieldtthe 5:05 - 6:35pm purple ones14:14
mcuniettinope sorry, I just joined :-)14:14
shintaroyes14:14
fifieldtmcunietti: you'll need this link: https://docs.google.com/spreadsheets/d/1EUSYMs3GfglnD8yfFaAXWhLe0F5y9hCUKqCYe0Vp1oA/edit#gid=80351347714:14
mcuniettithanks tom14:14
med_one point (minor) about leaving blank spots: folks will likely fill in their Summit schedule with non-OPS talks (just as a natural fill-in-all-the-holes type of behavior)14:14
shintaroF9:F10, G10 in spreadsheet14:14
fifieldtgood point med_14:15
fifieldtThe good news is that Ops Tools and Ops War Stories runs all week14:15
mihalis68yes, I am the only attendee from my employer this time, have to balance ops meet ups with other things14:15
med_so a placeholder title like "ops follow up" for the blank slots may be useful14:15
fifieldtok, placeholders I can do14:16
claytonwell, or adjust the schedule so there is less overlap across tracks if we're not going to need the time14:16
med_nod14:16
claytonwith 7 rooms, there is bound to be a lot of overlap14:16
fifieldtoverlap is indeed huge this time14:17
fifieldtbut keep in mind we always have 7 slot overlap regardless of session time - it's just overlap with different stuff14:17
fifieldtlooking at the proposed list of cross-project sessions (the white space after our green bits), I think people willenjoy a lot of those as much as the ops sessions14:18
mperazolquestion: where is the lightning talks in the schedule?14:18
fifieldtmperazol: It looks like they are at 5:05 to 6:35pm14:18
fifieldtin Room 11514:18
fifieldtOps War Stories track14:18
mperazolok, thx14:18
* fifieldt is looking for someone to finish this :)14:19
fifieldtok, this might be too hard14:20
fifieldtlet's move on14:20
shintaromaybe leave them open as ops followup and wait for someone to complain?14:20
*** matrohon has joined #openstack-operators14:20
fifieldtlet's see how the meeting pans out :)14:21
fifieldt4)     #action - VW to email WG leaders and finalize the intro session14:21
fifieldtso that's taken care of at least14:21
fifieldt5)     what to do about lightning talks14:21
fifieldtwe currently have 4 lightning talks14:21
fifieldtthere are two 40-minute slots reserved for these in the Ops War Stories track of the conference14:21
fifieldtif we don't get more lightning talks, I will have to hand the slots back14:22
mcuniettihow does the self-proposal process work? I mean, don't these talks need to be approved and voted by the community?14:22
serverascodeweird, they used to be more popular14:22
mperazolshould we send another call for proposals to the dist list?14:22
fifieldtmcunietti: to date, we've taken what we had in FIFO format14:22
med_not for lightning14:23
med_lightning is totally ad hoc14:23
fifieldt#link https://etherpad.openstack.org/p/BCN-ops-lightning-talks14:23
med_and for these OPS sessions, we own the slots14:23
claytonI'm late to the game on the schedule making, but it seems like we're really taking advantage of having 7 rooms early in the day and have tons of overlap, but the afternoon schedule is fairly light14:23
claytondo we not have some of those slots later in the day?14:23
mrhillsmansorry so late14:23
fifieldtcorrect clayton14:23
med_so I was going to volunteer to moderate a/both LTs as mdorm is not able to attend.14:23
fifieldtbecause the summit starts on tuesday, we're all condensed14:23
fifieldtso the design summit starts immediately after us14:24
med_I think the question he's asking/proposing: Move some of the morning things to the afternoon14:24
fifieldtwith cross project sessions14:24
fifieldtwe only have the green and purple space14:24
med_so the morning isn't triple booked14:24
claytonok, I understand your previous comment now.  We should show those as blocked off on the schedule then14:24
fifieldtdone14:25
claytonpersonally I'm probably at least as interested in the cross-project stuff as the ops sessions, so I won't complain anymore :)14:25
med_so the ops work session in F10-11 is the only "spare" slots14:25
med_erm, 9 and 1014:25
med_and G 1014:25
fifieldtand Ops Fishbowl 4C14:25
fifieldtin D614:26
claytonhopefully this will be less of an issue once the PTG is in place14:26
med_gotcha14:26
fifieldtok, so lightning talks?14:27
fifieldtam I handing back the slots?14:27
fifieldtor did we want to have another go?14:27
med_Did we always have LTalks lined up before the summit?14:27
fifieldtyessir14:27
med_kk14:27
fifieldtfor this time, it's a bit more necessary, since they're in the "big" conference :)14:28
med_then probably scrap one slot at least due to lack of uptake14:28
serverascodeI can't really comment b/c I put one in :)14:28
shintaroI am trying to get one LT from my colleague. I can try more.14:28
fifieldtthat would be awesome :)14:28
fifieldtmaybe if we all just tell our friends14:28
fifieldtand twitterize or something14:28
mihalis68to be honest, I am overwhelmed trying to make sense of attending this conference solo as it is14:28
fifieldtit will pick up14:28
fifieldtthis is fine mihalis68 :)14:29
mihalis68without even considering also giving a little talk14:29
fifieldtok, I'll leave it for one more week :s14:29
fifieldtnext up14:29
fifieldt5)     confirm nova schedule14:29
fifieldtthis was basically working out when nova was on in the design summit14:29
fifieldtto work out when to put it in ops meetup14:29
fifieldtbut, we're actually good in any green slot14:29
fifieldtsince we are by ourselves :)14:30
fifieldtso that solves that and VW, I think the current nova slot is fine on that basis14:30
fifieldt6)     Figuring out session names and descriptions14:30
fifieldtOK, so, we need to put all these on the official agenda ASAP14:30
fifieldtI can do that tonight14:30
fifieldtbut we need to lock down14:30
fifieldt-times14:30
fifieldt- session names14:30
fifieldt-session descriptions14:30
fifieldtThis is also related to14:30
fifieldt7) finding moderators for all the sessions14:31
fifieldtwe are very behind on this - it normally takes at least 2 weeks to sort out a moderator for every session14:31
fifieldtso, looking for volunteers to make session names and descriptions, guess which moderators would like which session and email them to ask :)14:31
fifieldtwill also take any comments about clashes on the agenda14:32
med_so do we just email you14:32
med_or add comments... will add comments14:32
fifieldtidea for feedback:14:32
fifieldthttps://etherpad.openstack.org/p/BCN-ops-meetup -- we make the etherpads on there14:32
fifieldtand then put the title and description in the etherpad14:32
fifieldtSee eg the link I made for nova just then14:33
med_gotcha14:34
* fifieldt has talked too much today14:34
* fifieldt will be quiet for a bit to let others talk about bcn agenda14:34
mrhillsmanyou're making up for lost time :)14:35
mrhillsmanperfectly ok hehe14:35
mihalis68Ok i am going to say something a bit negative. This process is too painful14:35
mihalis68the etherpad->google doc->ether pad jumps just shed information left and right14:35
mihalis68like one session is "hardware"14:35
mihalis68if you trace back, sure, there are interesting things on the submission etherpad14:36
fifieldtcorrect14:37
mihalis68I am not blaming the work laying out the sessions on the itinerary, that's hard too. But, I feel like we need to look at this document-sprawl14:37
fifieldtwe do have a tool to do this better14:37
fifieldtcheddar14:37
fifieldtttx wrote it for the design summit14:37
mihalis68is that the foundation one for main sessions14:37
fifieldtbut it doesn't quite do what we need14:38
fifieldtnaw, that's a separate tool14:38
fifieldtbut mihalis68 is totally right14:38
serverascodeI totally forgot about moderators14:38
fifieldtdo we want to take an action to write up requirements?14:38
mihalis68is cheddar close to being what would save us here?14:39
mihalis68I'd rather enhance an existing tool than crack open a discussion on some dream tool nobody has started14:39
fifieldtI'll write something up for you to read14:39
fifieldtpost on ops ML14:39
fifieldt#action fifieldt to write up cheddar14:40
fifieldtin the interest of saving our meeting time14:40
fifieldtso, anyone up for taking a block of time and writing session descriptions and emailing moderators?14:41
mihalis68I can do some later today14:41
fifieldtas in block of time == a row in the spreadsheet14:41
serverascodehow will we know who emailed who?14:41
serverascodebut yeah I can put some time in14:41
*** emagana has joined #openstack-operators14:42
serverascodeput "potential moderator" in the etherpad page for the session?14:42
emaganaHello All!14:42
fifieldtserverascode: I'm thinking maybe just putting a name next to the timeslot on the etherpad (e.g. 12:15 - curtis)14:42
mihalis68as I understand this task, take a row, for each session, refer to the suggestions ether pad and move the content to a dedicated new ether pad for that topic for this meetup14:42
emaganaSorry for being that late. I did not know which channel to join14:42
fifieldtno worries emagana14:43
mrhillsmanglad you made it :)14:43
fifieldtwe are discussing the agenda for barcelona14:43
*** simon-AS5591 has quit IRC14:43
fifieldtmihalis68: our tooling sucks :)14:43
serverascodeok I will just keep an eye on the etherpads and follow what others are doing14:43
emaganafifieldt: awesome!14:44
fifieldtI'll work out how to make this work and email everyone14:44
fifieldtsince emagana has come to this meeting to chat with us, I propose we switch to him14:44
mihalis68+114:44
fifieldtFor reference, emagana == Edgar Magana == User Committee14:44
fifieldt#topic Edgar's time in the sun14:44
med_and tweeter par excellence14:44
fifieldtemagana: I understand vw called you here :)14:44
emaganafifieldt: Yes, Indeed.14:45
fifieldtwe have a link "    UC sessions at the summit - Edgar "14:45
fifieldtin the agenda14:45
fifieldtcan you enlighten us?14:45
emaganavw wanted to give you all a very light overview of the UC sessions. Let me get you the link, one sec14:45
med_User Committe? I thought it was Ubuntu Cloud14:45
* med_ had no idea what it was14:46
emaganamed_: :-) I am glad I can talk about it.14:46
emaganaOk!14:47
emagana#link https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16869/user-commitee-session14:47
emaganaThe User Committee (UC) is one of the pillars of the OpenStack Foundation from the organizational point of view.14:47
emaganaThere are three entities that hold the Foundation together and move it forward: the Board of Directors, the Technical Committee (TC), and the User Committee14:48
emaganaYou all, my friends, are members of the UC. The idea behind this committee is to give users and operators of OpenStack clouds the opportunity to express their needs for the future of the platform14:49
VWsorry - emagana - engaged in an all-day meeting14:50
emaganaA lot of members of the Foundation who were focused on operations did not feel recognized by the Foundation. The UC makes sure that does not happen anymore.14:50
VWjust wanted the meetupteam to hear about the UC sessions and such14:50
emaganaVW Thanks for the space, actually.14:51
emaganaSo, I am actually done. I don't want to bore everybody with a lot of bla bla bla..14:51
shintaroare all WGs under UC?14:51
*** hj-hpe has joined #openstack-operators14:51
fifieldtThe board may have working groups, but in general most of them are yes14:52
emaganaVW and fifieldt and the UC wanted to be sure that you are aware of the UC. You are all welcome to attend our session in Barcelona.14:53
emaganaWe will discuss the goals of the UC committee and also the members. We believe we should increase the number of members from 3 to 5 and we want to propose an open election, just the same way it happens with the TC14:53
*** connor__ has joined #openstack-operators14:54
mcunietti"A lot of members of the Foundation who were focus on operations did not feel recognize by the Foundation"14:54
mcuniettiemagana I think the problem is on the USERS not on the OPS14:54
emaganaFor the BCN summit you all should have in your badge a tag: AUC (Active User Contributor)14:54
fifieldtUC does both mcunietti14:54
mcuniettiusers are completely neglected, everybody refers to users=ops14:54
fifieldtnot true mcunietti14:54
fifieldtmy colleague david flanders is working very hard on "users" :)14:55
mcuniettii get it tom, but I think we must clarify who the users really are14:55
emaganamcunietti: We want to have a space for both14:55
fifieldtfeel free to chat with him14:55
emaganamcunietti: is it really needed to make a difference between Operators and Users?14:55
mcuniettiplease put me in contact with anyone working on USERS please :-)14:55
fifieldtflanders@openstack.org14:55
mcuniettiAWS knows exactly who they are14:55
fifieldt^ email him :)14:55
mcuniettigreat, I'll do14:56
fifieldt(4 minute warning)14:56
emaganaI do have a really big cloud running OpenStack. I consider myself an Operator but also a User. I don't even see the difference. This is why the UC is for both.14:56
mcuniettiwhen are we supposed to talk about Ops Mid-Cycle meetup?14:56
mcuniettii do as well. I consider myself an Op14:56
fifieldtit wasn't on the agenda, but we have 4 minutes left if edgar is done :)14:56
fifieldtEdgar?14:56
mcuniettisorry tom, saverio wrote me it was :-/14:57
fifieldtsorry about that :(14:57
*** jmlowe has quit IRC14:57
mcuniettinoprob14:57
mihalis68the call for hosts went out last week but needs more visibility14:57
mihalis68(hosts for next mid-cycle meet up)14:57
emaganamcunietti: I am sorry for taking this time for the UC. We wanted to be sure that you all know about it and reach out to us. The UC could help in moving this forward and help you with any feature request to the TC14:57
shintarothe process for raising a hand to host wasn't so clear.14:57
mcuniettithanks emagana, it was very useful for me14:58
fifieldtshintaro may have a point there14:58
fifieldtThanks Edgar!14:58
shintaromaybe we need new etherpad for this?14:58
emaganaThank you all for your time. You can reach out to me anytime.14:58
fifieldt#topic Mid cycle (2 minutes)14:58
mihalis68+1 for an etherpad14:58
mihalis68it was an overburdened email, I will admit14:58
mcunietti:-)14:58
*** connor__ has quit IRC14:59
fifieldtso maybe a quick reply to the original email with an etherpad where people can make proposals?14:59
fifieldt(1 minute warning)14:59
mihalis68ok I'll do that14:59
fifieldtcheers14:59
mihalis68action me up :)14:59
fifieldt#action mihalis68 to etherpad reply email something something14:59
mihalis68PRECISELY14:59
fifieldtanything else in the last 30 seconds?14:59
shintarothanks14:59
fifieldtotherwise I will be in touch via email about the agenda14:59
fifieldtand start putting placeholders on the official schedule14:59
fifieldtfor BCN, that is14:59
fifieldtSorry this was so drawn out today :)15:00
fifieldtagendas are hard :(15:00
mcunietti:-) thanks everybody15:00
mihalis68indeed15:00
fifieldtyou all did so very well making it15:00
fifieldtbe proud!15:00
fifieldtI will start putting it online and send you links and things15:00
fifieldt#endmeeting15:00
openstackMeeting ended Tue Oct  4 15:00:42 2016 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/ops_meetups_team/2016/ops_meetups_team.2016-10-04-14.00.html15:00
med_thanks all15:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/ops_meetups_team/2016/ops_meetups_team.2016-10-04-14.00.txt15:00
openstackLog:            http://eavesdrop.openstack.org/meetings/ops_meetups_team/2016/ops_meetups_team.2016-10-04-14.00.log.html15:00
shintarothank you!15:00
serverascodethanks!15:00
mcuniettibye thanks15:00
fifieldtsorry mcunietti - back to you via email to give you a proper meeting time :(15:01
*** ducttape_ has quit IRC15:01
emaganaciao ciao all15:02
mihalis68TTFN15:03
*** electrofelix has quit IRC15:07
*** shintaro has quit IRC15:07
*** electrofelix has joined #openstack-operators15:07
*** rcernin has quit IRC15:10
*** shintaro has joined #openstack-operators15:11
*** jsheeren has quit IRC15:14
*** fifieldt has quit IRC15:14
*** docaedo has joined #openstack-operators15:14
*** emagana_ has joined #openstack-operators15:16
*** mriedem has joined #openstack-operators15:16
*** MrDanDan has quit IRC15:16
*** emagana has quit IRC15:16
*** admin0 has quit IRC15:17
*** paramite has joined #openstack-operators15:17
*** emagana has joined #openstack-operators15:18
*** emagana_ has quit IRC15:20
*** admin0 has joined #openstack-operators15:20
*** armax has joined #openstack-operators15:22
openstackgerritMerged openstack/osops-tools-contrib: support flat network and availiability zone and certs  https://review.openstack.org/37911615:24
*** saneax is now known as saneax-_-|AFK15:24
*** admin0 has quit IRC15:24
*** azerou has joined #openstack-operators15:24
*** admin0 has joined #openstack-operators15:24
*** admin0 has quit IRC15:24
*** shintaro has quit IRC15:25
*** dc_mattj has joined #openstack-operators15:25
*** azerou has quit IRC15:26
*** jsecchiero has joined #openstack-operators15:27
*** jsecchiero has quit IRC15:29
*** jsec has joined #openstack-operators15:29
*** VW has quit IRC15:30
*** VW has joined #openstack-operators15:30
*** jsec has quit IRC15:30
*** jsech has joined #openstack-operators15:31
*** jsech has quit IRC15:32
openstackgerritMerged openstack/osops-tools-contrib: Fix typo in heat lamp README  https://review.openstack.org/38087915:32
*** jsech has joined #openstack-operators15:33
*** simon-AS559 has joined #openstack-operators15:45
*** simon-AS5591 has joined #openstack-operators15:46
openstackgerritMerged openstack/osops-tools-monitoring: Remove "encoding: utf-8" from monitoring-for-openstack files  https://review.openstack.org/37764215:47
openstackgerritMerged openstack/osops-tools-monitoring: Improve README file for monitoring-for-openstack  https://review.openstack.org/37763815:47
*** matrohon has quit IRC15:47
*** simon-AS559 has quit IRC15:49
*** armax has quit IRC15:53
*** simon-AS5591 has quit IRC15:57
*** dtrainor has joined #openstack-operators16:14
*** armax has joined #openstack-operators16:17
*** jmlowe has joined #openstack-operators16:22
*** cdelatte has quit IRC16:23
*** openstackgerrit has quit IRC16:26
*** dc_mattj has quit IRC16:27
*** openstackgerrit has joined #openstack-operators16:27
*** openstackgerrit has quit IRC16:28
*** tesseract- has quit IRC16:29
*** openstackgerrit has joined #openstack-operators16:29
*** openstackgerrit has quit IRC16:30
*** openstackgerrit has joined #openstack-operators16:30
*** jmlowe has quit IRC16:33
*** Apoorva has joined #openstack-operators16:50
*** VW has quit IRC16:59
*** VW has joined #openstack-operators16:59
*** derekh has quit IRC17:00
*** jmlowe has joined #openstack-operators17:01
*** jmlowe1 has joined #openstack-operators17:09
*** jmlowe has quit IRC17:10
*** emagana_ has joined #openstack-operators17:12
*** emagana has quit IRC17:16
*** admin0 has joined #openstack-operators17:21
*** mcunietti has quit IRC17:24
*** sudipto_ has quit IRC17:24
*** sudipto has quit IRC17:24
*** jmlowe has joined #openstack-operators17:27
*** jmlowe1 has quit IRC17:28
*** jsech has quit IRC17:30
*** simon-AS559 has joined #openstack-operators17:36
*** simon-AS5591 has joined #openstack-operators17:38
*** pilgrimstack1 has quit IRC17:39
*** jsheeren has joined #openstack-operators17:40
*** piet_ has joined #openstack-operators17:40
*** simon-AS559 has quit IRC17:41
*** jmlowe has quit IRC17:44
*** krobzaur has joined #openstack-operators17:50
*** bjolo_ has joined #openstack-operators17:53
*** VW has quit IRC17:55
*** VW has joined #openstack-operators17:56
*** jmlowe has joined #openstack-operators17:57
*** rcernin has joined #openstack-operators17:58
*** jsech has joined #openstack-operators18:00
*** krobzaur has quit IRC18:02
*** harlowja has quit IRC18:03
*** jsech has quit IRC18:04
*** electrofelix has quit IRC18:06
*** pilgrimstack has joined #openstack-operators18:13
*** ducttape_ has joined #openstack-operators18:13
*** bdeschenes has joined #openstack-operators18:25
*** docaedo has quit IRC18:29
*** dc_mattj has joined #openstack-operators18:29
*** jmlowe has quit IRC18:33
*** admin0 has quit IRC18:35
*** Rockyg has joined #openstack-operators18:43
*** jsech has joined #openstack-operators18:44
*** simon-AS559 has joined #openstack-operators18:46
*** admin0 has joined #openstack-operators18:48
*** simon-AS5591 has quit IRC18:50
*** dc_mattj has quit IRC18:50
*** jmlowe has joined #openstack-operators18:53
*** auggy has joined #openstack-operators18:57
*** ruoyu has joined #openstack-operators18:58
<dansmith> ruoyu: which version of openstack?  18:59
<ruoyu> It is 'Liberty'  18:59
<jlk> ruoyu: the simple answer is that you can control the number of API and admin API workers in your nova.conf files.  18:59
<jlk> if it's purely a problem with running too many.  18:59
<jlk> Liberty I believe defaults to starting up a number of workers equal to the number of "CPUs" on the host.  19:00
<dansmith> jlk: my first thought was that the un-archived database has inflated his worker sizes, regardless of number  19:00
<dansmith> we've had multiple issues with that, because horizon hits simple-tenant-usage by default, which over time inflates them all to the size of everything in your DB, deleted or not  19:00
<ruoyu> Thank you! I will go to check the nova conf file  19:02
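For context, the worker-count settings jlk refers to live in nova.conf; a minimal sketch (the values here are illustrative, not recommendations — when these options are unset, the defaults follow the host's CPU count, as discussed above):

```ini
[DEFAULT]
# Cap the API worker processes instead of spawning one per CPU.
osapi_compute_workers = 4
# Metadata workers run inside nova-api unless split out separately.
metadata_workers = 4
```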
<jlk> oh right, so that's more of a ballooning of the existing processes.  19:02
<dansmith> yeah  19:02
<dansmith> jlk: actually, you're a good person to ask.. you've had problems with the inbuilt db archiving I assume?  19:02
<jlk> .... maybe?  19:03
<jlk> I know we've done some perf testing of larger clouds, and lots of builds, but I don't think the issue of worker processes getting too big has come up, or at least in a way we could pinpoint some performance issues to it  19:04
<dansmith> jlk: well, I mean archiving in general, regardless of the reason  19:04
<jlk> I honestly don't think we've tried.  19:05
<dansmith> jlk: just to be clear, you know I mean "actually deleting things from the DB that are deleted" right?  19:05
<dansmith> wow, okay  19:05
<jlk> yeah, purging of deleted instance data  19:05
<dansmith> not just instance data  19:05
<jlk> all our clouds are tiny, so it's not been a priority to attempt  19:05
<jlk> although we should be doing it prior to major version changes, so that migrations are potentially faster?  19:05
<dansmith> jlk: we don't normally migrate deleted records  19:06
<jlk> okay, bad guess  19:06
<dansmith> jlk: I think you should be doing it for lots of reasons though  19:06
<jlk> probably  19:06
<dansmith> jlk: anyway, I recently got hold of a real dataset and found a bug plaguing a bunch of people that I could never reproduce from a clean state  19:06
<jlk> oh fun!  19:06
<dansmith> jlk: thought maybe I could win karma points with you.. guess not :)  19:07
<jlk> We _do_ purge keystone keys, haven't tried nova stuffs. I should try now somewhere!  19:07
*** racedo has quit IRC19:07
<dansmith> jlk: this was the fix: https://review.openstack.org/#/c/377933/  19:07
<dansmith> backports are up for newton and mitaka  19:07
<jlk> OH YES I think I do recall this  19:08
<dansmith> jlk: so there was a bug at one point where the scheduler was querying for all instances on startup, even deleted ones,  19:08
<dansmith> which meant scheduler startup takes longer and longer each time  19:08
<dansmith> when it hit 40 minutes recently for someone, they decided to hit me with something  19:08
<dansmith> the bug has been fixed upstream since liberty I think, but anyway  19:09
<dansmith> that's an example of a perf degradation other than the horizon thing  19:09
<jlk> awesome!  19:09
<jlk> is it a nova-manage call to purge the data?  19:09
<dansmith> nova-manage db archive_deleted_rows  19:09
<dansmith> I also recently added some niceness around that CLI  19:09
<jlk> where do they get archived to?  19:09
<dansmith> like --do-it-all-baby  19:09
<dansmith> jlk: shadow tables.. shadow_instances, for example  19:10
<dansmith> which have no FKs and you can nuke or dump for audits or whatever  19:10
<jlk> Did there used to be a call to fully purge them?  19:10
<dansmith> not that I know of  19:10
<clayton> yeah, it was always "guess a number to start with, rinse, repeat"  19:11
<ruoyu> jlk: dansmith: Yes, in the nova config file, I found that osapi_compute_workers had been set to 12 instead of the default (the number of CPUs available). I commented out the line to make it use the default. Hope this will fix it!  19:11
<clayton> which always seemed weird to me, because I had absolutely no idea how many times I might have to run it  19:11
<clayton> but then again, it always died with FK constraint errors  19:11
<dansmith> clayton: try with that patch applied  19:11
<jlk> ruoyu: how many CPUs do you have? the default will be to match that, so you may wind up with way more than 12 processes.  19:11
<clayton> we're still on liberty, but we'll try when we get on mitaka  19:12
<dansmith> clayton: and see my patch to run until it's done  19:12
<dansmith> clayton: https://review.openstack.org/#/c/378718/  19:12
<jlk> dansmith: okay. Since we archive database backups for a period of time, I could see us adopting a pattern of purging that data out of the live database to keep it slim.  19:12
<dansmith> clayton: both of those probably just apply to liberty with no fuzz, fwiw  19:12
<clayton> the commit message sounds like it would have to fix at least part of the issue though  19:12
<dansmith> jlk: yeah  19:12
<clayton> dansmith: thanks for working on it, that'll definitely help.  I hope we'll be on mitaka in not too long  19:13
<dansmith> clayton: cool  19:13
<clayton> we've had a lot of other things going on the last six months, so upgrades have suffered  19:13
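The archive loop clayton describes ("guess a number to start with, rinse, repeat") can be sketched as a driver that keeps batching until nothing is left to move — roughly what dansmith's run-until-done patch folds into nova-manage itself. The `archive_batch` callable below is a hypothetical stand-in for one `nova-manage db archive_deleted_rows --max_rows N` invocation, not actual Nova code:

```python
def archive_until_done(archive_batch, max_rows=1000):
    """Repeatedly archive up to max_rows deleted rows per pass,
    stopping once a pass moves nothing. Returns total rows moved."""
    total = 0
    while True:
        moved = archive_batch(max_rows)
        total += moved
        if moved == 0:
            return total

# Simulate a database holding 2500 soft-deleted rows to archive.
state = {"deleted": 2500}

def fake_batch(n):
    """Stand-in for one archive_deleted_rows run: moves up to n rows."""
    moved = min(n, state["deleted"])
    state["deleted"] -= moved
    return moved

print(archive_until_done(fake_batch))  # → 2500
```

The early versions died with FK constraint errors (as clayton notes) when child rows referenced unarchived parents; the fix dansmith links addresses ordering so a loop like this can actually finish.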
<ruoyu> jlk: I have 12 CPUs.....  19:15
<dansmith> ruoyu: how much ram were the processes using?  19:15
<dansmith> ruoyu: if they're using a lot, then you're probably suffering from the problem I describe above  19:16
<dansmith> ruoyu: nova-api processes will never get smaller, so over time they all inflate to the max size  19:16
<dansmith> ruoyu: you can help a little by slowly killing off a worker at a time  19:16
<dansmith> which will be respawned small  19:16
<dansmith> I even wrote a crappy script for that at some point  19:17
<ruoyu> dansmith: I have 25 nova-api processes running! each one consumes around 1.3% of memory.  19:17
<dansmith> ruoyu: I'm guessing you have 12 cores but 24 threads then?  19:17
<dansmith> ruoyu: percentage doesn't help.. what's the actual footprint?  19:17
<clayton> we use the same hardware for control nodes as we do for hypervisors, so nova-api memory usage has never been a big issue for us  19:17
<jlk> dansmith: he may have 12 API and 12 admin-api processes running  19:18
<dansmith> jlk: or meta I guess  19:18
<jlk> wait, I think I'm conflating nova and keystone. Nova doesn't have admin workers  19:19
<clayton> nod, it does have metadata workers, but that runs inside nova-api unless you're using a separate wsgi server  19:19
<dansmith> hacky script: http://pastebin.com/Vx37KNUZ  19:20
<dansmith> kills one worker per run  19:21
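The pastebin script itself isn't reproduced in the log; below is a hypothetical sketch of the approach dansmith describes — pick the fattest worker, SIGTERM it, and let the parent respawn a fresh, small one. The `workers` list of (pid, RSS-in-KiB) pairs is assumed to come from something like `ps -o pid=,rss= --ppid <nova-api parent pid>`; the function names are illustrative:

```python
import os
import signal

def pick_fattest(workers):
    """Return the pid of the worker with the largest resident set.

    workers: list of (pid, rss_kib) tuples.
    """
    pid, _rss = max(workers, key=lambda w: w[1])
    return pid

def kill_one(workers, dry_run=True):
    """Signal one worker so the parent respawns a small replacement."""
    pid = pick_fattest(workers)
    if not dry_run:
        os.kill(pid, signal.SIGTERM)  # only signal on a real run
    return pid

# Dry run against made-up pids/sizes: picks pid 101 (largest RSS).
print(kill_one([(101, 512000), (102, 150000), (103, 480000)]))  # → 101
```

Run periodically (one kill per run, as dansmith says), this bleeds off the inflated workers without the thundering-herd of a full service restart.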
<ruoyu> dansmith: VIRT is 510084, RES is 157000  19:24
<ruoyu> dansmith: for one nova-api process, others are similar. Thank you!  19:25
<ruoyu> dansmith: I have 25 nova-api processes running now  19:27
<dansmith> that's kb I guess, so 500MB per worker?  19:27
<dansmith> that's not that big  19:28
<dansmith> ruoyu: in that case, I would say just run fewer of them if you want a smaller footprint  19:28
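A quick unit check on those numbers (top reports VIRT/RES in KiB): the ~500MB figure matches the virtual size, while the resident footprint per worker is closer to 150MB:

```python
virt_kib, res_kib = 510084, 157000

# Convert KiB to MiB for a readable per-worker footprint.
print(round(virt_kib / 1024))  # virtual size in MiB → 498
print(round(res_kib / 1024))   # resident size in MiB → 153
```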
*** mperazol has quit IRC19:29
<ruoyu> Actually, the last time we met this problem I just restarted the nova service, and it worked then.  19:30
<dansmith> ruoyu: then use something like my script above to kill off one worker at a time and you can avoid the full restart  19:31
<ruoyu> dansmith: But this time the same problem happened again. And I guess I could also restart the service?  19:31
<dansmith> ruoyu: how many deleted instances do you have in your database?  19:31
<dansmith> ruoyu: SELECT COUNT(1) FROM instances WHERE deleted!=0  19:31
*** mriedem has quit IRC19:32
<jlk> dansmith: looks like you win this round of diagnosis!  19:32
<dansmith> jlk: maybe  19:33
<dansmith> jlk: this one issue is responsible for many of my visible scars  19:33
*** cdelatte has joined #openstack-operators19:34
<ruoyu> dansmith: Oh, I remember. There is a historical problem in my horizon dashboard! I have 5 instances. Around 10 days ago I tried to delete two instances in the horizon dashboard, but I couldn't delete them. Now the dashboard keeps showing a 'Deleting' task, but they have never actually been deleted  19:35
<dansmith> well, that's a different issue  19:36
<ruoyu> dansmith: Maybe this is the problem that causes too many nova resources to be working on this?  19:36
*** matrohon has joined #openstack-operators19:39
<dansmith> ruoyu: the failure to delete things from horizon? no  19:40
<dansmith> for that you need to look at logs  19:40
*** emagana has joined #openstack-operators19:41
<ruoyu> dansmith: Okay, thanks! I will try to run your script first.  19:41
*** emagana_ has quit IRC19:43
*** jsheeren has quit IRC19:45
*** ruoyu has quit IRC19:46
*** Apoorva has quit IRC19:47
*** Apoorva has joined #openstack-operators19:48
*** Apoorva has quit IRC19:48
*** jsech has quit IRC19:48
*** Apoorva has joined #openstack-operators19:48
*** bjolo_ has quit IRC19:49
*** bjolo_ has joined #openstack-operators19:49
*** jmlowe has quit IRC19:50
*** mperazol has joined #openstack-operators19:52
*** mperazol has quit IRC19:56
*** paramite has quit IRC19:58
*** bjolo_ has quit IRC20:01
*** paramite has joined #openstack-operators20:06
*** ruoyu has joined #openstack-operators20:06
*** pilgrimstack has quit IRC20:11
*** jmlowe has joined #openstack-operators20:12
*** harlowja has joined #openstack-operators20:15
*** georgem1 has quit IRC20:18
*** Hosam has joined #openstack-operators20:27
*** rarcea has quit IRC20:28
*** Hosam_ has joined #openstack-operators20:30
*** dfflanders has joined #openstack-operators20:32
*** jmlowe has quit IRC20:33
*** Hosam has quit IRC20:34
*** emagana_ has joined #openstack-operators20:40
*** emagana has quit IRC20:42
*** paramite has quit IRC20:47
*** bdeschenes has quit IRC20:50
*** georgem1 has joined #openstack-operators20:52
*** simon-AS559 has quit IRC20:59
*** pilgrimstack has joined #openstack-operators20:59
*** pilgrimstack has quit IRC21:04
*** matrohon has quit IRC21:15
*** jmlowe has joined #openstack-operators21:17
*** ruoyu has quit IRC21:18
*** bdeschenes has joined #openstack-operators21:19
*** r-daneel has joined #openstack-operators21:20
*** dfflanders has quit IRC21:25
*** dfflanders has joined #openstack-operators21:28
*** cdelatte has quit IRC21:28
*** VW has quit IRC21:29
*** VW has joined #openstack-operators21:32
*** georgem1 has quit IRC21:32
*** ducttape_ has quit IRC21:36
*** ducttape_ has joined #openstack-operators21:38
<dmsimard> Is there no quota setting for ephemeral disk size limit for a project?  21:39
*** dbecker has quit IRC21:40
<dmsimard> Cinder has a quota for total max volume size  21:40
<dmsimard> Or at least, I think it'd be Cinder?  21:41
*** jamesdenton has quit IRC21:42
<jlk> Nova has a 'gigabytes' quota  21:51
<dmsimard> jlk: are you positive that's not exposed by cinder?  21:51
<dmsimard> openstack quota show will have a gigabytes value -- nova quota-show won't, and cinder quota-show <tenant_id> will  21:52
<jlk> http://developer.openstack.org/api-ref/compute/?expanded=update-quotas-detail#update-quotas  21:52
<jlk> I'm not sure, I'm looking still  21:52
<dmsimard> yeah, ctrl+f gigabytes: nothing in there, and nothing in http://docs.openstack.org/developer/nova/sample_config.html either  21:53
<jlk> interestingly enough, you may be right that there is no ephemeral disk quota  21:53
<dmsimard> that'd be super surprising  21:53
<dmsimard> I saw https://blueprints.launchpad.net/nova/+spec/root-and-ephemeral-disk-quota but nothing ever merged: https://review.openstack.org/#/q/topic:bp/root-and-ephemeral-disk-quota,n,z  21:53
<jlk> there's an open blueprint, that you found  21:54
<jlk> I know of a number of ways it'll be wrong too  21:54
<dmsimard> the way around this is probably limiting the number of allowed instances along with flavor usage  21:54
<jlk> any boot from volume with a flavor that has a non-0 disk size (in the flavor) will count toward quota  21:54
<dmsimard> but it's sort of awkward  21:54
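dmsimard's workaround amounts to back-of-the-envelope math: with no ephemeral-disk quota, a project's worst-case ephemeral usage is bounded by its instance quota times the largest flavor disk it can launch. The function below is an illustrative sketch of that bound, not any OpenStack API:

```python
def worst_case_ephemeral_gb(instance_quota, flavor_disks_gb):
    """Upper bound on a project's ephemeral disk usage, in GB."""
    return instance_quota * max(flavor_disks_gb)

# e.g. a quota of 10 instances and flavors with 20/40/80 GB disks
print(worst_case_ephemeral_gb(10, [20, 40, 80]))  # → 800
```

The awkwardness dmsimard notes is visible here: tightening the bound means restricting either instance counts or flavor access, both far blunter instruments than a real per-project disk quota.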
*** Rockyg has quit IRC21:55
*** piet_ has quit IRC21:58
*** jmlowe has quit IRC22:00
*** VW has quit IRC22:07
*** ducttape_ has quit IRC22:10
*** ducttape_ has joined #openstack-operators22:14
*** admin0 has quit IRC22:22
*** bdeschenes has quit IRC22:23
*** ducttape_ has quit IRC22:30
*** 17SAAKTPZ has joined #openstack-operators22:32
*** emagana_ has quit IRC22:36
*** mattoliverau has joined #openstack-operators22:43
*** rcernin has quit IRC22:50
*** r-daneel has quit IRC22:52
*** 17SAAKTPZ has quit IRC23:03
*** ducttape_ has joined #openstack-operators23:03
*** Apoorva_ has joined #openstack-operators23:05
*** Apoorva has quit IRC23:08
*** ducttape_ has quit IRC23:12
*** saneax-_-|AFK is now known as saneax23:13
*** ducttape_ has joined #openstack-operators23:19
*** ducttape_ has quit IRC23:33
*** bdeschenes has joined #openstack-operators23:36
*** ducttape_ has joined #openstack-operators23:39
*** ducttape_ has quit IRC23:41
*** ducttape_ has joined #openstack-operators23:41
*** maticue has quit IRC23:42
*** ducttape_ has quit IRC23:53

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!