Tuesday, 2018-10-23

00:11 *** rcernin has joined #openstack-operators
00:11 *** jamesmcarthur has joined #openstack-operators
00:12 *** rcernin_ has quit IRC
00:13 *** rcernin has quit IRC
00:14 *** rcernin has joined #openstack-operators
00:16 *** jamesmcarthur has quit IRC
00:30 *** jamesmcarthur has joined #openstack-operators
00:39 *** aojea has joined #openstack-operators
00:44 *** aojea has quit IRC
01:25 *** markvoelker has joined #openstack-operators
01:30 *** VW_ has quit IRC
01:41 *** jamesmcarthur has quit IRC
02:00 *** jamesmcarthur has joined #openstack-operators
02:05 *** jamesmcarthur has quit IRC
02:18 *** jamesmcarthur has joined #openstack-operators
02:34 *** jamesmcarthur has quit IRC
02:39 *** jamesmcarthur has joined #openstack-operators
02:59 *** jamesmcarthur has quit IRC
04:05 *** mrhillsman is now known as openlab
04:06 *** openlab is now known as mrhillsman
04:14 *** faizy98 has joined #openstack-operators
04:16 *** aojea has joined #openstack-operators
04:16 *** faizy_ has quit IRC
04:18 *** jamesmcarthur has joined #openstack-operators
04:20 *** aojea has quit IRC
04:22 *** jamesmcarthur has quit IRC
04:34 *** faizy98 has quit IRC
05:19 *** simon-AS559 has joined #openstack-operators
05:57 *** faizy98 has joined #openstack-operators
06:03 *** simon-AS559 has quit IRC
06:32 *** simon-AS559 has joined #openstack-operators
06:43 *** slaweq has joined #openstack-operators
06:56 *** pcaruana has joined #openstack-operators
07:06 *** rcernin has quit IRC
07:34 *** jamesmcarthur has joined #openstack-operators
07:37 *** aojea has joined #openstack-operators
07:38 *** jamesmcarthur has quit IRC
08:14 *** pvradu has joined #openstack-operators
08:16 *** saybeano has joined #openstack-operators
08:29 *** derekh has joined #openstack-operators
08:29 *** electrofelix has joined #openstack-operators
08:45 *** priteau has joined #openstack-operators
08:59 *** dtrainor has quit IRC
09:43 *** jamesmcarthur has joined #openstack-operators
09:47 *** jamesmcarthur has quit IRC
10:00 *** pvradu has quit IRC
10:00 *** pvradu has joined #openstack-operators
10:05 *** pvradu_ has joined #openstack-operators
10:09 *** pvradu has quit IRC
10:48 *** dtrainor has joined #openstack-operators
11:45 *** markvoelker has quit IRC
12:30 *** tobberydberg has quit IRC
12:36 *** markvoelker has joined #openstack-operators
12:37 *** markvoelker has quit IRC
12:42 *** aojea_ has joined #openstack-operators
12:53 <mihalis68_> etherpad isn't working for me right now
12:53 <mihalis68_> https://www.irccloud.com/pastebin/thMldQa9/
12:55 *** aojea_ has quit IRC
12:56 *** aojea_ has joined #openstack-operators
12:57 <smcginnis> mihalis68_: I think they just recently updated our etherpad-lite deployment.
12:57 <smcginnis> mihalis68_: A lot of the functionality is client-side JavaScript in your browser, and browsers tend to cache a lot.
12:58 <smcginnis> Try going to the link and pressing command+shift+r to do a full refresh.
12:58 <smcginnis> Sometimes that helps me after etherpad upgrades.
12:58 <mihalis68_> I already restarted Safari and that fixed it, thanks for the tip though!
12:58 <smcginnis> Cool. Maybe it will help after the next upgrade.
12:59 <mihalis68_> I'll see you in an hour, although I don't have any progress to report this week on ops stuff
13:05 *** emccormick has joined #openstack-operators
13:17 *** aojea_ has quit IRC
13:18 *** aojea_ has joined #openstack-operators
13:22 *** aojea_ has quit IRC
13:34 *** mriedem has joined #openstack-operators
13:40 *** emccormick has quit IRC
13:45 *** aojea_ has joined #openstack-operators
13:48 *** aojea_ has quit IRC
13:52 *** aojea_ has joined #openstack-operators
13:55 *** aojea_ has quit IRC
13:56 *** aojea_ has joined #openstack-operators
14:00 <smcginnis> 'bout that time
14:00 *** aojea_ has quit IRC
14:04 <mihalis68_> o/
14:04 <simon-AS559> o/
14:04 <mihalis68_> #startmeeting ops-meetup-team
14:04 <openstack> Meeting started Tue Oct 23 14:04:35 2018 UTC and is due to finish in 60 minutes.  The chair is mihalis68_. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:04 *** openstack changes topic to " (Meeting topic: ops-meetup-team)"
14:04 <openstack> The meeting name has been set to 'ops_meetup_team'
14:05 <mihalis68_> #topic actions
14:05 *** openstack changes topic to "actions (Meeting topic: ops-meetup-team)"
14:05 <smcginnis> o/
14:05 <mihalis68_> I recall we had actions to set up the etherpads for our sessions in Berlin. I haven't done mine
14:06 <mihalis68_> anyone else have anything to report? I'll get mine done today
14:06 <spotz> o/
14:06 <smcginnis> I was out most of last week, so nothing to report from me.
14:07 <mihalis68_> I did notify the list about our resolutions last week
14:07 <mihalis68_> namely: social on the Tuesday in Berlin
14:07 <mihalis68_> and then, longer-term, strong support for two ops meetups next year, roughly in between summits
14:08 <mihalis68_> and as mentioned last week I'm still working on getting together an offer to host in Frankfurt early next year
14:08 <mihalis68_> any other updates re: actions?
14:08 <mrhillsman> o/
14:08 <mihalis68_> hi there!
14:08 <smcginnis> Was there any response on the mailing list on the plan for two separate meetups?
14:09 <spotz> I hadn't seen any
14:11 <mihalis68_> yeah I just looked, no response
14:11 <simon-AS559> "silence is consent"
14:12 <mihalis68_> it's hard to tell if it's that or if there's nobody reading the messages
14:12 <mihalis68_> maybe I'm just paranoid
14:12 <spotz> Well we can always bring it up in Berlin
14:12 <smcginnis> Always the question with those kinds of posts. :)
14:12 <mihalis68_> #topic berlin
14:12 *** openstack changes topic to "berlin (Meeting topic: ops-meetup-team)"
14:12 <spotz> tada :)
14:13 <mihalis68_> in case anyone missed last week's meeting, the sessions I submitted weren't finalized, however they were rescued, hence ops meetups team catch-up, operators documents, and a ceph session are on the schedule
14:13 <mihalis68_> those are emccormick, smcginnis and myself respectively. etherpads to come shortly
14:13 <mihalis68_> the only other thing the meetups team needs to do is arrange the specific operators-related social
14:13 <mihalis68_> emccormick was researching venues
14:15 <mihalis68_> he was here earlier, I guess he's nipped out?
14:15 <smcginnis> I've started an etherpad for the session I'm moderating and added it to the overall Forum list:
14:15 <smcginnis> #link https://wiki.openstack.org/wiki/Forum/Berlin2018
14:15 <mihalis68_> oh nice!
14:15 <mihalis68_> I'll follow along
14:16 <mihalis68_> get mine started and linked today
14:16 *** fresta_ has joined #openstack-operators
14:16 *** fresta_ is now known as fresta
14:17 <mihalis68_> I guess there's no update on the social venue. Anything else we need to talk about for Berlin?
14:17 <mihalis68_> I'm going to the ceph day on the Monday, by the way
14:17 <simon-AS559> Me too! (even got a talk accepted)
14:18 <mihalis68_> oh cool!
14:18 <mihalis68_> share the title here? I'll make sure to attend!
14:18 <smcginnis> Oh good. I can't attend, but it sounds like it will be a great one for you folks to be at.
14:20 <simon-AS559> "Into the cold—Object Storage in SWITCHengines"
14:20 <mihalis68_> I'm sure there will be a Cephalopods Unite event after
14:20 <mihalis68_> Oh your objects are cold? that's nice... :)
14:20 <mihalis68_> we built an object store for large cold storage
14:21 <mihalis68_> everyone turned up with primarily small-object, high-IOPS workloads :|
14:21 <simon-AS559> We're planning on building long-term storage, and are thinking about ways to prevent that from happening
14:22 <simon-AS559> (i.e. people moving from our existing object (and other) storage offerings to cold storage just because it's cheaper, but expecting to use it like before)
14:22 <mihalis68_> then I DEFINITELY need to listen to the talk
14:23 <mihalis68_> ok, I've led us somewhat off-topic
14:23 <mihalis68_> any more regarding Berlin as a whole?
14:24 <mihalis68_> Oh, one thing... several of us joined a group chat for various past events. I find that really handy
14:24 <mihalis68_> I've done Google Hangouts, WhatsApp, Signal...
14:24 <spotz> OSA uses a hangout, WoO uses GroupMe
14:25 <smcginnis> I don't have GroupMe, but everything else works for me.
14:26 <mihalis68_> I never heard of GroupMe before. is it good?
14:26 <mihalis68_> Signal is ok, not great
14:26 <spotz> oh yeah OUI uses a hangout too :)
14:26 <spotz> mihalis68_: It's ok and was easy for the less technical folks
14:26 <mihalis68_> I don't like WhatsApp's owner, but otherwise it was a little better than Signal
14:27 <mihalis68_> I'll look into GroupMe this week then
14:27 <mihalis68_> life is becoming simply rotating between messaging systems
14:27 <mihalis68_> I'm surprised nobody said Slack
14:27 <mihalis68_> or a channel here
14:28 <spotz> trouble with here is I lose stuff when not connected, unless I log into my bouncer and read logs
14:28 <mihalis68_> yeah
14:28 <mihalis68_> ok, moving not-very-swiftly-along... #topic 2019
14:28 <mihalis68_> summary: the community seems to be in favor of the traditional meetups
14:29 <mihalis68_> #1 is aimed to be in Europe and we're hoping to host in Frankfurt
14:29 <mihalis68_> I don't have an update yet, unfortunately
14:29 <mihalis68_> #2 would be "somewhere in NA", that's all we know I think
14:29 <smcginnis> Topic isn't changed - need to have #topic at the start of the line.
14:29 <smcginnis> :)
14:29 <mihalis68_> #topic 2019 meetups
14:29 *** openstack changes topic to "2019 meetups (Meeting topic: ops-meetup-team)"
14:29 <mihalis68_> TIL about #topic
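[Editor's note: the gotcha in the exchange above — per the MeetBot documentation linked in the meeting header, commands are only recognized at the very start of a message — can be illustrated like this:]

```
ok, moving not-very-swiftly-along... #topic 2019    <- ignored: #topic is mid-line
#topic 2019 meetups                                 <- recognized: MeetBot changes the topic
```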
14:30 <smcginnis> I didn't see a response to Eric's ML post either.
14:30 <smcginnis> I think we were hoping to get some potential hosting offers off of that.
14:30 <mihalis68_> I feel like the operators ML doesn't really work these days
14:31 <mihalis68_> or at least I feel that perhaps the upcoming giant ML merger may help
14:31 <smcginnis> Hopefully it will be better after we combine lists.
14:31 <smcginnis> Yep
14:31 <mihalis68_> technically the route to offering to host #2 in 2019 is already open
14:32 <mihalis68_> I don't think it's quite time to start tweeting requests for hosts, but it would be interesting to see if anyone's thinking about it
14:33 <mihalis68_> no comments. I guess there are no orgs itching to host with a representative here today
14:33 <mihalis68_> seems like we don't have much this week, so...
14:34 <mihalis68_> #topic any more business
14:34 *** openstack changes topic to "any more business (Meeting topic: ops-meetup-team)"
14:34 <smcginnis> We can try reviving the thread later. Maybe after Berlin is out of the way.
14:34 <mihalis68_> oh by the way, smcginnis I'm challenging myself to land a change in the operators guide before Berlin
14:34 *** medberry has joined #openstack-operators
14:34 <mihalis68_> it's the med!
14:34 <medberry> hello
14:35 <mihalis68_> hi there. We are a little bit thin this week. not many updates
14:35 <medberry> kk,
14:35 <mihalis68_> I need to make an etherpad for Berlin and so does emccormick
14:35 * medberry read the notes from last week, mostly Berlin prep
14:35 <medberry> kk
14:35 *** aojeagarcia has joined #openstack-operators
14:35 <mihalis68_> and we haven't nailed down the operators social on the Tuesday
14:35 <mihalis68_> I don't think you're joining, right?
14:35 *** aojea has quit IRC
14:35 <medberry> Probably dining with my family, but if they are beat, I might show up
14:36 <mihalis68_> oh you'll be in Berlin, though?
14:36 <medberry> oh, most definitely
14:36 <mihalis68_> excellent
14:36 <medberry> taking the whole fam
14:36 <spotz> He's just standing us up most of the time :)
14:36 <medberry> been planning it for nearly a year.
14:36 <mihalis68_> how are all your ansible demos going? hash offtopic
14:36 <medberry> spotz, so true
14:36 <medberry> some go well, yesterday's was a bit of a pile of dung
14:37 <medberry> two unrelated technical issues.
14:37 <mihalis68_> bleurgh
14:37 <medberry> doing an openstack-ansible + ARA talk next week.
14:37 <medberry> and doing other ansible things (mostly non-OpenStack) on a regular basis.
14:37 <mihalis68_> alright, well sorry to do this when you just got here David, but I think we're done. nobody had much to report and Berlin is a low-overhead event for the meetups team
14:37 <medberry> next week's is the first talk I'll give that features OpenStack.
14:37 <medberry> no worries.
14:38 <medberry> I had an appt at 8 or would have been here earlier.
14:38 <medberry> cheers!
14:38 <mihalis68_> we are doing more and more ansible in our own openstack thing (chef-bcpc)
14:38 <mihalis68_> last call folks, or we can all get back to the clouding
14:38 <spotz> Thanks all!
14:38 <mihalis68_> #endmeeting
14:38 *** openstack changes topic to "Discussion of topics related to operating Openstack infrastructures at scale | Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators | Logs: http://eavesdrop.openstack.org/irclogs/"
14:38 <openstack> Meeting ended Tue Oct 23 14:38:43 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
14:38 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-23-14.04.html
14:38 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-23-14.04.txt
14:38 <smcginnis> Thanks!
14:38 <openstack> Log:            http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-10-23-14.04.log.html
14:38 <medberry> thanks all.
14:38 <mihalis68_> thanks everyone. See you next week
14:39 <mihalis68_> I'm going to go remonstrate with erik
14:39 <mihalis68_> some social secretary he turned out to be :)
14:39 <medberry> I'll arrive in Berlin on the 11th and return to the states on the 20th of November.
14:39 <smcginnis> Hah
14:39 <simon-AS559> Feel free to reach out to me if you need help
14:39 <smcginnis> Looking forward to seeing folks again.
14:40 <simon-AS559> (have lived in Berlin for a while, though nightlife must have changed since then)
14:40 *** VW_ has joined #openstack-operators
14:40 <mihalis68_> I'm travelling Nov 10th-16th
14:40 <mihalis68_> hey VW meeting is already over, not much this week
14:42 <mihalis68_> heh... this time 2 years ago I was travelling to the Barcelona summit!
14:43 <smcginnis> That was a fun time.
14:47 <mihalis68_> right?
14:48 <mihalis68_> my second-worst snafu in travel and accommodation. That time I did ok on the flights, but my hotel was an hour's walk from the summit
14:56 <spotz> We're there the 9th - 18th am. Still need a hotel the 16th-18th, but got one for the other nights
15:11 *** simon-AS559 has quit IRC
15:12 *** jamesmcarthur has joined #openstack-operators
15:13 *** saybeano has quit IRC
15:35 *** emccormick has joined #openstack-operators
16:16 *** gyee has joined #openstack-operators
16:37 *** pvradu_ has quit IRC
16:46 *** jamesmcarthur has quit IRC
17:00 *** derekh has quit IRC
17:04 *** pvradu has joined #openstack-operators
17:09 *** pvradu has quit IRC
17:18 *** aojeagarcia has quit IRC
17:42 *** pvradu has joined #openstack-operators
17:46 *** aojea has joined #openstack-operators
17:53 *** pvradu has quit IRC
17:54 *** VW_ has quit IRC
18:07 *** jamesmcarthur has joined #openstack-operators
18:08 *** jamesmcarthur has quit IRC
18:09 *** jamesmcarthur has joined #openstack-operators
18:10 *** VW_ has joined #openstack-operators
18:18 *** medberry has quit IRC
18:35 *** irclogbot_4 has joined #openstack-operators
18:49 <mriedem> looking for more input from ceph deployments for this nova spec to increase image download time to computes on ceph storage https://review.openstack.org/#/c/572805/6
18:50 <mriedem> it's kind of stalled otherwise
18:52 <smcginnis> mihalis68_: You may be interested in that one. ^
19:11 <emccormick> mriedem Will take a look. I assume you meant decrease image download time? ;)
19:12 <mriedem> heh yeah
19:12 <mriedem> kind of important difference
19:13 <mriedem> s/increase/improve/
19:13 <emccormick> Wow, that's been hanging out there for quite a while huh?
19:14 <smcginnis> Increase speed, decrease time.
19:20 <mriedem> emccormick: well it came up late in rocky
19:21 <mriedem> and then mnaser had some questions about alternative solutions using the compute service image cache
19:22 *** jamesmcarthur has quit IRC
19:23 *** pcaruana has quit IRC
19:23 *** jamesmcarthur has joined #openstack-operators
19:25 <emccormick> There's quite a few comments in the last patch set that need addressing, but mnaser's cache thing confuses me
19:25 <emccormick> why would you have a cache in RBD? At that point you might as well be running your instances entirely in RBD
19:26 <emccormick> maybe I misunderstand his idea
19:27 <emccormick> I agree with the fallback to API part of it, but the rest confuses me
19:27 *** jamesmcarthur has quit IRC
19:28 *** jamesmcarthur has joined #openstack-operators
19:30 <emccormick> mriedem So would you like us to just throw some comments on it to try and set fire to email boxes and get it bubbled up to the top again? I'm happy to +1 the idea in general, but it seems like it was down to implementation minutiae.
19:30 *** jamesmcarthur has quit IRC
19:32 *** jamesmcarthur has joined #openstack-operators
19:34 <mriedem> yeah i'm just looking to see if more operators are interested in what's being proposed and would find value in it
19:34 <mriedem> to help move it forward
19:34 <mriedem> or if maybe others already have alternative solutions
19:42 *** simon-AS559 has joined #openstack-operators
19:45 *** simon-AS5591 has joined #openstack-operators
19:46 *** simon-AS559 has quit IRC
19:58 *** jamesmcarthur has quit IRC
20:02 *** jamesmcarthur has joined #openstack-operators
20:06 *** jamesmcarthur has quit IRC
20:09 *** simon-AS5591 has quit IRC
20:09 *** jamesmcarthur has joined #openstack-operators
20:17 *** medberry has joined #openstack-operators
20:34 <mnaser> mriedem, emccormick: the reason behind the cache is because we have regions that are made out of multiple cells that are connected via a 10G link
20:34 <emccormick> mriedem I have interest in it. I quit using local disk for a while, but I'm going to need to reintroduce it in an AZ at some point in the not-too-distant future
20:34 <mnaser> where glance is located in one of the "cell" deployments only
20:34 <mnaser> so if you have 2 ceph clusters, one inside every "cell" or "az".. you don't have an easy way of making images reusable across
20:34 <mnaser> and then that link gets saturated with image downloads
20:37 <mriedem> emccormick: what's your use case? HPC?
20:37 <mriedem> for local disk i mean
20:39 <emccormick> mriedem yeah basically. I have a project that uses a shitload of cassandra data
20:40 <emccormick> mriedem Rather than having them suck up 3x ceph replicas of 3x cassandra replicas, I'd rather run them on local disk
20:41 <mriedem> how much do you care about boot from volume?
20:41 <mriedem> s/you/this project/
20:41 <emccormick> local disks will be faster for them since they're SSDs and not subject to network latency, and waste less actual space
20:41 <emccormick> also my ceph cluster is all spinning rust
20:42 <emccormick> I don't / they don't
20:42 <mriedem> ok cool, because volume-backed affinity to the same compute host that the guest is running on has been a problem
20:42 <mriedem> unless you explode AZs
20:42 <mriedem> which is a different problem
20:42 <emccormick> bigass root disk -> compute node. The end
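[Editor's note: emccormick's 3x-on-3x point above is simple multiplication — Cassandra's own replication factor stacks with Ceph's pool replication, so each logical byte costs nine stored bytes on Ceph versus three on local disk. A quick sketch, with illustrative values from the discussion:]

```python
# Effective storage cost of layering replicated systems: the factors multiply.
# Illustrative values from the discussion above: Cassandra RF=3 on a Ceph
# replicated pool with size=3, versus the same RF=3 on local (unreplicated) disk.

def effective_copies(*replication_factors):
    """Total physical copies stored per logical byte across stacked layers."""
    total = 1
    for factor in replication_factors:
        total *= factor
    return total

cassandra_rf = 3      # Cassandra keyspace replication factor
ceph_pool_size = 3    # Ceph replicated pool 'size'

print(effective_copies(cassandra_rf, ceph_pool_size))  # 9 copies on Ceph
print(effective_copies(cassandra_rf, 1))               # 3 copies on local disk
```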
20:43 <mriedem> KISS
20:43 <emccormick> Frankly I'd rather do it with Ironic and just give them a whole box, but it's not possible for me right now. Networking problems w/ ironic
20:43 <emccormick> word
20:44 <mriedem> yeah that was my response to our internal product people that wanted volume-backed servers affined to the compute on which the guest was running, "can't we just offer baremetal? don't we already offer baremetal?"
20:45 <emccormick> "It worked in VMWare"
20:47 <emccormick> The other thing I've considered is just nailing a provider network to their tenant and cobblerizing a few boxes for them.
20:47 <emccormick> That feels very uncloudy though
20:48 <emccormick> mnaser I think I just don't understand how glance-cache works cross-cell, but I'll leave that to you big dogs to sort out ;)
20:49 <emccormick> I agree in general it would be nice to have the option of pulling images down directly from RBD when the RBD location is exposed in glance.
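[Editor's note: the direct-from-RBD idea above hinges on glance exposing an image location as an rbd:// URL of the form rbd://<fsid>/<pool>/<image>/<snapshot>; when the fsid matches the Ceph cluster the compute node already uses, the image can be copy-on-write cloned instead of downloaded over HTTP. A minimal illustrative sketch of that decision — the URL layout is real, but the helper names and fsid below are hypothetical, not nova's actual code:]

```python
# Hypothetical sketch of the clone-vs-download decision discussed above.
# Glance can expose an image location as rbd://<fsid>/<pool>/<image>/<snapshot>;
# if the fsid matches the local Ceph cluster, a COW clone avoids the network copy.

from urllib.parse import urlparse

def parse_rbd_location(url):
    """Split an rbd:// image location into (fsid, pool, image, snapshot)."""
    parsed = urlparse(url)
    if parsed.scheme != "rbd":
        return None
    # netloc holds the cluster fsid; path holds /pool/image/snapshot
    parts = [parsed.netloc] + parsed.path.strip("/").split("/")
    if len(parts) != 4:
        return None
    return tuple(parts)

def fetch_strategy(image_location, local_fsid):
    """Return 'clone' for a same-cluster rbd location, else 'download'."""
    loc = parse_rbd_location(image_location)
    if loc and loc[0] == local_fsid:
        return "clone"      # rbd clone: cheap, copy-on-write, stays in-cluster
    return "download"       # fall back to fetching via the glance API

fsid = "d9e176a8-0001-4fd8-9cc2-ab7856c5b155"  # hypothetical cluster fsid
print(fetch_strategy("rbd://" + fsid + "/images/abc123/snap", fsid))  # clone
print(fetch_strategy("http://glance/v2/images/abc123", fsid))         # download
```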
20:50 *** jcmoore has quit IRC
21:01 *** jcmoore has joined #openstack-operators
21:09 *** priteau has quit IRC
21:10 *** simon-AS559 has joined #openstack-operators
21:16 *** simon-AS559 has quit IRC
21:29 *** jamesmcarthur has quit IRC
21:36 *** simon-AS559 has joined #openstack-operators
21:39 *** slaweq has quit IRC
21:39 *** simon-AS559 has quit IRC
21:39 *** rtjure has quit IRC
22:02 *** aojea has quit IRC
22:04 *** slaweq has joined #openstack-operators
22:24 *** rcernin has joined #openstack-operators
22:26 *** mriedem has quit IRC
22:38 *** slaweq has quit IRC
23:02 *** threestrands has joined #openstack-operators
23:11 *** slaweq has joined #openstack-operators
23:45 *** slaweq has quit IRC
23:46 *** gyee has quit IRC

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!