Friday, 2018-10-26

00:55 *** diablo_rojo has quit IRC
01:03 *** jamesmcarthur has joined #openstack-tc
01:05 *** jamesmcarthur has quit IRC
01:05 *** jamesmcarthur has joined #openstack-tc
01:07 *** fanzhang has left #openstack-tc
01:35 *** jamesmcarthur has quit IRC
01:49 *** lbragstad has quit IRC
01:49 *** lbragstad has joined #openstack-tc
01:59 *** ricolin_ has joined #openstack-tc
02:05 *** jamesmcarthur has joined #openstack-tc
02:09 *** jamesmcarthur has quit IRC
05:51 *** evrardjp_ has joined #openstack-tc
05:54 *** ianw_ has joined #openstack-tc
05:54 *** dims_ has joined #openstack-tc
05:55 *** dtruong has quit IRC
05:55 *** dims has quit IRC
05:55 *** evrardjp has quit IRC
05:55 *** ianw has quit IRC
05:55 *** ianw_ is now known as ianw
07:00 *** dangtrinhnt has quit IRC
07:35 *** bauzas is now known as bauwser
07:35 *** witek has quit IRC
07:37 *** evrardjp_ is now known as evrardjp
07:46 *** tosky has joined #openstack-tc
08:03 *** jpich has joined #openstack-tc
08:52 *** cdent has joined #openstack-tc
09:16 *** e0ne has joined #openstack-tc
09:18 <ttx> I think if affiliation appears as part of the bio, it makes it look less like company-reserved seating
09:20 <cdent> because there's a mix?
09:21 <cdent> When I go to openstack events I put "OpenStack" and "OpenStack" as my company and job on my badge.
09:29 <ttx> At https://www.openstack.org/foundation/tech-committee/ company and bio are shown, but makes "company" the top info below name
09:52 <openstackgerrit> Thierry Carrez proposed openstack/governance master: Indicate relmgt style for each deliverable  https://review.openstack.org/613268
10:02 *** ricolin_ has quit IRC
10:12 *** jamesmcarthur has joined #openstack-tc
10:16 *** jamesmcarthur has quit IRC
10:48 *** dtantsur|afk is now known as dtantsur
10:51 *** e0ne has quit IRC
10:52 *** e0ne_ has joined #openstack-tc
11:24 *** EmilienM is now known as EvilienM
11:26 *** fanzhang has joined #openstack-tc
11:27 <smcginnis> As much as we want to think everyone here is coming in unbiased by employer obligations (which I do think we do a pretty good job of) there still is someone paying our bills and "suggesting" what we should be doing.
11:28 <smcginnis> To varying degrees, I'm sure.
11:35 <cdent> destroy the corporate hegemons smcginnis
11:36 <smcginnis> :)
11:49 <ttx> On that topic, I found that an interesting translation of the "openstack first, project second, company third" principle: https://twitter.com/masohn/status/1055360677127376896
11:50 <smcginnis> Maybe we should link to that.
11:51 <smcginnis> After correcting "Red Hat" (ocd twitch)
11:51 <cdent> I've always been a fan of that bit of policy
11:51 <ttx> It's always tricky to put in practice, but i suspect having it spelled out in your "code of business conduct" must help
12:26 *** e0ne_ has quit IRC
12:30 *** cdent has quit IRC
12:47 *** jamesmcarthur has joined #openstack-tc
12:56 <fungi> yeah, that's worth pointing the osf account management team at as something they can suggest to prospective new member companies. might also be good fodder for the contributing organization recommendations list the fc sig has been assembling
12:58 *** cdent has joined #openstack-tc
13:09 <TheJulia> It may also be good for them to have and leverage that to help convey context of a good chunk of the community.
13:13 <openstackgerrit> Corey Bryant proposed openstack/governance master: Add optional python3.7 unit test enablement to python3-first  https://review.openstack.org/610708
13:19 *** mriedem has joined #openstack-tc
13:25 *** e0ne has joined #openstack-tc
13:37 <tbarron> zaneb: Any chance you can spend a bit of time in the weekly manila meeting to talk through the Cloud Vision?  It's at 1500 UTC on Thursdays.
13:46 *** jamesmcarthur has quit IRC
14:04 <zaneb> tbarron: added it to my calendar, but ping me here if I forget
14:05 <zaneb> it's the same time as TC office hours so I'll definitely be around
14:05 *** cdent has quit IRC
14:09 <scas> as a not-tc, there's definitely a perception of "i am $INDIVIDUAL, representative of $COMPANY. i also work on OpenStack." -- i've been polled by knowledge-seekers about my contribution behaviors where the perception was just that very thing. they weren't seeking knowledge about me the individual, or me the OpenStack community member, but me the corporate representative to see how my handlers behave
14:10 *** jaypipes is now known as leakypipes
14:17 <tbarron> zaneb: great, I'll put you on next week's agenda.  IMO there are no fit problems between manila and the vision - manila virtualizes back end h/w, enables physical data center mgmt, elastic scaling, etc. but
14:18 <tbarron> zaneb: rather than just +1 myself I want to make sure the manila community can talk this through first
14:18 <zaneb> tbarron: excellent, thank you. looking forward to it :)
14:21 *** jamesmcarthur has joined #openstack-tc
14:25 *** tosky has quit IRC
14:29 *** tosky has joined #openstack-tc
14:29 *** cdent has joined #openstack-tc
14:54 <ttx> moved https://etherpad.openstack.org/p/tc-health-checklist to https://wiki.openstack.org/wiki/OpenStack_health_tracker#Health_check_list
14:54 <ttx> please push further edits there instead of on the etherpad
14:59 <openstackgerrit> Zane Bitter proposed openstack/governance master: Resolution on keeping up with Python 3 releases  https://review.openstack.org/613145
15:00 *** diablo_rojo has joined #openstack-tc
15:00 <fungi> http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-October/000575.html may be of interest for those not following the zuul-discuss ml
15:01 *** dansmith is now known as SteelyDan
15:19 <cdent> fungi: that seems like a good idea
15:21 <fungi> basic idea being we could attempt to more fairly distribute ci resource allocation by project or by project queue group
15:21 <fungi> making it harder for any one project/team to monopolize our available quota at saturation
15:22 <cdent> this will be super fun for nova
15:30 <openstackgerrit> Lance Bragstad proposed openstack/project-team-guide master: Add section about collecting feedback to PTL guide  https://review.openstack.org/613617
15:52 <TheJulia> scas: I think many people struggle... or have not found reference or knowledge of community over project team over company... at the same time that can be a very foreign concept and easy to completely disregard.
15:57 <fungi> cdent: fair queuing solutions there will likely impact tripleo the most, followed by nova and then neutron
15:58 <fungi> cdent: though in probability, it will be more like fair queuing in the gate specifically will impact the tripleo shared change queue the most, followed by the integrated projects shared change queue
15:58 <clarkb> http://grafana.openstack.org/d/rZtIH5Imz/nodepool?panelId=13&fullscreen&orgId=1 gives you a rough idea of the breakdown. centos there is basically just tripleo
15:59 <fungi> cdent: meaning changes in the integrated gate queue will now get an even share of resources even if there are twice as many node requests for tripleo changes
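
(A minimal sketch of the fair-queuing idea fungi describes above, assuming a simplified round-robin model. The queue names and request counts are invented for illustration; this is not Zuul's actual scheduler code.)

    from collections import deque


    def fair_order(requests_by_queue):
        """Interleave pending node requests one queue at a time (round robin)."""
        queues = {name: deque(reqs) for name, reqs in requests_by_queue.items()}
        ordered = []
        while any(queues.values()):
            for name, pending in queues.items():
                if pending:
                    ordered.append((name, pending.popleft()))
        return ordered


    # Hypothetical pending requests: one queue has many more than the others,
    # but fair ordering prevents it from monopolizing the quota.
    pending = {
        "tripleo": ["tripleo-req-%d" % i for i in range(6)],
        "integrated": ["integrated-req-%d" % i for i in range(3)],
        "zuul": ["zuul-req-0"],
    }

    for queue, request in fair_order(pending):
        print(queue, request)
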
15:59 <clarkb> we also deployed an update to zuul yesterday that will track per job and project resource usage (in our logs) that if I have time today I'd like to start processing to see if we can produce any interesting info
16:00 <cdent> that will be interesting
16:00 <TheJulia> wow, that is an interesting raph
16:00 <TheJulia> graph
16:01 <fungi> so basically the yellow line is node requests for tripleo changes and then the pink line is node requests for all the projects running jobs on ubuntu-xenial?
16:01 <clarkb> fungi: that is in use accounting so not just requests, but cpu time
16:02 <fungi> ahh, right. because proportional consumption would be number of nodes multiplied by the amount of time they're in use
16:02 <clarkb> the nice thing about that graph is it doesn't have to fudge the multinode math, the downside is it doesn't directly tie usage to specific jobs or projects (which we should now be able to do via other methods)
16:03 <fungi> so the fact that lots of tripleo changes run multiple jobs on multi-node centos-7 deployments and use them for 3.6 hours is what's really pumping up that number
16:03 <clarkb> fungi: yes
16:03 <fungi> er, 3.5 hours
16:03 <clarkb> also lots of gate resets
16:03 <clarkb> it's like the perfect storm of all the bad things
16:03 <fungi> got it
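
(A rough sketch of the proportional-consumption accounting discussed above — number of nodes multiplied by the time they are in use, summed per project. The job records below are invented placeholders, not real usage data.)

    from collections import defaultdict

    # Hypothetical job records: (project, nodes_in_job, hours_in_use).
    job_records = [
        ("tripleo", 3, 3.5),
        ("tripleo", 3, 3.5),
        ("neutron", 2, 1.0),
        ("nova", 1, 1.2),
    ]

    node_hours = defaultdict(float)
    for project, nodes, hours in job_records:
        node_hours[project] += nodes * hours  # consumption = nodes x in-use time

    total = sum(node_hours.values())
    for project, used in sorted(node_hours.items(), key=lambda kv: -kv[1]):
        print("%-8s %5.1f node-hours (%4.1f%% of total)" % (project, used, 100 * used / total))
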
16:04 * fungi is glad they're working hard to remedy the situation with faster jobs, fewer nodes in those jobs and fixing the bugs causing their tests to fail
16:04 <clarkb> the graph will also be useful to tell how well we've transitioned forward to bionic
16:04 <clarkb> right now it's about a 1:10 ratio bionic:xenial and I expect that to flip
16:07 *** dtantsur is now known as dtantsur|afk
16:07 *** dtruong has joined #openstack-tc
16:10 *** e0ne has quit IRC
16:31 *** jpich has quit IRC
16:34 *** mriedem is now known as mriedem_away
16:36 <zaneb> what is still running on trusty? stable branches?
16:38 <smcginnis> I would expect that to be the only thing.
16:46 <fungi> infra still runs some tests on trusty too
16:46 <fungi> as we have some services (a dwindling few thankfully) running on trusty still
16:47 <fungi> we're eager to get the last of those dealt with before trusty reaches end of support from ubuntu
17:01 *** bauwser is now known as bauzas
17:16 *** jamesmcarthur has quit IRC
17:37 *** lbragstad is now known as elbragstad
18:01 *** jamesmcarthur has joined #openstack-tc
18:03 *** cdent has quit IRC
18:28 *** mriedem_away is now known as mriedem
18:35 *** e0ne has joined #openstack-tc
18:48 *** jamesmcarthur has quit IRC
19:14 *** e0ne has quit IRC
19:19 *** e0ne has joined #openstack-tc
19:23 *** e0ne has quit IRC
19:38 <clarkb> from earlier discussion http://paste.openstack.org/show/733154/ produced by https://review.openstack.org/613674
19:46 *** jamesmcarthur has joined #openstack-tc
19:47 <clarkb> the breakdown is tripleo: ~50% of all cpu time, neutron: ~14% and nova ~5%
19:48 <notmyname> wow
19:50 <clarkb> unfortunately small sample due to only recently having logged this data, but I think it is likely representative
19:58 <smcginnis> Great that you're able to get that data now.
19:59 <clarkb> kolla is ~3% and openstack-ansible ~4% to put tripleo on a scale with some other deployment projects
20:01 <notmyname> clarkb: FWIW, https://review.openstack.org/#/q/p:openstack/tripleo-heat-templates shows a *lot* of failures, and https://review.openstack.org/#/c/585882/ has a non-voting timeout after 4 hours. maybe these are some low-hanging fruit to look at
20:02 <clarkb> looking at the new foundation level projects: airship is .05%, zuul is .1%, starlingx is .05%
20:03 <clarkb> notmyname: oh there definitely is. One of the common pushbacks is "if you can't give us real numbers then how do you know we are actually doing bad"
20:03 <clarkb> notmyname: we had real numbers before zuulv3 and lost them in the transition and now we have them again
20:03 <dtroyer> I'm trying to get those numbers up!  That low means we're not testing enough :)
20:03 <clarkb> the other thing that comes up a lot is concern that all these new projects are going to use all our resources
20:03 <notmyname> clarkb: normalizing the cpu time to the number of patches may be a good way to compare. maybe the high cpu usage is just because of a lot of activity
20:03 <clarkb> which looking at it today is just false
20:04 <clarkb> notmyname: ya I think there is a correlation there, but with tripleo in particular the other factors that come in are 3.5-hour jobs that run on multiple nodes and fail a lot, causing gate resets
20:04 <notmyname> clarkb: lol "no worries! the existing projects take 4 orders of magnitude more resources than the new ones!"
20:05 <clarkb> anyways we now have fairly definitive data (that should be looked at longer term, that is the one caution, it's a 13 hour window) that shows airship and zuul aren't nova's problem with merging code
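
(A tiny illustration of the per-patch normalization notmyname suggests; every number below is an invented placeholder, not a measurement.)

    # Normalize node-hour consumption by the number of patches tested, so a
    # project isn't flagged merely for having more changes in flight.
    # All figures are invented placeholders.
    usage_node_hours = {"tripleo": 4500.0, "neutron": 1260.0, "nova": 450.0}
    patches_tested = {"tripleo": 300, "neutron": 420, "nova": 250}

    for project, used in usage_node_hours.items():
        per_patch = used / patches_tested[project]
        print("%-8s %5.1f node-hours per patch" % (project, per_patch))
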
20:05 <notmyname> that's really cool. but I'm a little sad you didn't use asciigraph ;-)
20:06 <clarkb> also there are 376.8 test days captured in that 13 hour window
20:07 <clarkb> if you want to completely misinterpret the data we do more than a year of testing every half day
20:07 <clarkb> :)
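
(A quick back-of-the-envelope check of the figures quoted above, taking the 376.8 test-days and the 13-hour window at face value.)

    # Sanity-check the quoted numbers: 376.8 node-days of test time accumulated
    # in a 13-hour wall-clock window.
    test_days = 376.8
    window_hours = 13.0

    avg_nodes_in_use = test_days * 24.0 / window_hours  # roughly 696 nodes busy on average
    print("average nodes in use: %.0f" % avg_nodes_in_use)
    print("test-days per ~half-day window: %.1f (vs. 365 days in a year)" % test_days)
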
fungi"zuul tests more than a year's worth of patches before you even get up in the morning"20:13
notmyname"if current trends continue, openstack ci will consume all of the earth's available energy output sometime next tuesday"20:15
*** jamesmcarthur has quit IRC20:29
*** openstackstatus has quit IRC20:42
*** openstack has joined #openstack-tc20:44
*** ChanServ sets mode: +o openstack20:44
*** jamesmcarthur has joined #openstack-tc21:13
fungithat's rather optimistic. my projections are all saying late sunday21:15
*** jamesmcarthur has quit IRC21:24
TheJuliaThat is rather disturbing...  Could we convert it to a hybrid of wind and solar power? I wonder if maybe we could build some sort of CI data center into the San ?Junicto? mountain ranges, and use the wind near palm springs to power it. I'll even happily drive to that data-center and push buttons.21:26
fungii miss pushing buttons21:45
*** mriedem has quit IRC21:52
*** diablo_rojo has quit IRC23:40

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!