Friday, 2018-06-01

*** jamesmcarthur has quit IRC00:17
*** jamesmcarthur has joined #openstack-operators00:24
*** jamesmcarthur has quit IRC00:29
*** masuberu has quit IRC00:37
*** jamesmcarthur has joined #openstack-operators00:39
*** jamesmcarthur has quit IRC00:46
*** jamesmcarthur has joined #openstack-operators00:53
*** d0ugal__ has quit IRC00:58
*** harlowja has quit IRC00:58
*** jamesmcarthur has quit IRC00:59
*** d0ugal__ has joined #openstack-operators00:59
*** slaweq has joined #openstack-operators01:01
*** vijaykc4 has quit IRC01:04
*** vijaykc4 has joined #openstack-operators01:04
*** vijaykc4 has quit IRC01:05
*** slaweq has quit IRC01:06
*** masber has joined #openstack-operators01:06
*** masuberu has joined #openstack-operators01:18
*** masber has quit IRC01:22
*** jamesmcarthur has joined #openstack-operators01:25
*** jamesmcarthur has quit IRC01:29
*** markvoelker has joined #openstack-operators01:44
*** jamesmcarthur has joined #openstack-operators01:46
*** markvoelker has quit IRC01:49
*** fragatina has quit IRC02:21
*** fragatina has joined #openstack-operators02:24
*** fragatin_ has joined #openstack-operators02:26
*** fragatin_ has quit IRC02:27
*** fragatina has quit IRC02:28
*** klindgren has quit IRC02:31
*** fragatina has joined #openstack-operators02:38
*** fragatina has quit IRC02:42
*** dbecker_ has quit IRC02:48
*** dcreno has quit IRC02:49
*** jamesmcarthur has quit IRC02:51
*** jamesmcarthur has joined #openstack-operators02:55
*** dcreno has joined #openstack-operators02:59
*** dbecker_ has joined #openstack-operators03:01
*** dcreno has quit IRC03:03
*** dcreno has joined #openstack-operators03:11
*** masuberu has quit IRC03:12
*** dcreno has quit IRC03:15
*** fragatina has joined #openstack-operators03:16
*** fragatina has quit IRC03:16
*** fragatina has joined #openstack-operators03:17
*** dcreno has joined #openstack-operators03:22
*** dcreno has quit IRC03:26
*** dcreno has joined #openstack-operators03:28
*** dcreno has quit IRC03:32
*** slaweq has joined #openstack-operators03:38
*** racedo has joined #openstack-operators03:38
*** obre_ has joined #openstack-operators03:40
*** ircuser-1 has quit IRC03:44
*** bradm has quit IRC03:44
*** obre has quit IRC03:44
*** timburke has quit IRC03:44
*** jhebden has quit IRC03:44
*** amrith has quit IRC03:44
*** ctracey has quit IRC03:44
*** Dmitrii-Sh has quit IRC03:44
*** karlamrhein has quit IRC03:44
*** slaweq has quit IRC03:47
*** jamesmcarthur has quit IRC03:48
*** d0ugal__ has quit IRC03:52
*** fragatina has quit IRC03:57
*** jamesmcarthur has joined #openstack-operators04:04
*** racedo has quit IRC04:05
*** jamesmcarthur has quit IRC04:09
*** d0ugal__ has joined #openstack-operators04:14
*** jamesmcarthur has joined #openstack-operators04:16
*** harlowja has joined #openstack-operators04:17
*** masber has joined #openstack-operators04:18
*** harlowja has quit IRC04:20
*** jamesmcarthur has quit IRC04:26
*** jamesmcarthur has joined #openstack-operators04:28
*** jamesmcarthur has quit IRC04:32
*** d0ugal__ has quit IRC04:49
*** AlexeyAbashkin has joined #openstack-operators04:49
*** gyee has quit IRC04:51
*** Alexey_Abashkin has joined #openstack-operators04:52
*** AlexeyAbashkin has quit IRC04:53
*** Alexey_Abashkin is now known as AlexeyAbashkin04:53
*** d0ugal__ has joined #openstack-operators04:57
*** slaweq has joined #openstack-operators05:11
*** masber has quit IRC05:13
*** slaweq has quit IRC05:15
*** dmibrid_ has quit IRC05:17
*** masber has joined #openstack-operators05:18
*** masuberu has joined #openstack-operators05:19
*** AlexeyAbashkin has quit IRC05:22
*** masber has quit IRC05:23
*** markvoelker has joined #openstack-operators05:24
*** simon-AS5591 has joined #openstack-operators05:29
*** simon-AS5592 has joined #openstack-operators05:30
*** slaweq has joined #openstack-operators05:31
*** simon-AS5591 has quit IRC05:33
*** slaweq has quit IRC05:35
*** simon-AS559 has joined #openstack-operators05:38
*** simon-AS5592 has quit IRC05:39
*** fragatina has joined #openstack-operators05:52
*** iranzo has joined #openstack-operators05:55
*** pvradu has joined #openstack-operators06:03
*** pvradu has quit IRC06:18
*** slaweq has joined #openstack-operators06:21
*** slaweq has quit IRC06:26
*** d0ugal__ has quit IRC06:26
*** pcaruana has joined #openstack-operators06:40
*** simon-AS5591 has joined #openstack-operators06:40
*** slaweq has joined #openstack-operators06:41
*** slaweq has quit IRC06:45
*** d0ugal__ has joined #openstack-operators06:48
*** slaweq has joined #openstack-operators06:58
*** slaweq has quit IRC06:59
*** slaweq has joined #openstack-operators07:00
*** damien_r has joined #openstack-operators07:04
*** rcernin has quit IRC07:08
*** pvradu has joined #openstack-operators07:15
*** pvradu has quit IRC07:17
*** tesseract has joined #openstack-operators07:25
*** ircuser-1 has joined #openstack-operators07:47
*** bradm has joined #openstack-operators07:47
*** timburke has joined #openstack-operators07:47
*** jhebden has joined #openstack-operators07:47
*** amrith has joined #openstack-operators07:47
*** ctracey has joined #openstack-operators07:47
*** Dmitrii-Sh has joined #openstack-operators07:47
*** karlamrhein has joined #openstack-operators07:47
*** slaweq has quit IRC07:58
*** slaweq has joined #openstack-operators07:59
*** d0ugal__ has quit IRC08:03
*** AlexeyAbashkin has joined #openstack-operators08:03
*** d0ugal has joined #openstack-operators08:03
*** electrofelix has joined #openstack-operators08:37
*** jamesmcarthur has joined #openstack-operators08:43
*** jamesmcarthur has quit IRC08:48
*** belmoreira has joined #openstack-operators09:01
*** simon-AS559 has joined #openstack-operators09:04
*** simon-AS5591 has quit IRC09:05
*** simon-AS5591 has joined #openstack-operators09:15
*** d0ugal has quit IRC09:33
*** d0ugal has joined #openstack-operators09:52
*** damien_r has quit IRC09:55
*** damien_r has joined #openstack-operators09:56
*** d0ugal has quit IRC10:16
*** d0ugal has joined #openstack-operators10:24
*** simon-AS5591 has quit IRC10:40
*** markvoelker has quit IRC11:05
*** d0ugal has quit IRC11:06
*** khappone has joined #openstack-operators11:11
*** khappone_ has quit IRC11:11
*** markvoelker has joined #openstack-operators11:11
*** d0ugal has joined #openstack-operators11:23
*** dcreno has joined #openstack-operators11:25
*** dcreno has quit IRC11:26
*** dtrainor has quit IRC11:48
*** markvoelker has quit IRC11:48
*** markvoelker_ has joined #openstack-operators11:48
*** dtrainor has joined #openstack-operators11:48
*** vijaykc4 has joined #openstack-operators11:49
*** simon-AS5591 has joined #openstack-operators11:51
*** d0ugal has quit IRC11:53
*** slaweq has quit IRC11:53
*** slaweq has joined #openstack-operators11:54
*** d0ugal has joined #openstack-operators11:58
*** kstev has joined #openstack-operators12:10
*** pvradu has joined #openstack-operators12:13
*** d0ugal has quit IRC12:16
*** pvradu has quit IRC12:22
*** mriedem has joined #openstack-operators12:22
*** markvoelker_ has quit IRC12:23
*** pvradu has joined #openstack-operators12:25
*** klindgren has joined #openstack-operators12:25
*** slaweq has quit IRC12:25
*** markvoelker has joined #openstack-operators12:25
*** slaweq has joined #openstack-operators12:25
*** d0ugal has joined #openstack-operators12:34
*** dcreno has joined #openstack-operators12:46
*** slaweq has quit IRC12:47
*** slaweq has joined #openstack-operators12:47
*** slaweq has quit IRC12:48
*** slaweq has joined #openstack-operators12:49
*** dcreno has quit IRC12:50
*** dcreno has joined #openstack-operators12:50
*** pvradu has quit IRC13:00
*** kstev has quit IRC13:04
*** kstev has joined #openstack-operators13:15
*** pcaruana has quit IRC13:15
*** jamesmcarthur has joined #openstack-operators13:22
*** mriedem is now known as hansmoleman13:25
*** markvoelker_ has joined #openstack-operators13:34
*** markvoelker has quit IRC13:34
*** dansmith is now known as superdan13:35
*** jcarpentier has joined #openstack-operators13:36
*** lbragstad is now known as elbragstad13:37
*** jcarpentier has left #openstack-operators13:41
*** jamesmcarthur has quit IRC13:52
*** VW has joined #openstack-operators13:55
*** VW has quit IRC13:56
*** VW has joined #openstack-operators13:56
*** VW has quit IRC13:57
*** VW has joined #openstack-operators13:57
*** jiteka has joined #openstack-operators13:59
*** markvoelker_ has quit IRC14:00
*** belmorei_ has joined #openstack-operators14:00
*** belmoreira has quit IRC14:02
*** melwitt is now known as jgwentworth14:09
*** jitek4 has joined #openstack-operators14:10
*** pcaruana has joined #openstack-operators14:10
*** jiteka has left #openstack-operators14:22
*** jiteka has joined #openstack-operators14:23
*** admin0 has quit IRC14:24
*** admin0 has joined #openstack-operators14:24
*** markvoelker has joined #openstack-operators14:24
*** d0ugal has quit IRC14:31
*** d0ugal has joined #openstack-operators14:35
*** simon-AS5591 has quit IRC14:36
jitekaGot a question regarding Octavia deployment: who here uses it in production with a CentOS-based amphora image?14:48
jitekaIn our cloud environment we exclusively provide CentOS-based images and I would prefer to avoid mixing in an Ubuntu one just for Octavia14:48
*** d0ugal has quit IRC14:49
*** jamesmcarthur has joined #openstack-operators14:51
*** markvoelker has quit IRC14:52
*** d0ugal has joined #openstack-operators14:56
*** markvoelker has joined #openstack-operators15:03
*** jamesmcarthur has quit IRC15:04
*** gkadam has joined #openstack-operators15:05
*** iranzo has quit IRC15:07
*** fragatina has quit IRC15:11
*** ckonstanski has joined #openstack-operators15:11
*** fragatina has joined #openstack-operators15:12
mrhillsmanjiteka openstack-lbs may be able to provide you any common issues or faqs if no one here responds quickly15:15
mrhillsmanopenstack-lbaas15:15
mrhillsmandang autocorrect15:15
*** gkadam_ has joined #openstack-operators15:20
*** gkadam has quit IRC15:24
*** d0ugal has quit IRC15:34
*** damien_r has quit IRC15:35
*** belmorei_ has quit IRC15:40
*** d0ugal has joined #openstack-operators15:41
*** gkadam__ has joined #openstack-operators15:46
*** d0ugal has quit IRC15:48
*** gkadam_ has quit IRC15:49
*** markvoelker has quit IRC15:51
*** d0ugal has joined #openstack-operators15:51
ckonstanskiHave a keystone db_sync issue. Building a brand-new queens cloud. The following command fails:15:51
ckonstanskisu -s /bin/sh -c "keystone-manage db_sync" keystone15:51
ckonstanskiGetting a pastebin of the stacktrace...15:52
*** gkadam_ has joined #openstack-operators15:52
*** chyka has joined #openstack-operators15:53
ckonstanskihttps://paste.pound-python.org/show/LFpmDcDvHhJh11WxnVY4/15:54
ckonstanskiThere are no tables in the keystone database yet. Fresh install.15:54
ckonstanskiI suspect either collation or python2.7. (Assuming the db_sync is bug-free)15:55
*** gkadam__ has quit IRC15:55
ckonstanskiCollation is set correctly according to https://docs.openstack.org/install-guide/environment-sql-database-ubuntu.html15:57
ckonstanskiBefore I uninstall python2.7, is this a known issue? Or is python3 recommended?15:57
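[Editor's note: the collation settings ckonstanski refers to come from the install guide linked above. As best I recall, that guide's MariaDB config fragment looks like the following; the `bind-address` is the guide's example management IP, not a real address:]

```ini
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```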
*** dcreno has quit IRC15:57
penickMorning pabelanger15:58
pabelangerpenick: morning15:59
pabelangerpenick: give me a moment to refresh coffee and we can start16:00
penickNo worries :)16:00
*** tesseract has quit IRC16:01
pabelangerokay back16:06
penickwb16:06
pabelangermnaser: happen to be around, I know you might be interested in the topic we are about to discuss. cc cloudnull, hogepodge16:06
cloudnullo/16:07
penickHeyoo cloudnull16:07
pabelangerpenick: so, not to put you too much on the spot, do you want to maybe give a little overview of what you are looking to do on your side? personally I only caught a 5min discussion at the summit last friday.  I have my own personal ideas, but figured having you start is a good place16:07
cloudnull++16:08
penickSure, the quick and fast version is: We run a fairly large infrastructure, and want to donate baremetal resources to the infra team. We'll host the machines, maintain them, etc and give y'all access to em16:09
pabelangerwoot! okay, that's basically what I heard too :)16:09
mnaserso i guess someone has to run the cloud itself?16:09
penickyup16:09
pabelangeryah, which I think we have some experience, to various degrees16:10
pabelangereg: vexxhost, infra-cloud, osic-cloud116:10
penickWe were going to run it for you, but we're still wrapping up all our ocata upgrades, and the team is pretty maxed out right now. If I gate on their availability it'll be more delays. I'd rather just get you the resources for the gate, etc16:10
mnaseri can volunteer to help run/deploy/maintain the cloud16:10
*** harlowja has joined #openstack-operators16:11
pabelangerpenick: yes, I think that is a great idea. One idea I always had if we were to rebuild infra-cloud, was move it more to a community based operation team, over openstack-infra.  And I think this is a great path forward here16:11
pabelangerI think for almost everything, we could drive it directly from zuul.o.o and ansible16:12
pabelangerand if everybody agree, maybe just use OSA which both cloudnull and mnaser have experience in16:12
pabelangerwe don't have to decide now, but maybe one of the first things would be to get an inventory list of the hardware, and see which deployment project we're interested in using16:12
pabelangerthat likely leads to a design discussion and so on16:13
penickDo you have a preference on the hardware specs?16:13
mnaserpenick: "POWERRRRRRRRRRR"16:13
cloudnull++ pabelanger I think thats a good place to start.16:13
mnaserbut on that note, the typical vm is 8 vcpus, 8 gb memory, ~80gb storage16:13
pabelangerpenick: i am not sure, is there a choice?16:13
penickThere..can be :)16:14
penickOk, the VM spec helps me, actually16:14
mnaserso if you have a few flavors, we can pick the ideal one to be used16:14
cloudnullif power is an option I'd love that!16:14
pabelangeryah, if we are donating the resources to openstack-infra and nodepool, we should be aiming for https://docs.openstack.org/infra/system-config/contribute-cloud.html as the VM specs16:14
cloudnull:D16:14
mnasercloudnull: https://www.youtube.com/watch?v=ygBP7MtT3Ac ?16:14
pabelangeroh, are we saying there is difference archs for CPUs?16:15
pabelangerI just assumed x8616:15
cloudnullI love topgear16:15
penickIs IPv6 ok for most machines, or do you need v4 and v6?16:15
pabelangerpenick: we'd love ipv6 I think16:15
penickI saw POWERRRR and thought "Clarkson approach, got it."16:15
pabelangerthere would be a need for a few ipv4 from the openstack network, but VMs have no issues doing ipv616:16
penickOk. That's ideal, because public v4 is a limited resource. I'm assuming you'll need some, but if most of the machines can be v6 only that would be ideal16:16
pabelangeragree, and totally think we should aim for that16:16
cloudnullv6 for instances is great, v6 only for the hosts might be a little complicated. if we could have v4 on an rfc1918 or do a dual stack on the hosts that'd be ideal.16:16
pabelangersince we are not in an actually meeting today, I'll start taking some notes at: https://etherpad.openstack.org/p/T7uUOURdAM16:16
pabelangercloudnull: yes, I'm happy to defer to both you and mnaser on that front16:17
pabelangerthe only IPv4 we really need on nodepool nodes, is for AFS mirror16:17
pabelangereverything else can be IPV6 only16:17
penickPerhaps v6 for instances, dual stack rfc1918 for compute nodes/MQ/etc, and dual stack with public IPs for API nodes?16:17
mnaserfor compute nodes and mq, can't we just use an internal network? we don't need them to be exposed to the internet16:18
cloudnull++16:18
mnaserso dual stack for compute nodes and mq is unnecessary, ipv4 is enough, if we can setup some sort of jumphost/vpn16:18
pabelangerwell, if we can avoid a jump host, that makes things easier for zuul16:19
penickAnd touching back on the hardware specs, I can easily do machines with 12 cores, 24gb of ram, single 500g SATA drive. Probably can do more ram to let you use up all that CPU16:19
pabelangerbut maybe we can dive into details a little more once we get a diagram up for stack16:19
pabelangerlet me get stats for infra-cloud hardware to compare16:19
mnaser12 physical cores? or 24 with ht?16:20
penick12 physical, 24 vcpus with hyperthreading16:20
cloudnull6 cores per socket with 2 sockets?16:21
*** ckonstanski has quit IRC16:21
pabelangerhttp://git.openstack.org/cgit/openstack-infra/system-config/tree/doc/source/infra-cloud.rst?id=d05e28dbd5625bc9f6aa959f086d8bb8f8557df516:21
pabelangerthat is break down of old infra-cloud16:21
pabelangerThe vanilla cloud has 48 machines. Each machine has 96G of RAM, 1.8TiB of disk and16:21
pabelanger24 Cores of Intel Xeon X5650 @ 2.67GHz processors.16:21
pabelangerThe chocolate cloud has 100 machines. Each machine has 96G of RAM, 1.8TiB of disk and16:21
pabelanger32 Cores of Intel Xeon E5-2670 0 @ 2.60GHz processors.16:21
pabelangerto compare16:21
penickcloudnull yep16:22
pabelangerpenick: with the specs you listed, how many machines would that be?16:22
cloudnullpenick excellent.16:22
*** ckonstanski has joined #openstack-operators16:23
penick150 or thereabouts. We might be able to do 96gb, though I need to see what's in inventory16:24
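[Editor's note: a rough sanity check on the numbers above. This sketch assumes the "24 Cores" / "32 Cores" in the infra-cloud figures and the 24 hyperthreaded vCPUs quoted for the offered machines are comparable units, which the chat does not confirm:]

```shell
# Aggregate thread counts: ~150 offered machines vs the old infra-cloud totals.
OFFERED=$((150 * 24))              # 3600 threads across the offered machines
VANILLA=$((48 * 24))               # 1152 cores in the vanilla cloud
CHOCOLATE=$((100 * 32))            # 3200 cores in the chocolate cloud
INFRA_CLOUD=$((VANILLA + CHOCOLATE))
echo "offered=$OFFERED infra-cloud=$INFRA_CLOUD"   # prints offered=3600 infra-cloud=4352
```

So the donation is in the same ballpark as the retired infra-cloud, slightly smaller in raw thread count.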
pabelangersure, we can tune more I think once we get into the design phase16:24
pabelangerpenick: any information on networking bandwidth / speeds?16:24
pabelangerand interfaces to the machines16:24
mnasercloudnull: with 24 vcpus and going with a 4:1 oversubscription ratio, puts us at 96 vcpus.  so 96gb of memory could be cool, or we can look at 2:1 to match osic, i will defer to discuss this with cloudnull16:25
penickThat i'm not 100% certain of. I would say 10gb, however these are going into a special network backplane, and I need to confirm 10gb is available there.16:25
pabelangeryah, I'm guessing we'd need to benchmark some of that too16:25
penicksingle wire nic16:25
pabelangerpenick: any ability to do vlans?16:26
cloudnull++ if we could get machines with >=64GB RAM it'd be wonderful.16:26
cloudnull >=96GB (128GB is what we had in the OSIC) would be fantastic.16:26
penickI think >=64 is doable, i'll check but I think it should be ok.16:27
mnaseryep, to know if we can do vlans is important, but we can work around it.. i think16:27
*** fragatina has quit IRC16:27
cloudnull24GB will work too. but MOAR is always better :)16:27
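[Editor's note: the RAM requests above follow from simple packing math. A sketch, assuming the 4:1 CPU oversubscription mnaser mentions and the 8 vCPU / 8 GB / 80 GB nodepool flavor quoted earlier:]

```shell
# Per-host packing of nodepool VMs under the specs discussed in the chat.
HOST_VCPUS=$((12 * 2 * 4))             # 12 physical cores, HT, 4:1 => 96 schedulable vCPUs
HOST_RAM_GB=24                         # as initially offered
HOST_DISK_GB=500
VM_VCPUS=8; VM_RAM_GB=8; VM_DISK_GB=80
BY_CPU=$((HOST_VCPUS / VM_VCPUS))      # 12 VMs by CPU
BY_RAM=$((HOST_RAM_GB / VM_RAM_GB))    # 3 VMs by RAM -- the bottleneck
BY_DISK=$((HOST_DISK_GB / VM_DISK_GB)) # 6 VMs by disk
# The binding constraint is the smallest of the three:
FIT=$BY_CPU
if [ "$BY_RAM" -lt "$FIT" ]; then FIT=$BY_RAM; fi
if [ "$BY_DISK" -lt "$FIT" ]; then FIT=$BY_DISK; fi
echo "VMs per host: $FIT"              # prints VMs per host: 3
```

With the 96GB cloudnull floats, BY_RAM becomes 12 and matches the CPU limit, which is presumably why that figure keeps coming up.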
penickpabelanger: Depends.. Are you asking for access to the switch, or do you mean have these hosts grouped into separate vlans?16:27
*** fragatina has joined #openstack-operators16:28
cloudnullwe can work with a flat network if we have to. vlan would be ideal.16:28
*** simon-AS559 has joined #openstack-operators16:28
pabelangerpenick: yah, mostly curious what options we have on the network side, eg: if we need flat as cloudnull says or vlan into different network16:29
cloudnullI personally dont need switch access, if we could just trunk a set of vlans down across the switches we can use vlan tagged interfaces and provider networks where needed.16:29
pabelangeryah, I only ask because I think we had some vlans for infracloud16:29
pabelangerI'd have to double check16:29
penickAh, for the compute nodes! Yeah vlan is fine. We'll provide a separate set of vlans and subnets for your instances16:29
*** gyee has joined #openstack-operators16:29
penickIdeally control plane/ compute node/ and instances should all be on separate vlans16:30
cloudnull++16:30
pabelanger++16:30
cloudnullwe dont need to repeat cloud 8 :P16:30
pabelangerHa16:30
pabelangerpenick: any information on OOB management for servers? I'm assuming something ironic already supports16:31
penickI don't think we'll be able to give you access to OOB16:31
pabelangerah, okay. So will the servers already be provisioned with an operating system?16:32
pabelangerI wasn't sure if that was going to be exposed where we'd use say bifrost to manage the OS / configuration16:33
*** gkadam__ has joined #openstack-operators16:33
pabelangerwhich we've done in the past again with infracloud16:33
*** simon-AS5591 has joined #openstack-operators16:33
penickThat was the plan. I need to check with the security folks, but perhaps I can give you OOB access. If we have these in dedicated cabinets, with a couple machines set aside as jump hosts and have them dual wired to bridge into the OOB switches. That could work.16:34
pabelanger++ Yah, I believe that would also work on our side16:34
pabelangerthen we could manage the whole stack from the community, if your side is okay with the security16:35
penickOur normal model to donate resources is to handle imaging the hardware on your behalf16:35
penickBut, I can check16:35
pabelangerpenick: understood, we're happy to work around your policies, but also happy to manage it ourself if given the option16:36
pabelangermnaser: cloudnull: I take it since neither of you objected to using OSA, there isn't an issue?16:36
*** gkadam_ has quit IRC16:36
mnaser++ if we can make your life easier and be more self serve, even better, but if not, no worries16:36
pabelangerpenick: do you have any preference to using OSA or other deployment?16:36
*** simon-AS559 has quit IRC16:36
mnasernope, i am supportive of using openstack ansible but if we have a pressing reason not to, then that's fine16:36
penickno preference16:37
*** gkadam_ has joined #openstack-operators16:37
cloudnullI too am supportive of using OSA ;)16:37
pabelangerI think hogepodge suggested we maybe could CD openstack ansible from master, to help provide feedback to community. But would defer to mnaser and cloudnull for best practice there16:37
pabelangergreat!16:37
cloudnullthough if we decided to go with something else I'd still be game to help16:37
*** simon-AS5591 has quit IRC16:37
pabelangerno, I think OSA is a great idea16:38
mnaserCD OSA from master seems cool but i want to make sure that given we're a smaller group running it, we might have surprises16:38
cloudnullwe can CD OSA, we do this at rax for our test clouds.16:38
pabelangerand personally interested in learn more about it :)16:38
*** simon-AS559 has joined #openstack-operators16:38
pabelangermnaser: yah16:38
mnaserbut if cloudnull has tinkered in it and feels that it's relatively stable so that we aren't bothering project teams with their jobs messing up i am also up for it16:38
cloudnullbut ++ mnaser keeping the group tight will be ideal, as Im sure there will be gremlins16:38
pabelangerokay, I can't really think of anything more on the hardware side, am I missing anything obvious?16:39
pabelangerhttps://etherpad.openstack.org/p/T7uUOURdAM16:39
pabelangerare my notes so far16:39
mnaseri guess nothing in terms of filtering that happens that we have to be aware of?16:39
mnaserblocked ports, etc16:39
penickthese'll be pretty much open to the world16:39
mnaserand maybe not that relevant but just to know the rough location of the cloud16:39
cloudnullstorage, are we using local storage on the compute hosts?16:40
penickbut it would be ideal if you gave us a list of ports you'd like open16:40
penicklocal storage16:40
cloudnull++16:40
*** hansmoleman is now known as hans_lunch16:40
penickMy preference (and this might be a security requirement) would be a list of ports by host type, and we'll close the rest off16:40
*** gkadam__ has quit IRC16:40
pabelangerpenick: yah, open to the world is great on our side, especially for VMs. I am sure we could limit it down a bit for the control plane network16:40
mnaserusually vms have everything open, but control plane we can do that16:41
cloudnull++16:41
pabelangersorry, and where did we say this hardware is actually located at?16:42
penickif the vms have everything open, will you be using security groups to close em off?16:42
pabelangernot that it really matters, mostly curious16:42
penickOh, antarctica16:42
pabelangerpenick: our base jobs do setup some basic ports filtering but for the most part jobs don't firewall anything16:42
pabelangerWoah, really? That is awesome16:43
penickjk, gear is located in the US. Can be in the west coast, east coast, or central16:43
penickor spread across all three16:43
pabelanger:D16:43
penickAPAC and EMEA are possibilities too, but easier for us if it's in the US16:43
pabelangerpenick: okay, but all hardware in a single location right, not some in APAC, EMEA and US16:43
penickBy default it would be a single location, or I can have it spread around a bit more if you'd prefer16:44
pabelangerI don't think we have a preference for location, just reliable networking16:44
pabelangerI would guess single location, if that makes it easy on the oath side. But if we did multi-site, maybe we are getting into multiple clouds / regions16:45
pabelangerbut I don't think we need to decide today16:45
penickOk, a single location is easier for us16:46
pabelangersingle it is!16:46
cloudnull++ a single region is easier, however multi-region is doable if we want to do it (maybe future?).16:47
pabelangerI guess the last I had for penick, is what sort of timelines would you expect giving access over to community members. There is still a lot to discuss about design of cloud, but is this something you are ready to move on your side or still a ways out16:48
penickI'm ready to start moving now, but i'll need security approval before we can build anything. So assume it'll take a month or two to get everything in place and toss you the keys16:49
cloudnullsounds good! I'm ready whenever.16:50
pabelangerokay, that seem reasonable to me16:50
* mnaser is ready too16:50
mnaserexcited to work on this16:50
pabelangerso, with that in mind maybe we first get started on the paper trail bits, maybe setup a weekly meeting, decide whether to keep using the #openstack-operators ML and IRC channel, see if anybody else is interested in helping, etc16:51
pabelangerthen we can move to a design etherpad to scope out a simple cloud with OSA, being run from zuul.o.o16:52
logan-i have a cloud to volunteer for this if you all are interested in starting to flesh out the zuul CD on limestone's ci cloud.16:52
penicksounds good to me, as the design comes together i'll be working in parallel to get the infra machinery moving in oath. Once the design is together i'll work on security approvals16:53
cloudnullthis channel works well for me and, I hope, brings some visibility to other operators/deployers16:53
pabelangerpersonally, I think #openstack-operators is a great place to do this, we can get some great community feedback on design, implementation and even running it I think16:53
cloudnullo/ logan- :)16:53
pabelangerlogan-: indeed, something to discuss for sure16:53
logan-o/ all16:53
pabelangerokay, I'll compose a basic plan around https://etherpad.openstack.org/p/T7uUOURdAM and post it to the openstack-operators ML, if everybody want to make sure they are subscribed16:54
pabelangerthen we can decide to pick a date for a weekly meeting and try to make it somewhat official :)16:54
cloudnullsounds good !16:54
*** electrofelix has quit IRC16:55
pabelangerI am also happy for somebody else to chair the meetings too, if they are at all interested :D16:55
pabelangeror even rotate16:55
*** electrofelix has joined #openstack-operators16:55
cloudnulla rotation would work well for me16:55
pabelanger++16:55
cloudnulla lot of this will be on my own time and I cant guarantee availability during the day16:56
cloudnullim sure others are in this same boat16:56
pabelangersame, this will also be personal for me16:56
pabelangerbut happy to help grow community members that are interested in this16:56
*** gkadam__ has joined #openstack-operators16:57
pabelangercloudnull: logan-: mnaser: penick: thanks for coming out today, expect a recap email and discussion on when / where for the next IRC meeting16:57
mnasermerci beaucoup16:57
penickSounds good!16:57
penickthanks, all :)16:57
pabelanger++16:57
cloudnull++16:58
*** gkadam has joined #openstack-operators17:00
*** gkadam_ has quit IRC17:00
*** electrofelix has quit IRC17:01
*** AlexeyAbashkin has quit IRC17:02
*** gkadam__ has quit IRC17:02
*** markvoelker has joined #openstack-operators17:08
*** gkadam has quit IRC17:18
*** harlowja has quit IRC17:22
*** simon-AS559 has quit IRC17:26
*** dcreno has joined #openstack-operators17:54
*** harlowja has joined #openstack-operators18:14
*** jamesmcarthur has joined #openstack-operators18:23
*** jamesmcarthur has quit IRC18:25
*** fragatina has quit IRC18:27
*** ckonstanski has quit IRC18:41
*** vijaykc4 has quit IRC18:41
*** jamesmcarthur has joined #openstack-operators18:45
*** vijaykc4 has joined #openstack-operators18:49
*** jamesmcarthur has quit IRC18:54
*** hans_lunch is now known as mriedem18:59
*** admin0 has quit IRC19:10
*** admin0 has joined #openstack-operators19:10
*** obre_ has quit IRC19:12
*** obre has joined #openstack-operators19:13
*** simon-AS559 has joined #openstack-operators19:19
*** kstev has quit IRC19:24
*** d0ugal has quit IRC19:47
*** kstev has joined #openstack-operators20:01
*** pcaruana has quit IRC20:03
*** d0ugal has joined #openstack-operators20:04
*** kstev1 has joined #openstack-operators20:15
*** kstev has quit IRC20:15
*** fragatina has joined #openstack-operators20:37
*** klindgren_ has joined #openstack-operators20:38
*** klindgren has quit IRC20:41
*** VW_ has joined #openstack-operators20:44
*** VW has quit IRC20:48
*** VW_ has quit IRC20:49
*** dcreno has quit IRC20:58
*** dcreno has joined #openstack-operators21:00
*** dcreno has quit IRC21:05
*** d0ugal has quit IRC21:08
*** d0ugal has joined #openstack-operators21:15
*** kstev1 has quit IRC21:25
*** dcreno has joined #openstack-operators21:25
*** superdan is now known as dansmith21:25
*** dcreno has quit IRC21:29
*** dcreno has joined #openstack-operators21:32
*** dcreno has joined #openstack-operators21:32
*** mriedem has quit IRC21:42
*** dcreno has quit IRC21:49
*** dcreno has joined #openstack-operators21:50
*** slaweq has quit IRC21:52
*** vijaykc4 has quit IRC21:58
*** slaweq has joined #openstack-operators22:10
*** slaweq has quit IRC22:21
*** markvoelker has quit IRC22:27
*** markvoelker has joined #openstack-operators22:28
*** markvoelker has quit IRC22:33
*** slaweq has joined #openstack-operators22:36
*** slaweq has quit IRC22:39
*** vijaykc4 has joined #openstack-operators22:53
*** simon-AS559 has quit IRC22:55
*** dbecker_ has quit IRC23:17
*** harlowja has quit IRC23:18
*** Mantorok has quit IRC23:26
*** chyka has quit IRC23:27
*** Mantorok has joined #openstack-operators23:31
*** jamesmcarthur has joined #openstack-operators23:40
*** jamesmcarthur has quit IRC23:44

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!