Wednesday, 2016-10-19

*** ducttape_ has quit IRC00:04
*** ducttape_ has joined #openstack-operators00:25
*** cartik has joined #openstack-operators00:28
*** cartik has quit IRC00:33
*** ducttape_ has quit IRC00:54
*** blair has joined #openstack-operators01:28
*** sudipto_ has joined #openstack-operators01:38
*** sudipto has joined #openstack-operators01:38
*** markvoelker has joined #openstack-operators01:43
*** sticker has joined #openstack-operators01:46
*** markvoelker has quit IRC01:49
*** ducttape_ has joined #openstack-operators01:55
*** sudipto_ has quit IRC01:59
*** sudipto has quit IRC01:59
*** ducttape_ has quit IRC02:00
*** sudipto_ has joined #openstack-operators02:12
*** sudipto has joined #openstack-operators02:12
*** Apoorva_ has joined #openstack-operators02:15
*** chlong has joined #openstack-operators02:16
*** Apoorva has quit IRC02:18
*** hj-hpe has joined #openstack-operators02:19
*** Apoorva_ has quit IRC02:19
*** sudipto_ has quit IRC02:21
*** sudipto has quit IRC02:21
*** ducttape_ has joined #openstack-operators02:27
*** VW has joined #openstack-operators02:45
*** armax has quit IRC02:58
*** ducttape_ has quit IRC03:05
*** VW has quit IRC03:18
*** ducttape_ has joined #openstack-operators03:20
*** ducttape_ has quit IRC03:25
openstackgerritXav Paice proposed openstack/osops-tools-contrib: Add livemigration helper for hypervisor maint  https://review.openstack.org/38830303:33
*** georgem1 has quit IRC03:46
*** armax has joined #openstack-operators04:14
*** ducttape_ has joined #openstack-operators04:21
*** ducttape_ has quit IRC04:26
*** cartik has joined #openstack-operators04:34
*** ekoorjr has quit IRC05:05
*** raginbaj- has joined #openstack-operators05:11
*** VW has joined #openstack-operators05:19
*** VW has quit IRC05:23
*** ducttape_ has joined #openstack-operators05:51
*** saneax-_-|AFK is now known as saneax05:55
*** jsech has joined #openstack-operators05:55
*** ducttape_ has quit IRC05:56
*** rcernin has joined #openstack-operators06:11
*** clayton has quit IRC06:12
*** markvoelker has joined #openstack-operators06:14
*** markvoelker has quit IRC06:19
*** belmoreira has joined #openstack-operators06:26
*** jsech has quit IRC06:27
*** markvoelker has joined #openstack-operators06:33
*** markvoelker has quit IRC06:38
*** markvoelker has joined #openstack-operators06:38
*** jsech has joined #openstack-operators06:47
*** Hosam_ has joined #openstack-operators06:47
*** clayton has joined #openstack-operators06:56
*** armax has quit IRC07:00
*** Hosam_ has quit IRC07:00
*** Hosam_ has joined #openstack-operators07:02
*** Hosam_ has quit IRC07:04
*** Hosam__ has joined #openstack-operators07:04
*** tesseract has joined #openstack-operators07:08
*** tesseract is now known as Guest9021107:09
*** matrohon has joined #openstack-operators07:12
*** rarcea has joined #openstack-operators07:22
*** clayton has quit IRC07:41
*** markvoelker has quit IRC07:41
*** clayton has joined #openstack-operators07:45
*** admin0 has joined #openstack-operators07:53
*** racedo has joined #openstack-operators07:57
*** markvoelker has joined #openstack-operators07:58
*** rcernin has quit IRC08:16
*** rcernin has joined #openstack-operators08:16
*** rcernin has quit IRC08:18
*** jsech has quit IRC08:18
*** rcernin has joined #openstack-operators08:19
*** rcernin has quit IRC08:19
*** rcernin has joined #openstack-operators08:19
*** jsech has joined #openstack-operators08:19
*** admin0 has quit IRC08:31
*** derekh has joined #openstack-operators08:39
*** admin0 has joined #openstack-operators08:43
*** hexlibris has quit IRC08:48
*** hexlibris has joined #openstack-operators08:49
*** ducttape_ has joined #openstack-operators08:52
*** christx2 has joined #openstack-operators08:57
*** ducttape_ has quit IRC08:57
*** Hosam__ has quit IRC09:16
*** kstev1 has joined #openstack-operators09:26
*** kstev has quit IRC09:26
*** markvoelker has quit IRC09:36
*** mestery has quit IRC09:44
*** mestery has joined #openstack-operators09:50
*** TonyXu has quit IRC09:53
*** TonyXu has joined #openstack-operators10:05
*** admin0 has quit IRC10:10
*** admin0_ has joined #openstack-operators10:10
*** electrofelix has joined #openstack-operators10:20
*** ducttape_ has joined #openstack-operators10:23
*** ducttape_ has quit IRC10:28
*** MrDan has joined #openstack-operators10:34
openstackgerritDaniel Mellado proposed openstack/osops-tools-contrib: Allow usage of Fedora24  https://review.openstack.org/38654711:10
*** georgem1 has joined #openstack-operators11:18
*** georgem1 has quit IRC11:18
*** markvoelker has joined #openstack-operators11:21
*** Hosam has joined #openstack-operators11:22
*** cdelatte has joined #openstack-operators11:23
*** Hosam has quit IRC11:24
*** Hosam_ has joined #openstack-operators11:24
openstackgerritDaniela Ebert proposed openstack/osops-tools-contrib: Add Open Telekom Cloud config example  https://review.openstack.org/38862311:25
*** Hosam_ has quit IRC11:27
*** Hosam has joined #openstack-operators11:27
*** saneax is now known as saneax-_-|AFK11:48
*** christx2 has quit IRC11:49
*** christx2 has joined #openstack-operators11:51
*** ducttape_ has joined #openstack-operators11:53
*** ducttape_ has quit IRC11:57
*** makowals_ has joined #openstack-operators11:58
*** ducttape_ has joined #openstack-operators11:58
*** makowals has quit IRC11:59
*** belmorei_ has joined #openstack-operators12:00
*** belmoreira has quit IRC12:01
*** sudipto has joined #openstack-operators12:11
*** sudipto_ has joined #openstack-operators12:11
*** maticue has joined #openstack-operators12:11
*** VW has joined #openstack-operators12:19
*** VW has quit IRC12:23
*** mriedem has joined #openstack-operators12:26
*** georgem1 has joined #openstack-operators12:27
*** ducttape_ has quit IRC12:28
*** cartik has quit IRC12:29
*** mnaser has quit IRC12:40
*** bjolo_ has joined #openstack-operators12:40
*** mnaser has joined #openstack-operators12:43
*** bjolo__ has quit IRC12:43
*** belmorei_ has quit IRC12:53
*** pilgrimstack has joined #openstack-operators12:56
*** belmoreira has joined #openstack-operators12:57
*** sudipto_ has quit IRC13:01
*** sudipto has quit IRC13:01
*** makowals_ has quit IRC13:03
*** ducttape_ has joined #openstack-operators13:04
*** makowals has joined #openstack-operators13:07
*** ducttape_ has quit IRC13:11
*** christx2 has quit IRC13:13
*** Hosam has quit IRC13:25
*** esker has joined #openstack-operators13:34
*** ducttape_ has joined #openstack-operators13:37
*** admin0_ has quit IRC13:38
*** admin0 has joined #openstack-operators13:38
*** rarcea has quit IRC13:56
*** belmoreira has quit IRC13:59
*** esker has quit IRC13:59
*** belmoreira has joined #openstack-operators14:05
*** vinsh has quit IRC14:07
*** pilgrimstack has quit IRC14:10
*** pilgrimstack has joined #openstack-operators14:10
*** belmoreira has quit IRC14:13
*** pilgrimstack has quit IRC14:18
*** markvoelker has quit IRC14:33
*** saneax-_-|AFK is now known as saneax14:36
*** vinsh has joined #openstack-operators14:43
openstackgerritJames Page proposed openstack/osops-tools-contrib: ansible: support full offline use for lampstack  https://review.openstack.org/38800614:51
*** vinsh has quit IRC14:55
*** vinsh_ has joined #openstack-operators14:55
*** vinsh has joined #openstack-operators14:56
*** _ducttape_ has joined #openstack-operators14:58
*** marst has joined #openstack-operators14:59
*** vinsh_ has quit IRC15:00
*** ducttape_ has quit IRC15:02
*** VW has joined #openstack-operators15:02
*** armax has joined #openstack-operators15:03
pjm6Hi there15:04
pjm6anyone here have experience with OpenStack Nova shared storage via iSCSI through a storage array (for instance a Dell EqualLogic)?15:05
*** Guest90211 has quit IRC15:13
*** pcaruana has quit IRC15:14
*** rcernin has quit IRC15:16
*** _ducttape_ has quit IRC15:21
*** ducttape_ has joined #openstack-operators15:21
klindgrenpjm6, I haven't; however, are you attempting to map an iSCSI volume per VM, or trying to do a single large FS shared between a bunch of compute nodes?15:25
klindgren(ala vmfs)15:26
pjm6yes15:29
pjm6right now I'm doing tests, using an iSCSI connection to my EqualLogic storage15:29
pjm6and using OCFS215:29
pjm6in this case it's the second one, a large FS shared between several compute nodes (for testing purposes I'm using 2)15:30
pjm6in this case, the compute node has direct access to the storage15:33
pjm6i was thinking of using GlusterFS instead of OCFS2, but that would add more delay, as I'd have to go through a storage node instead of being directly connected to the storage15:36
klindgrenso someone else recently asked about this.  IIRC there are bugs around storing multiple compute nodes' worth of VMs on the same storage - around golden image removal, assuming raw-backed qcow2 images.15:39
klindgrenAlso, in my personal experience, clustered filesystems have proven to be less than durable.15:39
pjm6yes, at least it seems it's not much used15:40
pjm6but in order to use live migration15:40
pjm6and in case a compute node dies, how could we deal with recovery?15:40
klindgrenhttps://bugs.launchpad.net/nova/+bug/162034115:40
openstackLaunchpad bug 1620341 in OpenStack Compute (nova) "Removing unused base images removes backing files of active instances" [Undecided,Incomplete]15:40
klindgrenmost people are using ceph, which is integrated with KVM so that all the KVM nodes have direct access to storage15:41
klindgrenbut is not a clustered filesystem15:41
dmsimardtechnically cephfs is :(15:42
klindgrenpeople don't use cephfs.15:42
dmsimardbut they removed the "not production ready" label :(15:43
klindgrenalso people get away with shared-nothing live migrations.  You can also look at doing cinder + boot from volume15:43
klindgrenso each vm is an iscsi volume15:44
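The cinder boot-from-volume approach klindgren describes can be sketched with the standard openstack CLI; this is only an illustration of the idea, and the names here (app-image, m1.small, vm1) are placeholder assumptions, not from the discussion:

```console
# Sketch: boot each VM from its own cinder volume, so every instance
# disk is an individual iSCSI LUN served by the EqualLogic backend.
$ openstack volume create --image app-image --size 20 vm1-root
$ openstack server create --flavor m1.small --volume vm1-root vm1
```

This sidesteps the shared-filesystem problem entirely: live migration only needs the target host to attach the same iSCSI volume, not a cluster-coordinated FS.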
pjm6so klindgren, using this kind of arch is a bad choice, right?15:44
dmsimardI'm by no means saying /var/lib/nova should be residing on cephfs btw, was reacting to the "clustered filesystem" note :)15:44
pjm6Yes I had read that15:44
klindgrenmy experience is that this is the type of arch people use for VMware.15:45
pjm6klindgren, in that way I'm able to do live migration, right?15:45
pjm6XenServer have something similar15:45
pjm6with a pool of servers, they all have iSCSI sync15:45
klindgrenthat is true, but they have deep integration with the filesystem or pooling15:46
pjm6yup you're right15:46
klindgrenthat I haven't seen with libvirt/KVM15:46
pjm6so klindgren, what best approach do you think I could take, having an EqualLogic storage? (that only supports iSCSI connections)15:47
klindgrenif using xenserver + openstack that's fine.  I was assuming that you are using kvm.15:47
pjm6klindgren, yes i'm using KVM15:48
pjm6because the xenserver integration doesn't seem to be complete with openstack15:48
pjm6(at least when I test it)15:48
klindgrenI don't actually know the status of that.  I know the largest public cloud runs on xenserver (rackspace)15:49
klindgrenbut unsure if they are running on stock drivers or custom15:49
klindgrenI don't really have a great recommendation for you.  We specifically made choices to run our cloud in a totally opposite direction: VMs on local storage.  If a compute node dies, your VM is gone (this has only happened twice in 3 years).  Deal with failure in the application.15:51
klindgrenWe expose failure domains to end users, and scope work to happen within those failure domains.  IE we will be patching all of AZ1.  We ask our end users to spread their application across multiple az's.15:52
klindgrenI was just made wary by my past experience with cluster filesystems (and all the external locking/quorum/stonith stuff) and the extra level of complexity that caused with very little actual gain.15:54
klindgrenThe only people I know of who are doing VMs on "shared storage" without issue are using ceph + kvm.15:55
klindgrenusing the ceph block storage stuff (not cephfs as dmsimard pointed out)15:55
klindgrenI would hang around, though, as other people might chime in.  I had a thought about possibly using lvm-based storage on iscsi.  That would remove the need for a clustered filesystem.16:00
klindgrenBut I don't have any experience with that16:00
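klindgren's LVM-on-iSCSI idea roughly corresponds to nova's LVM image backend. A minimal sketch of the compute-node config, assuming a volume group (the name nova_vg is an assumption) has been created on the iSCSI LUN:

```ini
# /etc/nova/nova.conf on the compute node -- a sketch, not a tested setup
[libvirt]
# store instance disks as LVM logical volumes instead of qcow2 files
images_type = lvm
# volume group backed by the iSCSI LUN; nova_vg is a placeholder name
images_volume_group = nova_vg
```

Note this backend is per-host and does not itself coordinate concurrent access to a shared LUN across hosts, which is what the CLVM question below touches on.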
*** shamail has joined #openstack-operators16:01
*** admin0 has quit IRC16:07
*** marst has quit IRC16:08
*** pilgrimstack has joined #openstack-operators16:09
*** Rockyg has joined #openstack-operators16:11
*** marst has joined #openstack-operators16:17
*** matrohon has quit IRC16:19
pjm6hmm I see16:19
pjm6but can LVM deal well with concurrent access?16:20
pjm6so if a node dies, the app is still running in another AZ, right?16:20
*** shamail has quit IRC16:22
klindgrenI *think* lvm can - though I might be thinking of clvm.16:28
pjm6thanks klindgren, I will investigate this further16:30
pjm6but there is not much info about this topic16:30
klindgrenRight - with the exposed fault domains, instead of spending $$ on making the infra run VMs at five 9's, we ask the apps - which typically deploy in a 3 web server, 3 app server, 3 db server config - to just deploy the 3 web/app servers across different AZ's.  We had been asking people to still use baremetal for stateful data.16:30
klindgrendifferent az's in our case also guarantee people different server chassis, different cab's, different top of rack switches.  Possibly different critical power busses.16:32
pjm6in that way the app always runs16:32
klindgrenit also makes it easy when we need to do maintenance/patching.  People always wanted to know if their vm was on a server being taken down for maintenance, and then the next question would be whether they could choose when the maintenance was going to happen (before az's)16:33
pjm6yes16:34
pjm6that case scenario is good16:34
pjm6but unfortunately mine doesn't work in that way :S16:34
pjm6thanks klindgren for all the help :)16:34
jlkI agree with klindgren. AZs are for expressing failure domains to consumers16:35
jlkgenerally with an agreement to never impact multiple AZs at the same time16:35
klindgrenyep.  But like I said - stick around, someone else might chime in who is running a setup similar to yours.16:35
pjm6yes, will do that :D thanks16:37
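The AZ-spreading pattern klindgren and jlk describe can be illustrated with the openstack CLI; the AZ names (az1..az3) and flavor/image names here are assumptions for the sake of the example:

```console
# Illustrative: spread an app's three web servers across failure domains
# so maintenance or a host failure in one AZ doesn't take the app down.
$ openstack server create --availability-zone az1 --flavor m1.small --image app-image web1
$ openstack server create --availability-zone az2 --flavor m1.small --image app-image web2
$ openstack server create --availability-zone az3 --flavor m1.small --image app-image web3
```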
*** Apoorva has joined #openstack-operators16:41
*** saneax is now known as saneax-_-|AFK16:44
*** pcaruana has joined #openstack-operators16:51
*** marst has quit IRC16:52
*** marst_ has joined #openstack-operators16:52
*** racedo has quit IRC16:52
*** derekh has quit IRC16:54
*** VW has quit IRC17:07
*** VW has joined #openstack-operators17:08
*** admin0 has joined #openstack-operators17:12
*** VW has quit IRC17:12
*** VW has joined #openstack-operators17:17
*** jsech has quit IRC17:24
*** admin0 has joined #openstack-operators17:26
*** electrofelix has quit IRC17:37
*** VW has quit IRC17:38
*** VW has joined #openstack-operators17:39
*** VW has quit IRC17:43
*** paramite has quit IRC17:45
*** VW has joined #openstack-operators17:57
*** MrDan has quit IRC17:57
*** jsech has joined #openstack-operators18:03
*** jsech has quit IRC18:09
*** Jack_ has joined #openstack-operators18:12
*** VW has quit IRC18:12
Jack_I am trying to understand how the public, overlay, and tenant networks map to the neutron node in both L3 HA and DVR scenarios, for mid-scale or large-scale deployments.18:12
*** VW has joined #openstack-operators18:16
*** VW has quit IRC18:16
*** VW has joined #openstack-operators18:17
*** jsech has joined #openstack-operators18:18
*** Hosam has joined #openstack-operators18:20
*** VW has quit IRC18:22
*** Jack_ has quit IRC18:25
*** _ducttape_ has joined #openstack-operators18:28
*** ducttape_ has quit IRC18:32
*** jsech has quit IRC18:38
*** christx2 has joined #openstack-operators18:40
*** admin0 has quit IRC18:55
*** MrDanDan has joined #openstack-operators18:56
*** vinsh has quit IRC19:00
*** _ducttape_ has quit IRC19:05
*** markvoelker has joined #openstack-operators19:08
*** keekz has quit IRC19:10
*** keekz has joined #openstack-operators19:12
*** timburke has quit IRC19:14
*** timburke has joined #openstack-operators19:22
*** ducttape_ has joined #openstack-operators19:27
*** admin0 has joined #openstack-operators19:43
*** pcaruana has quit IRC19:50
*** dbecker has quit IRC19:54
*** erhudy has joined #openstack-operators20:07
*** Apoorva has quit IRC20:12
*** saneax-_-|AFK is now known as saneax20:12
*** Apoorva has joined #openstack-operators20:16
serverascodeanyone rate limit glance with haproxy?20:17
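One common way to rate limit a service like glance behind haproxy is a stick-table on source IP; a minimal sketch (the port is glance-api's default 9292, but the thresholds, backend, and server names are assumptions):

```
# haproxy.cfg sketch -- limit each client IP to ~100 requests per 10s
frontend glance_api
    bind *:9292
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    http-request deny if { sc_http_req_rate(0) gt 100 }
    default_backend glance_api_backends

backend glance_api_backends
    server glance1 192.0.2.10:9292 check
```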
*** georgem1 has quit IRC20:19
*** rarcea has joined #openstack-operators20:26
*** VW has joined #openstack-operators20:33
*** rarcea has quit IRC20:40
*** saneax is now known as saneax-_-|AFK20:43
*** saneax-_-|AFK is now known as saneax20:50
*** christx2 has quit IRC20:54
*** markvoelker_ has joined #openstack-operators20:57
*** markvoelker has quit IRC21:00
*** maticue has quit IRC21:01
*** VW_ has joined #openstack-operators21:02
*** VW_ has quit IRC21:02
*** VW_ has joined #openstack-operators21:03
*** VW has quit IRC21:05
*** r-daneel has joined #openstack-operators21:11
*** mriedem has quit IRC21:17
*** mriedem has joined #openstack-operators21:19
*** marst_ has quit IRC21:24
*** rarcea has joined #openstack-operators21:33
*** VW_ has quit IRC21:37
*** pilgrimstack has quit IRC21:41
*** Apoorva_ has joined #openstack-operators21:47
*** Apoorva_ has quit IRC21:48
*** Apoorva has quit IRC21:48
*** Apoorva has joined #openstack-operators21:49
*** marst_ has joined #openstack-operators21:52
*** ducttape_ has quit IRC22:01
*** mriedem is now known as mriedem_away22:05
*** admin0 has quit IRC22:12
*** VW has joined #openstack-operators22:21
*** VW has quit IRC22:21
*** VW_ has joined #openstack-operators22:26
*** VW_ has quit IRC22:27
*** VW_ has joined #openstack-operators22:27
*** VW_ has quit IRC22:28
*** VW_ has joined #openstack-operators22:29
*** VW_ has quit IRC22:29
*** VW has joined #openstack-operators22:30
*** harlowja has quit IRC22:33
*** marst_ has quit IRC22:34
*** ducttape_ has joined #openstack-operators22:45
*** ducttape_ has quit IRC22:47
*** marst has joined #openstack-operators22:47
*** ducttape_ has joined #openstack-operators22:47
*** harlowja has joined #openstack-operators22:48
*** ant1her0 has joined #openstack-operators22:51
ant1her0Hi OpenStack operators, I'm doing user research for the OpenStack UX Team. We're specifically interested in hearing about needs, breakdowns, and issues you have while operating OS clouds. Please PM me if you're willing to provide some info about your experience.22:55
*** VW has quit IRC23:01
*** VW has joined #openstack-operators23:02
*** VW has quit IRC23:02
*** VW has joined #openstack-operators23:03
jlkant1her0: I've forwarded that on to my operations teams. Hopefully one of them will reach out.23:04
*** VW has quit IRC23:04
*** VW has joined #openstack-operators23:05
*** VW has quit IRC23:06
*** VW has joined #openstack-operators23:07
ant1her0jlk: Thanks so much!23:08
*** rarcea has quit IRC23:10
*** armax has quit IRC23:13
*** ducttape_ has quit IRC23:16
*** erhudy has quit IRC23:21
*** fragatina has joined #openstack-operators23:24
jlkant1her0: do you have an email address I can forward along?23:27
*** ducttape_ has joined #openstack-operators23:27
ant1her0jlk: travjones0@gmail.com23:30
ant1her0jlk: Thanks so much for helping out!23:30
*** ducttape_ has quit IRC23:38
*** VW has quit IRC23:41
*** MrDanDan has quit IRC23:43
*** ducttape_ has joined #openstack-operators23:51
mnaserjust throwing out a warning about this - https://bugs.launchpad.net/oslo.messaging/+bug/160976623:59
openstackLaunchpad bug 1609766 in oslo.messaging "oslo.messaging does not redeclare exchange if it is missing" [Undecided,Fix released] - Assigned to Kirill Bespalov (k-besplv)23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!