Tuesday, 2015-05-19

00:04 *** javeriak_ has quit IRC
00:12 *** britthouser has quit IRC
00:13 *** britthouser has joined #openstack-ansible
00:21 *** sdake has quit IRC
00:22 *** sdake has joined #openstack-ansible
00:37 *** britthouser has quit IRC
00:38 *** sdake has quit IRC
00:43 *** appprod0 has joined #openstack-ansible
00:47 *** javeriak has joined #openstack-ansible
00:48 *** sacharya has joined #openstack-ansible
00:51 *** daneyon has quit IRC
01:08 *** sigmavirus24 is now known as sigmavirus24_awa
01:37 *** sdake has joined #openstack-ansible
01:44 *** sdake has quit IRC
02:37 *** daneyon has joined #openstack-ansible
02:43 *** appprod0 has quit IRC
02:45 *** javeriak has quit IRC
03:36 *** daneyon has quit IRC
03:36 *** daneyon has joined #openstack-ansible
04:04 *** davidjc has joined #openstack-ansible
04:12 *** davidjc has quit IRC
04:32 *** JRobinson__ is now known as JRobinson__afk
04:57 *** JRobinson__afk is now known as JRobinson__
05:21 *** davidjc has joined #openstack-ansible
05:24 *** davidjc has quit IRC
05:24 *** davidjc has joined #openstack-ansible
05:32 *** markvoelker has joined #openstack-ansible
05:43 *** fangfenghua has quit IRC
05:50 *** JRobinson__ has quit IRC
06:04 *** davidjc has quit IRC
06:11 *** javeriak has joined #openstack-ansible
06:31 *** sacharya has quit IRC
06:33 *** markvoelker has quit IRC
07:15 *** fangfenghua has joined #openstack-ansible
07:33 *** markvoelker has joined #openstack-ansible
07:38 *** markvoelker has quit IRC
08:14 <mattt> svg: do we need to update cinder.conf so cinder can write into rbd?
08:18 <svg> not sure what you mean, cinder.conf gets configured via container_vars.cinder_backends as defined in the user config
08:19 <svg> doing rbd things is just another backend, so the cinder.conf template didn't need to be patched
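
(For readers following along: a rough sketch of what such a container_vars.cinder_backends entry can look like in the user config. The host name, IP and backend label below are illustrative, and the exact keys accepted depend on the cinder role in the branch under discussion.)

    storage_hosts:
      infra1:                                   # illustrative host name
        ip: 172.29.236.100                      # illustrative IP
        container_vars:
          cinder_backends:
            limit_container_types: cinder_volume
            rbd_volumes:                        # illustrative backend label
              volume_driver: cinder.volume.drivers.rbd.RBDDriver
              volume_backend_name: rbd_volumes
              rbd_pool: volumes
              rbd_ceph_conf: /etc/ceph/ceph.conf
              rbd_user: cinder
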
08:23 <mattt> svg: derp, i was looking at a cinder.conf on a cinder-api node which remained unconfigured for rbd
08:24 <mattt> the one on the cinder-volume node is fine
08:26 <mattt> wonder if cinder-volume doesn't get restarted correctly when cinder.conf is updated, i'll need to dig into that separately
08:34 *** markvoelker has joined #openstack-ansible
08:38 <svg> mattt: that is possible, I filed a similar (unrelated) bug recently: https://bugs.launchpad.net/openstack-ansible/+bug/1449958
08:38 <openstack> Launchpad bug 1449958 in openstack-ansible trunk "galera mysql instances are not restarted after an update to the config" [Undecided,Confirmed]
08:39 *** markvoelker has quit IRC
08:39 <mattt> svg: yeah, in this instance cinder volume creates were going into error, the moment i restarted cinder-volume they started working
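
(A minimal sketch of the notify/handler pattern that would address the restart problem mattt describes; the task and handler names here are hypothetical, not the ones in the os_cinder role.)

    # tasks: re-render the config and notify a handler when it changes
    - name: Drop cinder.conf
      template:
        src: cinder.conf.j2
        dest: /etc/cinder/cinder.conf
      notify: Restart cinder services

    # handlers: only run if the template task reported "changed"
    - name: Restart cinder services
      service:
        name: "{{ item }}"
        state: restarted
      with_items:
        - cinder-volume
        - cinder-api
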
08:40 <mattt> svg: with your patch, do all nova creates end up on rbd volumes? if so, can we make that part optional?
08:42 <svg> mattt: that depends on the backends you define
08:42 <svg> you can configure multiple "backends" in cinder
08:42 <mattt> svg: so i may want cinder to use RBD but that doesn't mean all nova boots should end up on RBD
08:43 <mattt> which, if i'm understanding this correctly, is how this will end up
08:44 <svg> multiple cinder backends show up in horizon as volume types (admin, volumes, second tab)
08:45 <svg> one of them is marked as default, don't recall how that one is chosen though
08:45 <svg> I presume the first
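
(Roughly how backends map onto the volume types svg mentions; 'rbd_volumes' is an illustrative name, and the default svg is unsure about is normally the default_volume_type option in cinder.conf's [DEFAULT] section.)

    # create a volume type and tie it to a backend via its volume_backend_name
    cinder type-create rbd_volumes
    cinder type-key rbd_volumes set volume_backend_name=rbd_volumes
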
08:45 <mattt> svg: have a look at https://review.openstack.org/#/c/181957/11/playbooks/roles/os_nova/templates/nova.conf.j2
08:45 <mattt> does that not mean that, irrespective of cinder, all nova instances will end up on rbd?
08:46 <svg> images pool != volumes pool
08:46 <mattt> but they're both rbd pools, is the issue
08:46 <svg> though I agree this is confusing and I don't grasp it completely myself
08:47 <mattt> svg: http://ceph.com/docs/master/rbd/rbd-openstack/ is helpful
08:47 <mattt> so i see there being a few options
08:47 <mattt> 1. rbd-backed cinder volumes (which are implemented nicely with your patch)
08:47 <svg> e.g. when you create a vm, you have a choice of multiple "boot sources"
08:48 <mattt> 2. configuring nova to allow it to boot from an rbd-backed cinder volume
08:48 <mattt> 3. configuring nova to boot instances directly into rbd
08:49 <mattt> in your patch #3 seems to happen if #1 is enabled
08:49 <mattt> i think they can be mutually exclusive
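
(Roughly what options 1 and 3 look like in the rendered config files, following the ceph.com rbd-openstack guide linked above; pool names and the secret UUID are placeholders.)

    # option 1 -- cinder.conf: an rbd-backed cinder backend
    [rbd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret uuid>

    # option 3 -- nova.conf: boot instance root disks directly into rbd
    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
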
08:51 <svg> hm, I didn't really look at the use case of combining rbd and other cinder backends
08:51 <svg> given rbd snapshots/in-pool copies are instantaneous, those are a lot more performant
08:52 <svg> have a look at https://dl.dropboxusercontent.com/u/13986042/20150519104921.png and check the select source drop down
08:55 <mattt> svg: is it possible to boot nova instances using the cli onto local storage?
08:59 <svg> honestly, no clue
08:59 <svg> I'm very new to openstack
08:59 <svg> it's already a big win that we managed to set up something so quickly thanks to this project :)
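
(On mattt's CLI question, which goes unanswered here: the client does distinguish the two boot modes below, but where a boot-from-image root disk actually lands depends on the compute host's images_type setting, which is the point the two work out later in the discussion. UUIDs and names are placeholders.)

    # boot from an image: root disk goes wherever nova's images_type points (local disk by default)
    nova boot --flavor m1.small --image <image-uuid> --nic net-id=<net-uuid> test-ephemeral

    # boot from a new cinder volume created from that image: root disk lives on the cinder backend
    nova boot --flavor m1.small --nic net-id=<net-uuid> \
      --block-device source=image,id=<image-uuid>,dest=volume,size=20,bootindex=0 test-volume-backed
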
09:00 <mattt> are you guys running openstack in prod now?
09:01 <svg> nope, not yet
09:01 <svg> we hope to do that asap, but we're still learning a lot
09:01 <svg> atm, we have one osad setup (juno) where we do some tests
09:01 <svg> and it's not pretty so far :(
09:02 <svg> pushed a deploy of 100 vm's at once
09:02 <mattt> what happened?
09:02 <svg> which ran without (obvious) errors
09:02 <svg> but half of them are not reachable via their floating ip
09:03 <mattt> we should ping Apsu about that later today, to see if he's seen that happening before and what we can do to address it
09:05 <svg> any help will be appreciated; I'm not even sure how to start debugging that
09:05 <svg> but I didn't look closely yet, it's one of my coworkers that works on that
09:06 <svg> it's not easy to track things in the logging
09:06 <svg> early June we expect a visit from a racker for some help/consultancy/validating the stuff we do etc
09:08 <mattt> svg: yeah, i'm not overly familiar w/ neutron, Apsu knows it very well though and should be able to advise you where to look
09:27 <svg> it might also be a misconfigured network thing on our side..
09:35 *** markvoelker has joined #openstack-ansible
09:40 *** markvoelker has quit IRC
09:48 *** javeriak has quit IRC
10:02 <svg> mattt: right now working on other things, but I keep track of your latest comments
10:04 <svg> the one about nova running directly in rbd is part of something I don't understand very well in openstack
10:16 <mattt> svg: i'm not overly familiar w/ ceph so i'm probably not explaining it very well
10:16 <svg> this sounded more like a nova thing here
10:16 <svg> nova and cinder
10:17 <svg> I guess as cinder supports different types of backend, it also allows things that might make less sense with ceph (avoid copying images to volumes, and just do a ceph copy etc.)
10:18 <svg> I'm not sure where the nova pool kicks in here
10:18 <svg> one has an image (glance), volume (cinder) and vms (nova) pool IIRC
10:19 <mattt> svg: so if you take ceph out of the picture, you can do the following
10:19 <svg> I don't understand what the nova pool does, actually
10:19 <mattt> svg: boot an instance onto local storage, boot an instance on local storage and then attach a cinder volume, or boot an instance w/ its root disk on a cinder volume
10:21 <mattt> svg: with ceph, you can then configure cinder to use an rbd backend, which will allow you to boot instances w/ local storage and a cinder disk or a root cinder volume attached that's backed by rbd
10:21 <mattt> svg: you also have the ability to replace local storage so that any instance booted has its root disk in ceph
10:21 <mattt> svg: with your patch, it is now assumed that if you have configured cinder to use rbd then you want _all_ booted instances to have their root disk in rbd (not as cinder volumes)
10:22 <svg> "or a root cinder volume attached that's backed by rbd" => on what ceph pool would that be then?
10:22 <svg> I guess the cinder/volumes pool?
10:23 <mattt> svg: yeah, whatever you specified as rbd_pool in your openstack_user_variables.yml file
10:23 <mattt> (in your default example it is 'volumes_hdd')
10:23 <svg> ok
10:23 <svg> so in what case does the nova/vms pool get used?
10:24 <mattt> svg: as long as you have cinder configured w/ an rbd backend then nova will use the nova pool, which is wrong because that doesn't use cinder
10:24 <svg> (the one I configure for nova)
10:24 <mattt> so you're making one feature rely on another when they're not related
10:25 <svg> I'm afraid I don't follow you here
10:25 <mattt> ok, so if you look at https://review.openstack.org/#/c/181957/12/playbooks/roles/os_nova/templates/nova.conf.j2
10:26 <mattt> you have {% if cinder_backend_rbd_inuse|bool %} configure stuff {% endif %}
10:26 <mattt> so what you're saying is if cinder is configured to use rbd, then force nova to create all instances in rbd
10:26 <svg> ok
10:26 <mattt> but images_type = rbd is not using cinder, so having it rely on cinder_backend_rbd_inuse|bool isn't right
10:27 <svg> ok
10:27 <mattt> (and i could be wrong here too, anyone is welcome to step in)
10:27 <mattt> i think there is a disadvantage to having all instances go into rbd
10:27 <svg> Which is?
10:28 <mattt> well, one is that it's probably more expensive, due to replicas
10:28 <svg> (this configuration was made based on http://ceph.com/docs/master/rbd/rbd-openstack/ btw)
10:29 <svg> if you forget the forced cinder_backend_rbd_inuse for a while, I do configure everything on ceph/rbd here, as per those docs
10:29 <mattt> svg: yeah, i think that aside the configuration is fine
10:29 <mattt> i just think the conditional should be based on some user variable that isn't tied to cinder
10:29 <mattt> i have no issue w/ that configuration, i just think it should be optional and not dependent on cinder
10:30 <mattt> svg: i think there are probably some benefits to using cinder to create rbd volumes which you attach to instances over that nova configuration
10:30 <mattt> one obvious one is that you can probably delete the instance while still having the volume in cinder
10:31 <svg> I still don't understand what that nova pool is used for; when I deploy an instance, I get to choose to boot from a volume or from an image
10:32 <mattt> svg: so if you boot from image, the root disk of the instance will be stored in the nova pool
10:32 <mattt> svg: rather than existing on the compute host's local storage
10:33 <svg> but I always choose to boot from an image, as that is the generic case, and booting from a volume is more of a specific one
10:33 <mattt> svg: correct, but you are assuming that the deployer wants all instances which aren't booted with a cinder volume to be stored in ceph
10:33 <mattt> and i think that assumption is wrong
10:33 <mattt> they may have a small ceph cluster which they want to expose to cinder/glance only and not have every single instance live in ceph
10:34 <mattt> that way if a customer wants a ceph instance they boot from a cinder volume, otherwise they boot an instance and it will land on local storage
10:34 <mattt> i don't see any harm in being able to allow people to deploy every instance onto ceph storage, but i think that should be optional and only enabled with a user variable
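
(A sketch of the decoupling mattt is asking for: gate the libvirt/rbd block in nova.conf.j2 on its own user variable instead of on cinder_backend_rbd_inuse. The variable name nova_libvirt_images_rbd is hypothetical, not something from the patch under review.)

    {# nova.conf.j2 -- only enable rbd-backed instance disks when explicitly requested #}
    {% if nova_libvirt_images_rbd | bool %}
    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    {% endif %}
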
10:36 *** markvoelker has joined #openstack-ansible
10:40 *** markvoelker has quit IRC
10:40 <svg> ok
10:41 <svg> I do see harm in using local storage though, as that might be (very) limited
10:44 <svg> ok, ok, I checked what we have in the different pools, and the volumes pool is actually empty
10:44 <svg> we have a couple of images in the image / glance pool
10:45 <svg> and all vm's have their disks in the vms/nova pool
10:46 <svg> so the nova storage is actually "boot disks"?
10:48 <mattt> yep
10:48 <mattt> svg: i agree that local storage is limited (you can't live migrate, etc.)
10:49 <mattt> but for a lot of non-critical workloads it's fine, and cheap, since most hypervisors have a ton of local storage in them
10:49 <svg> one thing I seem to have gotten wrong: I thought ceph can do instantaneous copies (diff copies, doesn't recreate the data, but reuses the data from the image, sort of COW), but that this only worked within the same pool
10:50 <svg> but it seems that works across pools, and this is what happens when deploying from an image to the nova backend
10:50 <svg> except when choosing the option "boot from image (creates a new volume)"
10:51 <mattt> yeah, i thought it did that when you booted an instance from glance in rbd to a cinder volume
10:53 <mattt> actually, when you boot a nova instance onto rbd that should work also
10:54 <svg> wow
10:54 <svg> *confused*
10:54 <svg> "booted an instance from glance in rbd to a cinder volume"
10:54 <svg> or does this mean: first create a volume from an image, then boot from that volume?
10:57 <mattt> you can boot an instance from a glance image to a cinder volume and boot from that volume
10:57 <svg> ow, ok, just tested, that is what happens with the "boot from image (creates a new volume)" option
10:58 <mattt> yeah, so doing that probably offers some advantages over the other method
10:58 <mattt> i'm not overly familiar w/ cinder but i'm guessing you can then delete the instance and retain the volume, you can do cinder backups on the volume (snapshots etc.)
10:58 <mattt> and whatever else cinder offers :p
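
(A quick way to check the COW behaviour being discussed: a cloned disk shows its glance image as a parent in rbd info. Pool and object names below follow the usual naming conventions and are placeholders.)

    # nova-managed root disk (images_type = rbd):
    rbd -p vms info <instance-uuid>_disk        # a "parent: images/<image-uuid>@snap" line means it is a COW clone
    # cinder volume created from an image:
    rbd -p volumes info volume-<volume-uuid>    # same check for the "boot from image (creates a new volume)" case
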
11:01 <svg> yes, given cinder is the only component that allows for multiple backends
11:01 <svg> we e.g. have separate volumes pools with hdd and ssd
11:02 <svg> so e.g. a db instance would get a separate data volume with its database files on ssd
11:03 <svg> so nova can only have one type of storage backend, and the choice here should be local storage or ceph
11:06 <mattt> svg: yeah, exactly
11:06 <mattt> svg: which again i think is fine, if that's what you want, but it should not be dependent on whether cinder is configured w/ an rbd backend because they're not related
11:06 <svg> sure, I now understand
11:09 <mattt> yay!
11:17 <svg> is the logging/kibana thing useful?
11:17 <svg> i just tried deploying an instance, where it copies and creates a new volume from an image, and it failed
11:18 <svg> looking at kibana, no error message is found
11:25 <mattt> i don't use kibana personally
11:25 <mattt> let me finish something up here and then i'll try it
11:25 <mattt> (the boot from volume)
11:36 *** markvoelker has joined #openstack-ansible
11:41 *** markvoelker has quit IRC
11:53 *** fangfenghua has quit IRC
12:37 *** markvoelker has joined #openstack-ansible
12:43 *** markvoelker has quit IRC
12:53 *** markvoelker has joined #openstack-ansible
13:26 *** Mudpuppy has joined #openstack-ansible
13:40 *** Mudpuppy has quit IRC
13:59 *** markvoelker has quit IRC
14:22 *** yaya has joined #openstack-ansible
14:42 *** sdake has joined #openstack-ansible
14:43 *** davidjc has joined #openstack-ansible
14:43 *** sdake_ has joined #openstack-ansible
14:47 <svg> May 19 16:46:22 dc2-rk4-ch1-bl1 dnsmasq-dhcp[4866]: not giving name dc2-rk4-ch1-bl1_cinder_api_container-cddfd56e to the DHCP lease of 10.0.3.165 because the name exists in /etc/hosts with address 10.16.8.24
14:47 <svg> does this ring a bell to anyone?
14:47 *** sdake has quit IRC
14:48 <svg> (getting *lots* of those)
14:51 *** jwagner_away is now known as jwagner
14:51 *** davidjc has quit IRC
14:58 <svg> prolly not related, but we now suddenly have lots of issues with different containers not being reachable
14:58 <svg> and this varies
14:59 <svg> when I ping from a metal controller host to all of its containers, say around 10 containers, I get max 2-3 that reply; all the other ip's yield a "ping: sendmsg: Invalid argument"
15:00 <svg> and this starts once we deploy +/- 50 vm's per compute host (second time we notice this)
15:06 *** metral is now known as metral_zzz
15:08 <mattt> svg: so wait, you are losing connectivity to _containers_ when you boot a large number of VMs?
15:09 <svg> at least it looks like it
15:10 <mattt> svg: odd, do those instances share the same network as your infrastructure?
15:10 <svg> define infrastructure?
15:11 <svg> they have a standard network setup as per rpc_userconfig
15:11 <mattt> svg: are instances being booted in the same network as is defined in cidr_networks for containers?
15:12 <svg> (dunno if this is related, but the kernel log shows lots of "[537441.256976] net_ratelimit: 6 callbacks suppressed")
15:12 <svg> ah, the instances, no, we use vxlans so they are separated
15:13 <mattt> that is super odd
15:13 <svg> mattt: I need to run to catch my train, will be back in 20'
15:14 <mattt> cool
15:28 *** daneyon has quit IRC
15:30 *** appprod0 has joined #openstack-ansible
15:30 <svg> back
15:31 <svg> on the train, so not a stable connection
15:32 <mattt> welcome back
15:32 <svg> so, any funky thoughts?
15:34 <mattt> svg: just poking on my dev env
15:34 <mattt> svg: i also have a ton of those 'not giving name' dhcp errors on my controllers
15:35 <mattt> so that should be unrelated
15:36 <svg> i checked and we had those errors pretty much all the time, so I don't expect this to be related
15:36 <mattt> would probably be prudent for us to update dnsmasq to not check /etc/hosts so we can prevent those log messages
15:36 <mattt> but yeah, unrelated
15:37 <mattt> so you spin up a ton of VMs, and you start losing access to containers
15:38 <mattt> Apsu: you seen this?
15:38 <Apsu> mattt: No
15:38 <Apsu> That sounds fun
15:39 *** saguilar has joined #openstack-ansible
15:39 <mattt> svg: the containers aren't crashing, right?
15:39 <svg> not sure if that is the trigger, but if the error is there before, at least it makes it more obvious
15:39 <svg> containers are not crashing, no, I can attach to them
15:40 <Apsu> Can't pay attention this second, will read back in a minute
15:42 <mattt> svg: do you have nova-compute running on the same controller node that you have containers on?
15:43 <svg> no
15:45 *** appprod0 has quit IRC
15:52 <mattt> svg: i'm not too sure unfortunately :(
15:54 <svg> on the compute hosts I have several syslog containers that seem to have crashed
15:59 <mattt> i wouldn't imagine that'd cause the errors you're seeing
15:59 *** sacharya has joined #openstack-ansible
16:00 *** markvoelker has joined #openstack-ansible
16:01 *** saguilar has quit IRC
16:04 <Apsu> svg: You've got overlapping CIDRs, I believe.
16:05 <mattt> Apsu: that would have been my guess
16:05 <mattt> afk for a bit, back later
16:05 <Apsu> Also, mattt, svg: "net_ratelimit: x callbacks suppressed" is when the kernel limits repeat syslog messages so they don't overwhelm the logger
16:05 <Apsu> Doesn't actually give you any clue to the message content
16:06 <svg> Apsu: overlapping on metal or containers or both?
16:07 <Apsu> svg: Both probably. You've got a container named in /etc/hosts with a particular IP, yet the IP dnsmasq wants to give it is different
16:07 *** daneyon has joined #openstack-ansible
16:07 *** markvoelker has quit IRC
16:08 <svg> but that is about the management interface vs the container backend network?
16:09 <Apsu> Well, let's find out
16:11 <Apsu> Can you show me your host management interface CIDR (10. I assume, so shouldn't be an issue to share), as well as the user_config.yml CIDRs?
16:13 <svg> Apsu: I'm on a train right now with a flaky connection, I'll dive into it when @home
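
(Roughly the information Apsu is asking for, gathered on a target host; the config path is the juno-branch layout and the interface names are the osad defaults.)

    ip -4 addr show br-mgmt        # host management interface / CIDR
    ip -4 addr show lxcbr0         # lxc-net's dhcp network (10.0.3.0/24 by default)
    grep -A8 '^cidr_networks' /etc/rpc_deploy/rpc_user_config.yml   # container/tunnel/storage CIDRs
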
16:17 *** yaya has quit IRC
16:19 *** saguilar has joined #openstack-ansible
16:21 *** daneyon has quit IRC
16:33 <Apsu> svg: Sounds good
16:37 *** yaya has joined #openstack-ansible
16:51 *** sigmavirus24_awa is now known as sigmavirus24
16:55 *** sdake_ is now known as sdake
17:09 <svg> Apsu: so from one of my "controllers", br-mgmt has inet addr:10.16.8.50  Bcast:10.16.9.255  Mask:255.255.254.0
17:09 <svg> the user config for networking is http://sprunge.us/DRjO
17:17 *** javeriak has joined #openstack-ansible
17:21 <Apsu> svg: Right, but what about your host's management range?
17:22 <svg> the metal hosts, you mean?
17:22 <Apsu> yeah
17:22 <Apsu> I mean
17:22 <Apsu> svg | May 19 16:46:22 dc2-rk4-ch1-bl1 dnsmasq-dhcp[4866]: not giving name dc2-rk4-ch1-bl1_cinder_api_container-cddfd56e to the DHCP lease of 10.0.3.165 because the name exists in /etc/hosts with address 10.16.8.24
17:23 <Apsu> This is the error you got
17:23 <Apsu> So, something has 10.0.3.165 in its range
17:23 <svg> I just checked, the management iface on metal is all within the excluded range - 10.16.8.0,10.16.8.127
17:23 <svg> hm, that is not how I understand that error
17:24 <svg> the host has an entry in /etc/hosts for 10.16.8.24 dc2-rk4-ch1-bl1_cinder_api_container-cddfd56e (which osad configures like that) and because of that it refuses to hand out the lease
17:24 <Apsu> Well, it's failing to tag the lease with the name
17:24 <Apsu> Because it's in the hosts file
17:25 <svg> mattt confirmed to me he has similar warnings on his dev setup, and we also had those messages in the past day (before we started having issues with container connectivity)
17:25 <svg> so pretty sure that is unrelated
17:26 <Apsu> Ok
17:26 <Apsu> So you're losing ssh access, regardless of the name association
17:26 <Apsu> Does that still occur if you ssh to the IP of the container?
17:28 <svg> yup
17:28 <svg> (openstack-ansible targets ip's, as container names are not in dns)
17:29 *** willemgf has joined #openstack-ansible
17:29 <Apsu> So this hosts issue is just breaking the dns resolution, which makes sense of course
17:29 <Apsu> Ok
17:30 <Apsu> We'll have to dig into what's up
17:32 *** saguilar has quit IRC
17:32  * svg welcomes coworker willemgf
17:40 *** sdake has quit IRC
17:42 *** sdake has joined #openstack-ansible
17:44 *** jwagner is now known as jwagner_lunch
17:50 *** daneyon has joined #openstack-ansible
17:51 *** sdake has quit IRC
17:55 *** sigmavirus24 is now known as sigmavirus24_awa
17:56 *** sdake has joined #openstack-ansible
17:56 *** javeriak has quit IRC
17:59 *** daneyon has quit IRC
18:00 *** daneyon has joined #openstack-ansible
18:09 *** dkalleg has joined #openstack-ansible
18:10 *** saguilar has joined #openstack-ansible
18:10 *** dkalleg has quit IRC
18:11 *** dkalleg has joined #openstack-ansible
18:14 *** britthouser has joined #openstack-ansible
18:14 *** daneyon has quit IRC
18:16 *** britthou_ has joined #openstack-ansible
18:16 *** daneyon has joined #openstack-ansible
18:16 *** davidjc has joined #openstack-ansible
18:17 *** yaya has quit IRC
18:18 *** daneyon has quit IRC
18:18 *** sdake has quit IRC
18:18 *** davidjc has quit IRC
18:19 *** britthouser has quit IRC
18:19 *** davidjbc has joined #openstack-ansible
18:39 *** jwagner_lunch is now known as jwagner
18:40 *** javeriak has joined #openstack-ansible
18:41 *** willemgf has quit IRC
18:49 <svg> mattt, apsu: thanks for all your help so far, this is really confusing us, and we will first dive into our networking setup to find out if there might be problems on our side. thx!
18:50 <Apsu> np!
19:01 *** britthou_ has quit IRC
19:03 *** davidjbc has quit IRC
19:05 *** britthouser has joined #openstack-ansible
19:06 *** openstackgerrit has quit IRC
19:06 *** openstackgerrit has joined #openstack-ansible
19:07 *** logan2 has quit IRC
19:08 *** logan2 has joined #openstack-ansible
19:26 *** davidjc has joined #openstack-ansible
19:31 *** britthouser has quit IRC
19:31 *** dkalleg has quit IRC
19:41 *** davidjc has quit IRC
19:45 *** daneyon has joined #openstack-ansible
19:49 *** jwagner is now known as jwagner_away
19:50 *** javeriak has quit IRC
19:52 <mattt> Apsu: those errors are because we write out our own /etc/hosts file which doesn't correspond w/ the IPs that dnsmasq dishes out
19:53 <Apsu> mattt: Sounds like it, yeah. Should file a bug so we can sync them up
19:53 <Apsu> Probably by generating a dnsmasq lease file that matches
19:53 <mattt> Apsu: not sure it's the right solution, but i passed --no-hosts to dnsmasq and that cleared them up
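
(One way to apply mattt's --no-hosts workaround persistently on an Ubuntu host, assuming the stock lxc-net packaging; the extra-config hook below comes from Ubuntu's lxc package, not from anything osad sets up.)

    # /etc/default/lxc-net -- point lxc-net's dnsmasq at an extra config file
    LXC_DHCP_CONFILE="/etc/lxc/dnsmasq.conf"

    # /etc/lxc/dnsmasq.conf
    no-hosts    # stop dnsmasq from reading /etc/hosts, silencing the "not giving name" warnings
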
19:54 *** javeriak has joined #openstack-ansible
19:58 <mattt> Apsu: i'll create a bug for it so we can circle back at some point
19:58 <mattt> it does beg the question whether we could do away with lxc's stock eth0 and create eth0 w/ our container cidr instead
20:00 <Apsu> Yeah
20:03 *** javeriak has quit IRC
20:06 *** sdake has joined #openstack-ansible
20:09 <svg> mattt: is that about the 'container management network'? I never understood what that's for, given there's already an openstack management network
20:11 <mattt> svg: openstack management network?
20:12 <svg>  24   # Management (same range as br-mgmt on the target hosts)
20:12 <svg>  25   container: 10.16.8.0/23
20:12 <svg> as defined in rpc_user_config.yml
20:13 <svg> ^^ openstack management network; by container management network I mean the dnsmasq/dhcp network (eth0 on the containers)
20:13 *** jwagner_away is now known as jwagner
20:15 <mattt> svg: i can't see why it's not possible to do away w/ eth0 in the container
20:15 <svg> my thought exactly
20:16 <mattt> svg: we'd have to ensure dnsmasq dishes out the same IPs every time a container boots though
20:16 <svg> (though afaicr, there was no way to de-configure that?)
20:16 <svg> euhm, I mean, why do we need dhcp?
20:17 <mattt> well, i think you'd still need to use dhcp
20:18 <mattt> because you can't configure the container's eth0 without networking
20:18 <svg> Why? is that lxc-specific?
20:19 <mattt> well, you can run arbitrary commands in an lxc container, not sure if our lxc ansible module permits that though
20:20 <svg> I don't follow
20:21 <svg> ok, sorry, now it struck me
20:22 <svg> so that's for the initial config
20:22 <mattt> svg: looks like the initial container stuff doesn't happen over the network tho, which makes sense
20:22 <mattt> so yeah, i'm not entirely sure dhcp is necessary
20:23 <svg> also, given the ansible_ssh_host var is set to the mgmt ip
20:24 <svg> I was told (by cloudnull) that the dhcp network is essentially to allow a container to connect outside the cluster (to the internet..)
20:24 <mattt> yeah, i was just looking at that
20:24 <mattt> looks like the 10.0.3.0/24 lxc network is hardcoded in the lxc-net stuff
20:24 *** daneyon has quit IRC
20:24 <mattt> so you'd be hard pressed to change that
20:25 <svg> but, given that routes through dnsmasq on the metal host, those get external access over the mgmt network...
20:25 <svg> guess it's one of the quirks :)
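
(The "hardcoded" 10.0.3.0/24 mattt mentions is the Ubuntu lxc-net default; on 14.04 it is set in /etc/default/lxc-net, roughly as below, so it is overridable there even where osad itself doesn't manage it.)

    USE_LXC_BRIDGE="true"
    LXC_BRIDGE="lxcbr0"
    LXC_ADDR="10.0.3.1"
    LXC_NETMASK="255.255.255.0"
    LXC_NETWORK="10.0.3.0/24"
    LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
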
20:27 <mattt> svg: i created https://bugs.launchpad.net/openstack-ansible/+bug/1456792 so we can look into it further
20:27 <openstack> Launchpad bug 1456792 in openstack-ansible "Excessive dnsmasq-dhcp log entries" [Undecided,New]
20:32  * svg subscribed
20:39 *** sdake has quit IRC
20:41 *** daneyon has joined #openstack-ansible
20:45 *** daneyon has quit IRC
20:52 *** sigmavirus24_awa is now known as sigmavirus24
20:53 *** daneyon has joined #openstack-ansible
20:54 *** daneyon has quit IRC
21:01 *** britthouser has joined #openstack-ansible
21:29 *** radek_ has joined #openstack-ansible
21:33 *** davidjc has joined #openstack-ansible
21:34 *** davidjc has quit IRC
21:42 *** javeriak has joined #openstack-ansible
21:45 *** britthouser has quit IRC
21:46 *** radek_ has quit IRC
21:49 *** britthouser has joined #openstack-ansible
22:09 *** britthouser has quit IRC
22:10 *** sacharya has quit IRC
22:32 *** sigmavirus24 is now known as sigmavirus24_awa
22:35 *** britthouser has joined #openstack-ansible
22:57 <javeriak> hey guys, general question: there's a neutron-agent service being installed that's not really configured for anything, what exactly is the purpose of it?
23:00 *** saguilar has quit IRC
23:10 <prometheanfire> neutron-lb-agent?
23:12 <javeriak> nope, just a 'neutron-agent', this is the juno branch code
23:13 *** britthouser has quit IRC
23:25 *** darrenc is now known as darren_afk
23:36 *** britthouser has joined #openstack-ansible
23:44 *** darren_afk is now known as darrenc
23:49 *** daneyon has joined #openstack-ansible
23:59 *** daneyon has quit IRC
23:59 *** daneyon has joined #openstack-ansible

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!