Tuesday, 2018-10-23

00:11 *** rcernin has joined #openstack-containers
00:12 *** rcernin_ has quit IRC
00:13 *** rcernin has quit IRC
00:14 *** rcernin has joined #openstack-containers
00:47 *** chhagarw has joined #openstack-containers
00:51 *** chhagarw has quit IRC
01:27 *** hongbin has joined #openstack-containers
01:36 *** hongbin has quit IRC
01:37 *** hongbin has joined #openstack-containers
01:41 *** hongbin_ has joined #openstack-containers
01:43 *** hongbin has quit IRC
01:54 *** canori01 has quit IRC
02:04 *** ricolin has joined #openstack-containers
02:05 *** hongbin has joined #openstack-containers
02:07 *** hongbin_ has quit IRC
02:57 *** dave-mccowan has quit IRC
03:01 *** munimeha1 has quit IRC
03:12 *** ramishra has joined #openstack-containers
03:15 *** lxkong has quit IRC
03:15 *** lxkong has joined #openstack-containers
03:52 *** hongbin has quit IRC
03:58 *** chhagarw has joined #openstack-containers
03:58 *** udesale has joined #openstack-containers
04:59 *** janki has joined #openstack-containers
05:42 *** ttsiouts has quit IRC
05:43 *** ttsiouts has joined #openstack-containers
05:47 *** ttsiouts has quit IRC
05:48 *** spsurya has joined #openstack-containers
06:40 *** ramishra_ has joined #openstack-containers
06:43 *** ramishra has quit IRC
06:56 *** pcaruana has joined #openstack-containers
07:06 *** rcernin has quit IRC
07:33 *** ramishra_ is now known as ramishra
07:39 *** mattgo has joined #openstack-containers
08:14 *** pvradu has joined #openstack-containers
08:15 <openstackgerrit> Spyros Trigazis proposed openstack/magnum master: k8s_fedora: Deploy tiller  https://review.openstack.org/612336
08:17 *** ttsiouts has joined #openstack-containers
09:38 *** salmankhan has joined #openstack-containers
09:51 *** udesale has quit IRC
09:52 *** udesale has joined #openstack-containers
10:00 *** pvradu has quit IRC
10:00 *** pvradu has joined #openstack-containers
10:05 *** pvradu_ has joined #openstack-containers
10:05 *** serlex has joined #openstack-containers
10:09 *** pvradu has quit IRC
10:22 *** ianychoi has quit IRC
10:25 *** ianychoi has joined #openstack-containers
11:00 *** flwang1 has joined #openstack-containers
11:00 <flwang1> strigazi: any chance you're around?
11:01 <strigazi> flwang1: yes, but I was leaving. Is it quick?
11:01 <flwang1> strigazi: yep
11:01 <strigazi> flwang1: tell me
11:01 <flwang1> should the worker node name be resolved in a pod?
11:02 <flwang1> for example, in a pod based on alpine, i run exec -it pod-name sh
11:02 <flwang1> then run 'nslookup <worker-node-name> <dns_nameserver>'
11:04 <strigazi> flwang1: if the node names are resolvable in the cloud, yes
11:04 <strigazi> coredns uses the vm dns
11:04 <strigazi> the vm dns should be able to resolve the node names
11:04 <strigazi> makes sense?
11:04 *** ttsiouts has quit IRC
11:05 <flwang1> so if it's using 8.8.8.8, then the worker node can't be resolved?
11:06 <strigazi> I guess not
11:07 <flwang1> ok, we can discuss when you're back
11:07 <strigazi> if you log in to the master, can you look up the worker?
11:07 <strigazi> by name, I mean
11:08 * strigazi will be back
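
(A minimal sketch of the check discussed above, assuming a running cluster; <pod-name>, <worker-node-name> and <dns_nameserver> are placeholders for the actual values:)

    # From inside an alpine-based pod, query the worker node name against a
    # specific nameserver (busybox nslookup accepts a server argument):
    kubectl exec -it <pod-name> -- sh -c 'nslookup <worker-node-name> <dns_nameserver>'

    # strigazi's cross-check: from the master node itself, using the VM's own DNS:
    nslookup <worker-node-name>
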
11:13 *** ricolin has quit IRC
11:14 *** udesale has quit IRC
11:31 *** mattgo has quit IRC
11:37 *** ricolin has joined #openstack-containers
11:41 *** janki has quit IRC
11:43 *** ttsiouts has joined #openstack-containers
11:45 <strigazi> I'm back
11:46 <strigazi> flwang1: ping
11:46 <flwang1> hi
11:48 *** ttsiouts has quit IRC
11:49 *** mattgo has joined #openstack-containers
12:02 *** ttsiouts has joined #openstack-containers
12:07 *** dave-mccowan has joined #openstack-containers
12:14 *** janki has joined #openstack-containers
12:16 *** mattgo has quit IRC
12:25 *** janki has quit IRC
12:25 *** janki has joined #openstack-containers
12:27 *** janki has quit IRC
12:30 *** tobberydberg has quit IRC
12:31 *** udesale has joined #openstack-containers
12:32 *** mattgo has joined #openstack-containers
12:34 *** janki has joined #openstack-containers
12:38 *** jchhatbar has joined #openstack-containers
12:40 *** janki has quit IRC
12:50 *** jchhatbar is now known as janki
13:22 *** livelace has joined #openstack-containers
13:37 *** serlex has quit IRC
13:48 *** livelace has quit IRC
14:00 *** hongbin has joined #openstack-containers
14:01 *** mattgo has quit IRC
14:05 *** markguz_ has joined #openstack-containers
14:06 *** ramishra has quit IRC
14:10 *** markguz_ has quit IRC
14:21 *** janki has quit IRC
14:34 *** mattgo has joined #openstack-containers
14:35 *** canori01 has joined #openstack-containers
15:07 <tobias-urdin> strigazi: backports for the recent fix https://review.openstack.org/#/q/status:open+project:openstack/magnum+topic:swarm-mode-f27-fix
15:07 <tobias-urdin> let me know if we don't want to backport all of them, i need rocky though, thx!
15:15 <strigazi> tobias-urdin: ack
15:15 <strigazi> tobias-urdin: isn't ocata eol?
15:17 <tobias-urdin> newton is eol, ocata is in extended maintenance
15:18 <strigazi> ok
15:18 <tobias-urdin> ocata "estimated 2018-08-27" according to releases.o.org
15:20 <openstackgerrit> Spyros Trigazis proposed openstack/magnum master: Add heat_container_agent_tag  https://review.openstack.org/612727
15:20 <strigazi> eandersson: ^^
15:36 *** ianychoi_ has joined #openstack-containers
15:40 *** ianychoi has quit IRC
15:42 *** ivve has joined #openstack-containers
15:43 <openstackgerrit> Spyros Trigazis proposed openstack/magnum master: Add heat_container_agent_tag  https://review.openstack.org/612727
15:44 <openstackgerrit> Spyros Trigazis proposed openstack/magnum master: Add heat_container_agent_tag label  https://review.openstack.org/612727
15:47 *** mattgo has quit IRC
15:51 *** ricolin has quit IRC
16:06 *** udesale has quit IRC
16:12 <eandersson> Thanks, looks a lot better strigazi
16:16 *** ttsiouts has quit IRC
16:17 *** ttsiouts has joined #openstack-containers
16:21 *** ttsiouts has quit IRC
16:35 <openstackgerrit> Julia Kreger proposed openstack/magnum master: Minor fixes to re-align with Ironic  https://review.openstack.org/612748
16:37 *** pvradu_ has quit IRC
16:56 *** dtruong has quit IRC
16:56 *** dtruong has joined #openstack-containers
17:02 *** dtruong has quit IRC
17:02 <openstackgerrit> Merged openstack/magnum master: Make master node schedulable with taints  https://review.openstack.org/608627
17:03 *** dtruong has joined #openstack-containers
17:04 *** pvradu has joined #openstack-containers
17:09 *** pvradu has quit IRC
17:20 *** salmankhan has quit IRC
17:42 *** pvradu has joined #openstack-containers
17:53 *** pvradu has quit IRC
18:35 *** irclogbot_1 has joined #openstack-containers
18:50 *** flwang1 has quit IRC
18:57 *** chhagarw has quit IRC
19:09 *** salmankhan has joined #openstack-containers
19:13 *** salmankhan has quit IRC
19:23 *** pcaruana has quit IRC
19:27 <openstackgerrit> Spyros Trigazis proposed openstack/magnum stable/rocky: Make master node schedulable with taints  https://review.openstack.org/612790
19:34 <openstackgerrit> Spyros Trigazis proposed openstack/magnum stable/rocky: Add prometheus-monitoring namespace  https://review.openstack.org/612791
20:03 <brtknr> strigazi: is there a meeting later?
20:04 <strigazi> yes https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-10-23_2100_UTC
20:05 <strigazi> I got your patch in the agenda
20:05 <strigazi> lgtm
20:06 *** openstackgerrit has quit IRC
20:37 *** openstackgerrit has joined #openstack-containers
20:37 <openstackgerrit> Merged openstack/magnum master: Add prometheus-monitoring namespace  https://review.openstack.org/600905
20:42 *** ivve has quit IRC
20:46 *** schaney has joined #openstack-containers
20:47 *** flwang has joined #openstack-containers
20:47 <flwang> strigazi: meeting in 13 mins?
20:54 <openstackgerrit> Feilong Wang proposed openstack/magnum stable/rocky: Add prometheus-monitoring namespace  https://review.openstack.org/612791
20:56 *** ttsiouts has joined #openstack-containers
20:57 <strigazi> meeting in 2'
21:00 <strigazi> #startmeeting containers
21:00 <openstack> Meeting started Tue Oct 23 21:00:11 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00 *** openstack changes topic to " (Meeting topic: containers)"
21:00 <openstack> The meeting name has been set to 'containers'
21:00 <strigazi> #topic Roll Call
21:00 *** openstack changes topic to "Roll Call (Meeting topic: containers)"
21:00 <strigazi> o/
21:00 <ttsiouts> o/
21:01 <flwang> o/
21:02 <imdigitaljim> o/
21:03 <strigazi> brtknr: you here?
21:03 <eandersson> o/
21:03 *** salmankhan has joined #openstack-containers
21:03 <strigazi> Thanks for joining the meeting ttsiouts, flwang, imdigitaljim, eandersson
21:03 <strigazi> Agenda:
21:04 <strigazi> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-10-23_2100_UTC
21:04 <strigazi> it has some items
21:04 <strigazi> #topic Stories/Tasks
21:04 *** openstack changes topic to "Stories/Tasks (Meeting topic: containers)"
21:04 <strigazi> 1. node groups https://review.openstack.org/#/c/607363/
21:05 <strigazi> I think we are pretty close to the final state of the spec
21:05 <strigazi> please take a look
21:05 <ttsiouts> strigazi: tomorrow I will push again
21:06 <ttsiouts> adapting ricardo's comments
21:06 <strigazi> oh, I thought you pushed today. ok, so take a look tmr as well guys :)
21:07 <ttsiouts> :) tmr better
21:07 <strigazi> @all do you want to discuss anything about nodegroups today?
21:07 <strigazi> questions about nodegroups?
21:08 <strigazi> ok, next then
21:08 <schaney> o/ sorry for lateness, but yes
21:09 <strigazi> schaney: hello. you have smth about nodegroups?
21:09 <schaney> any mechanism to interact with NGs individually?
21:09 <schaney> as opposed to the top-level cluster or stack
21:09 <strigazi> the api will be like:
21:10 <ttsiouts> schaney: do you mean updating a specific nodegroup?
21:10 <schaney> yes, for scaling or the like
21:10 <strigazi> cluster/<cluster-identity>/nodegroup/<nodegroup-identity>
21:10 <strigazi> so PATCH cluster/<cluster-identity>/nodegroup/<nodegroup-identity>
21:10 <ttsiouts> https://review.openstack.org/#/c/607363/2/specs/stein/magnum-nodegroups.rst@117
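
(A hypothetical sketch of the PATCH strigazi outlines; the spec was still under review at this point, so the path and the JSON-patch payload below are illustrative, not a confirmed API:)

    # Illustrative only: resize a single nodegroup via the proposed endpoint.
    # MAGNUM_URL and OS_TOKEN are assumed environment variables; magnum's
    # existing cluster PATCH uses JSON-patch, so a nodegroup PATCH might
    # plausibly look like this.
    curl -X PATCH "$MAGNUM_URL/v1/clusters/<cluster-id>/nodegroups/<nodegroup-id>" \
        -H "X-Auth-Token: $OS_TOKEN" \
        -H "Content-Type: application/json" \
        -d '[{"op": "replace", "path": "/node_count", "value": 5}]'
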
21:11 <colin-> sorry i'm late!
21:11 <strigazi> colin-: welcome
21:12 <schaney> oh gotcha, i'll have to dig into the work a bit. under the hood magnum just targets the name of the node group represented by the heat parameter though?
21:13 <flwang> ttsiouts: the node groups are basically the same thing as node pools in GKE, right?
21:13 <ttsiouts> schaney: that's the plan
21:13 <ttsiouts> flwang: exactly
21:14 <flwang> ttsiouts: cool
21:14 <schaney> i see. did there happen to be any work on the "random node scale down" when magnum shrinks the cluster?
21:14 <flwang> ttsiouts: i will review the spec first
21:14 <schaney> from the API route it would seem so?
21:14 <flwang> i think we'd probably better read the spec first and put comments in the code
21:15 <flwang> instead of discussing design details here
21:15 <schaney> good call
21:16 <ttsiouts> schaney: we want to add a CLI for removing specific nodes from the cluster
21:16 <ttsiouts> but this will come further down the way
21:16 <strigazi> schaney: this won't be covered by this spec, but we should track it somewhere else
21:16 <schaney> gotcha, thanks for the info
21:16 <ttsiouts> flwang: thanks!
21:17 <ttsiouts> flwang: tmr it will be more complete
21:17 <imdigitaljim> we're looking at modifying our driver to perhaps consume senlin clusters for each group
21:17 <imdigitaljim> masters, minions-group-1, ... minions-group-n
21:17 <flwang> imdigitaljim: is it a hard dependency?
21:17 <imdigitaljim> each would have a senlin profile/cluster in heat
21:17 <flwang> i mean for senlin
21:18 <strigazi> imdigitaljim: that could be done, that is why we have drivers
21:18 <strigazi> flwang: should be optional
21:18 <strigazi> like an alternative
21:18 <imdigitaljim> it would probably be the driver that would take on a senlin dependency (not magnum as a whole)
21:18 <imdigitaljim> just like octavia or not
21:19 <strigazi> when the cluster drivers were proposed, senlin and ansible were the reasoning behind it
21:19 <imdigitaljim> we're more focused on autoscaling/autohealing rather than cli manual scaling
21:20 <imdigitaljim> the senlin PTL is here and is actively talking to the heat PTL about managing the senlin resources in heat, so we might be able to in-house create a better opportunity for senlin + heat + magnum
21:20 <strigazi> imdigitaljim: here, like in this meeting?
21:20 <imdigitaljim> no
21:21 <imdigitaljim> sorry, i just mean he works at blizzard
21:22 <strigazi> This plan is compatible with nodegroups, and nodegroups actually make it easier
21:22 <imdigitaljim> we think so
21:23 <strigazi> I'm not aware of it in detail, but it sounds doable
21:23 <schaney> Senlin would work well with the NG layout; one thing to note is Senlin's dedicated API
21:23 <imdigitaljim> yeah, we aren't either, but we'll be working it out over the next couple weeks
21:23 <imdigitaljim> and see if it's within reason on feasibility
21:23 <eandersson> The Senlin PTL will be in Berlin btw
21:23 <imdigitaljim> ^
21:24 <strigazi> cool
21:24 <cbrumm__> It's honestly too early to be talking about; there's a lot of heat/senlin groundwork to do first
21:24 <cbrumm__> But hey, it's a thing we're thinking about.
21:24 <strigazi> fair enough
21:25 <strigazi> shall we move on?
21:26 <cbrumm__> yes
21:26 <strigazi> I pushed two patches we were talking about. one is:
21:26 <strigazi> Add heat_container_agent_tag label https://review.openstack.org/#/c/612727
21:27 <strigazi> we discussed it with flwang and eandersson already; others have a look too
21:27 <strigazi> the tag of the heat-agent was hardcoded; this makes it a label.
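
(Assuming the patch merges as proposed, usage might look like the sketch below; the tag value and the other arguments are illustrative:)

    # Sketch: pick the heat-agent tag per cluster template via the new label,
    # instead of relying on the previously hardcoded tag.
    openstack coe cluster template create k8s-template \
        --coe kubernetes \
        --image fedora-atomic-27 \
        --labels heat_container_agent_tag=rocky-stable
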
21:28 <strigazi> the 2nd one needs some discussion. it is:
21:28 <strigazi> deploy tiller by default https://review.openstack.org/#/c/612336/
21:29 <strigazi> Shall we have it in by default or optional?
21:30 <flwang> strigazi: any potential issue if we enable it by default?
21:30 <strigazi> and then the next steps are: with tls or without? a tiller per ns or with the cluster-role?
21:31 <strigazi> flwang: the user might want a different tiller config
21:31 <flwang> strigazi: that's the problem i think
21:31 <strigazi> flwang: other than that, tiller will be there, silent
21:32 <flwang> we have seen similar 'issues' with other enablings, like the keystone auth integration
21:32 <strigazi> flwang: so you are in favor of optional
21:32 <flwang> a newly enabled feature may introduce a bunch of config
21:32 <flwang> but now in magnum, we don't have a good way to maintain that config
21:33 <flwang> labels are too flexible and loose
21:33 <flwang> i prefer optional
21:33 <flwang> based on the feedback we got so far, most customers just want a vanilla k8s cluster
21:34 <flwang> if they want something, they can DIY
21:34 <strigazi> what does vanilla mean? api, sch, cm, kubelet, proxy, dns, cni
21:34 <flwang> and i agree it's because we (catalyst cloud) are a public cloud and our customers' requirements vary
21:34 <flwang> vanilla means a pure cluster, without too many plugins/addons
21:35 <cbrumm__> flwang: We've been getting some similar feedback from power users too
21:35 <flwang> for private cloud, the thing may be different
21:35 <cbrumm__> but I think that's expected from power users that are used to DIY
21:35 <flwang> most customers of k8s know how to play with it
21:36 <flwang> what they want is just a stable k8s cluster with good HA and integration with the underlying cloud provider
21:36 <flwang> cbrumm__: what do you mean by 'power users'?
21:37 <strigazi> so optional
21:37 <cbrumm__> people who've used k8s before, where it was not a managed service
21:37 <flwang> cbrumm__: ok, i see, thx
21:38 <strigazi> flwang: any argument against having this optional?
21:38 *** spsurya has quit IRC
21:39 <flwang> strigazi: TBH, I would suggest we start to define a better addon architecture
21:39 <imdigitaljim> ^
21:39 <flwang> like refactoring the labels
21:39 <imdigitaljim> that's one of the goals our driver plans to solve
21:39 <flwang> imdigitaljim: show me the code ;)
21:39 *** rtjure has quit IRC
21:40 <strigazi> so we don't add anything until we refactor?
21:40 <flwang> i have heard about the v2 driver a million times, i want to see the code :D
21:40 <flwang> strigazi: i'm not saying that
21:40 <imdigitaljim> :D when i get some free cycles and feel like it's in a great uploading spot
21:40 <flwang> i'm just saying we should be more careful
21:42 <strigazi> the current model is not unreasonable. we need to define 'careful' and not make the service a framework
21:43 <strigazi> I think for v1, which can't be refactored, only replaced/deprecated, the model of addons is there
21:44 <strigazi> labels for on/off and tags
21:44 <imdigitaljim> imo we should look into a config file
21:45 <imdigitaljim> kubernetes followed the same pattern when they realized there were too many flags
21:45 <strigazi> a config file to create the labels/fields, or a config file to pass to the cluster?
21:46 <openstackgerrit> Merged openstack/magnum stable/queens: Provide a region to the K8S Fedora Atomic config  https://review.openstack.org/612199
21:46 <imdigitaljim> openstack coe cluster template create --config myvalues.yaml
21:46 <strigazi> these values are like labels?
21:46 <imdigitaljim> could be everything
21:46 <imdigitaljim> could be just labels
21:47 <strigazi> code too?
21:47 <imdigitaljim> ?
21:47 <strigazi> or a link to code?
21:47 <imdigitaljim> instead of like --labels octavia_enabled=true, etc etc
21:47 <imdigitaljim> it could be
21:48 <imdigitaljim> [LoadBalancer]
21:48 <strigazi> got it
21:48 <imdigitaljim> octavia_enabled=true
21:48 <imdigitaljim> but you could also do
21:48 <imdigitaljim> [Network]
21:48 <imdigitaljim> floating_ ...  fixed_network= .. fixed_subnet=
21:48 <flwang> yaml or the ini format
21:49 <imdigitaljim> either or
21:49 <imdigitaljim> any
21:49 <imdigitaljim> json
21:49 <flwang> yep
21:49 <imdigitaljim> doesn't matter, however we want to do it
21:49 <flwang> agree
21:49 <imdigitaljim> imo i'm a fan of that model
21:49 <flwang> and in that case, we can publish sample config files
21:49 <strigazi> flwang: what would cover your concern about the loose label design?
21:49 <flwang> and users can decide how to combine the config
21:50 <flwang> strigazi: yep, that's basically the arch in my mind
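
(To make the idea concrete, a sketch assembled from the fragments above; the --config flag, the file name, and every key/value are hypothetical, since none of this existed in magnum at the time:)

    # Hypothetical contents of myvalues.conf (keys/values illustrative only):
    #   [LoadBalancer]
    #   octavia_enabled=true
    #
    #   [Network]
    #   fixed_network=private-net
    #   fixed_subnet=private-subnet

    # Hypothetical flag from the discussion above:
    openstack coe cluster template create my-template --config myvalues.conf
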
21:51 <strigazi> #action strigazi to draft a spec and story for creating a cluster with a config file.
21:51 <flwang> strigazi: can we discuss this one https://github.com/kubernetes/cloud-provider-openstack/issues/280 next?
21:51 <strigazi> I'll try to bring smth in the next meeting for this
21:51 <flwang> strigazi: thanks
21:52 <strigazi> before going into the bug of CPO
21:53 <strigazi> @all take a look at the rest of the reviews in the agenda. they are ready to go in
21:53 <strigazi> https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-10-23_2100_UTC
21:53 <colin-> will do
21:54 <strigazi> flwang: what might help a little is using the config drive before the metadata service
21:54 <strigazi> colin-: thx
21:55 <flwang> strigazi: does that need a change in magnum?
21:55 <flwang> imdigitaljim: did you ever see this issue https://github.com/kubernetes/cloud-provider-openstack/issues/280 ?
21:55 <strigazi> imdigitaljim, eandersson, colin-: what is your experience with CPO?
21:55 <flwang> worker nodes are missing ips
21:55 <imdigitaljim> i have not experienced this issue
21:55 <strigazi> flwang: in the config of the CPO
21:55 <flwang> imdigitaljim: probably because you're using v1.12?
21:55 <imdigitaljim> we have an internal downstream with a few patches on CPO
21:56 <imdigitaljim> we're waiting on upstream commit permission for the kubernetes org
21:56 <imdigitaljim> (blizzard admin stuff)
21:56 <imdigitaljim> we're on 1.12.1, correct
21:56 <strigazi> patches regarding this bug?
21:56 <imdigitaljim> no
21:56 <imdigitaljim> UDP support, LBaaS naming, and a LBaaS edge case
21:56 <flwang> https://github.com/kubernetes/kubernetes/pull/65226#issuecomment-431933545
21:57 <colin-> to jim, out loud just now i said "it's been better than starting from scratch" :)
21:57 <colin-> found it useful and it has definitely saved us some time, but as he said we've also found some gaps we want to address
21:57 <flwang> it seems like a very common, high-chance problem
21:58 <strigazi> colin-: you're talking about k/cpo?
21:58 <colin-> yes
21:58 <flwang> strigazi: when you say 'config of CPO', does that mean we at least have to use the cm+CPO mode?
21:59 <strigazi> flwang: https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/write-kube-os-config.sh#L12
22:00 <strigazi> it is adding [Metadata] search-order=configDrive,metadataService
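
(Written out, that one-liner from the fragment is the following section in the kube cloud-config INI, with [Metadata] as the section header and search-order as its key:)

    [Metadata]
    search-order=configDrive,metadataService
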
22:01 <flwang> strigazi: got it. but based on https://github.com/kubernetes/cloud-provider-openstack/issues/280#issuecomment-427416908
22:01 <flwang> does that mean we only need to add this one: [Metadata] search-order=configDrive,metadataService ?
22:02 <strigazi> flwang: the code is... even if you set [Metadata] search-order=configDrive, it still calls the metadata service
22:03 <strigazi> I tried with configdrive only and it was still doing calls to the APIs.
22:03 <flwang> i'm confused
22:03 <flwang> does that need any change in nova?
22:04 <flwang> i mean nova config
22:04 <strigazi> I'll end the meeting at ~1 hour and we can continue
22:04 <flwang> cool, thanks
22:04 <openstackgerrit> Merged openstack/magnum master: Minor fixes to re-align with Ironic  https://review.openstack.org/612748
22:04 <strigazi> @all thanks for joining the meeting
22:05 <strigazi> see you next week
22:05 <strigazi> #endmeeting
22:05 *** openstack changes topic to "OpenStack Containers Team"
22:05 <openstack> Meeting ended Tue Oct 23 22:05:22 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
22:05 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-10-23-21.00.html
22:05 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-10-23-21.00.txt
22:05 <openstack> Log:            http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-10-23-21.00.log.html
22:06 *** ttsiouts has quit IRC
22:06 *** ttsiouts has joined #openstack-containers
22:06 <strigazi> CPO talks to the metadata service, and uses the config drive too, if available
22:07 <flwang> ok
22:07 <strigazi> flwang: you can set the order with that flag
22:07 <flwang> but you just said it doesn't work?
22:07 <flwang> ok
22:07 <strigazi> yes, the docs claim that you can select only one
22:07 <flwang> (11:02:25) strigazi: flwang: the code is... even if you set [Metadata] search-order=configDrive, it still calls the metadata service
22:08 <flwang> does that mean CPO will call both no matter what you set?
22:08 <strigazi> I did set only the configDrive earlier today, and then I was monitoring the nova api
22:08 <flwang> then will the result from the api call overwrite the result from the configdrive?
22:08 <strigazi> I was receiving calls from the CM and the kubelet
22:08 <flwang> and then run into the problem again?
22:09 <strigazi> I haven't looked at the code, but i think it might be the same result
22:11 *** ttsiouts has quit IRC
22:11 <flwang> shit
22:12 <flwang> i have asked in https://github.com/kubernetes/kubernetes/pull/65226 to backport this to v1.11
22:12 <flwang> and many people want that in v1.11 as well
22:13 <strigazi> I think it's because of this: https://github.com/kubernetes/kubernetes/blob/13705ac81e00f154434b5c66c1ad92ac84960d7f/pkg/cloudprovider/providers/openstack/openstack_volumes.go#L503
22:13 <strigazi> it always uses the metadata service
22:14 <strigazi> it says in the comments: We're avoiding using cached metadata (or the configdrive), relying on the metadata service.
22:15 <flwang> so even though there is a config option, the code just always skips it?
22:16 <strigazi> that's my understanding
22:16 <flwang> wait
22:16 <flwang> why does https://github.com/kubernetes/cloud-provider-openstack/issues/280#issuecomment-427416908 say it works?
22:17 <strigazi> I don't know, I was trying to verify this comment
22:18 <flwang> ok, i'm leaving a comment for him
22:18 <flwang> to ask for more details
22:19 <flwang> strigazi: thank you for all your time
22:19 <flwang> always helpful
22:19 <strigazi> almost helpful :) I tried :)
22:20 <flwang> strigazi: have a good night, man
22:21 <strigazi> he replied already
22:21 <strigazi> thanks flwang, see you tmr/later today
22:22 <flwang> thanks, interesting talk
22:23 <strigazi> before going to sleep, quickly, about helm: what do you propose? opt-in works for me
22:24 *** rcernin has joined #openstack-containers
22:24 <flwang> i'd like opt-in, and then we can polish it until we figure out a reasonable shape we're all happy with
22:24 <flwang> is that OK for you?
22:25 <flwang> but it's totally not a blocker
22:25 <strigazi> ok, I'll add one more label to make this off by default
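
(The opt-in could then presumably look something like this; the label name is a guess at what the follow-up patch might use, not a confirmed interface:)

    # Hypothetical: tiller off by default, enabled per template via a label.
    openstack coe cluster template create k8s-template \
        --labels tiller_enabled=true
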
22:25 <flwang> i will need to talk about the flavor issue with you later
22:26 <flwang> imdigitaljim: still around?
22:26 <strigazi> ok then
22:26 <strigazi> have a nice day
22:26 <flwang> o/
22:27 <flwang> strigazi: btw
22:27 <imdigitaljim> flwang: yes, sorry
22:28 <flwang> do you have a few moments?
22:29 <imdigitaljim> yeah
22:31 <imdigitaljim> flwang: what's up?
22:31 <imdigitaljim> pm me as well if you want
23:02 *** threestrands has joined #openstack-containers
23:10 *** hongbin has quit IRC
