Monday, 2017-02-20

*** tianquan has quit IRC00:09
openstackgerritdengshaolin proposed openstack/fuxi master: Add document for volume providers  https://review.openstack.org/43574600:12
*** limao has joined #openstack-kuryr00:51
*** yedongcan has joined #openstack-kuryr01:10
*** yamamoto has joined #openstack-kuryr01:32
*** IRCFrEAK has joined #openstack-kuryr01:39
*** IRCFrEAK has left #openstack-kuryr01:39
*** tonanhngo has joined #openstack-kuryr01:40
*** tonanhngo has quit IRC01:43
*** tianquan has joined #openstack-kuryr02:07
*** tianquan has quit IRC02:11
*** tianquan has joined #openstack-kuryr02:15
*** yamamoto has quit IRC02:26
*** salv-orlando has joined #openstack-kuryr02:53
*** salv-orlando has quit IRC02:58
*** yamamoto has joined #openstack-kuryr03:26
*** yamamoto has quit IRC03:33
*** limao has quit IRC03:34
*** yuanying has joined #openstack-kuryr03:44
*** pmannidi has quit IRC03:48
*** tianquan has quit IRC04:00
*** limao has joined #openstack-kuryr04:03
*** hongbin has quit IRC04:11
*** yamamoto has joined #openstack-kuryr04:28
*** yamamoto has quit IRC04:34
*** danil has joined #openstack-kuryr04:42
*** madgoat has joined #openstack-kuryr04:45
*** madgoat has left #openstack-kuryr04:45
*** salv-orlando has joined #openstack-kuryr04:54
*** salv-orlando has quit IRC04:59
*** tianquan has joined #openstack-kuryr05:00
*** tianquan has quit IRC05:05
*** janki has joined #openstack-kuryr05:17
*** pmannidi has joined #openstack-kuryr05:22
*** jchhatbar has joined #openstack-kuryr05:23
*** janki has quit IRC05:25
*** yedongcan1 has joined #openstack-kuryr05:26
*** yedongcan has quit IRC05:29
*** yamamoto has joined #openstack-kuryr05:30
*** yamamoto has quit IRC05:36
*** tonanhngo has joined #openstack-kuryr05:42
*** tonanhngo has quit IRC05:43
*** tonanhngo has joined #openstack-kuryr05:43
*** tianquan has joined #openstack-kuryr05:46
*** salv-orlando has joined #openstack-kuryr05:55
*** salv-orlando has quit IRC06:00
danilHi, folks. Please help me with my problem. I have 3 physical machines. The first one is the OpenStack controller; it can create VMs on the 3rd machine (the compute node). I need to create containers on the 2nd machine and then join the containers' network with the VMs' network. It was decided to use kuryr. I can create a kuryr network through Docker, and I can see this network from the controller machine.06:23
danilBut every time I try to attach a container to my new network, I get an error (docker: Error response from daemon: failed to create endpoint pedantic_newton on network test_net: NetworkDriver.CreateEndpoint: vif_type(binding_failed) is not supported. A binding script for this type can't be found.)06:23
danilI get this error after running "docker run --network=test_net --rm -ti alpine /bin/sh"06:23
irenabdanil, according to the log message you shared, seems the problem is on the neutron side. What plugin/mechanism driver do you use?06:28
irenabmore specifically, what is running on the node 2 where you want to place containers?06:29
danilThere is only neutron-linuxbridge-agent.service. But even when I didn't have that one, it was the same error06:31
*** yamamoto has joined #openstack-kuryr06:33
danilAnd this is the exception which is returned by the server: http://textuploader.com/d12e206:36
irenabdanil, 'binding-failed' means that neutron was not able to identify how to bind the port (vif type), therefore the local kuryr fails to find the proper binding script06:37
*** yuanying has quit IRC06:37
*** yuanying has joined #openstack-kuryr06:37
*** yuanying has quit IRC06:38
irenabat this stage the problem is at the neutron level; you should check that the neutron port created during container creation has a valid vif_type, which in your case should be 'linux_bridge'06:38
*** yamamoto has quit IRC06:38
irenabthe neutron server is the one that fails to bind the port, so please check that your linux_bridge agent has the proper settings06:39
danilirenab, thank you. I'll check06:41
irenabdanil, you are more than welcome. Let me know if it works for you06:42
danilirenab, sure, thanks a lot06:42
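For anyone debugging the same binding_failed error later: one way to verify what irenab describes is to look at the vif_type Neutron reports on the port kuryr created for the container. The snippet below is only a minimal sketch using python-neutronclient; the credentials, auth URL and the idea of simply listing all ports are placeholder assumptions, not part of the original discussion.

    # Minimal sketch (placeholder credentials): list the Neutron ports and
    # check which vif_type the server bound for each one.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin',            # placeholder
                            password='secret',           # placeholder
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    for port in neutron.list_ports()['ports']:
        # A healthy linuxbridge deployment should report 'linux_bridge' here;
        # 'binding_failed' means the ML2 mechanism driver could not bind the port.
        print(port['id'], port['name'], port['binding:vif_type'])

If the container's port shows 'binding_failed', check the linuxbridge agent configuration (physical interface mappings) and that the agent is reported as alive by the neutron server.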
*** jchhatbar is now known as janki06:45
*** salv-orlando has joined #openstack-kuryr06:57
*** yedongcan1 has quit IRC06:59
*** yedongcan has joined #openstack-kuryr06:59
danilЖй07:05
*** yuanying has joined #openstack-kuryr07:21
*** tonanhngo has quit IRC07:39
*** ltomasbo|away is now known as ltomasbo07:43
*** saneax-_-|AFK is now known as saneax07:59
*** tianquan has quit IRC08:01
*** tonanhngo has joined #openstack-kuryr08:03
*** tonanhngo has quit IRC08:04
*** tianquan has joined #openstack-kuryr08:11
*** pcaruana has joined #openstack-kuryr08:28
*** yamamoto has joined #openstack-kuryr08:36
*** yamamoto has quit IRC08:42
*** yuanying has quit IRC08:42
*** yuanying has joined #openstack-kuryr08:43
*** yuanying has quit IRC08:46
*** yuanying_ has joined #openstack-kuryr08:47
*** garyloug has joined #openstack-kuryr09:20
*** neiljerram has joined #openstack-kuryr09:26
*** pmannidi has quit IRC09:30
*** pmannidi has joined #openstack-kuryr09:30
openstackgerritxhzhf proposed openstack/kuryr-kubernetes master: modify test-requirement according to requirements project  https://review.openstack.org/43590309:36
*** yamamoto has joined #openstack-kuryr09:38
*** yamamoto has quit IRC09:44
*** neiljerram has quit IRC09:51
*** tianquan has quit IRC09:56
mchiapperohey folks, I have a question on CNI10:03
mchiapperocurrently there is no feedback from CNI towards kube-apiserver & the controller10:05
mchiapperoso I guess kubernetes will roll back any pending operation if the kuryr CNI part fails10:06
mchiapperois that right?10:06
*** yedongcan has quit IRC10:20
*** yedongcan has joined #openstack-kuryr10:21
*** neiljerram has joined #openstack-kuryr10:23
*** yedongcan has left #openstack-kuryr10:29
*** limao has quit IRC10:30
*** tianquan has joined #openstack-kuryr10:40
*** yamamoto has joined #openstack-kuryr10:40
*** yamamoto has quit IRC10:46
*** tianquan has quit IRC11:15
openstackgerritdengshaolin proposed openstack/fuxi master: Add document for volume providers  https://review.openstack.org/43574611:21
*** yamamoto has joined #openstack-kuryr11:42
*** yamamoto has quit IRC11:47
*** tonanhngo has joined #openstack-kuryr12:05
*** salv-orlando has quit IRC12:06
*** tonanhngo has quit IRC12:06
*** vikasc has joined #openstack-kuryr12:12
*** tianquan has joined #openstack-kuryr12:15
*** saneax is now known as saneax-_-|AFK12:16
*** vikasc has quit IRC12:20
*** jermz is now known as jerms12:24
*** tianquan has quit IRC12:33
*** tianquan has joined #openstack-kuryr12:34
*** vikasc has joined #openstack-kuryr12:36
*** garyloug has quit IRC12:38
*** tianquan has quit IRC12:39
*** dougbtv has joined #openstack-kuryr12:40
*** yamamoto has joined #openstack-kuryr12:44
*** vikasc has quit IRC12:45
*** yamamoto has quit IRC12:49
*** limao has joined #openstack-kuryr13:09
*** yamamoto has joined #openstack-kuryr13:10
*** saneax-_-|AFK is now known as saneax13:14
*** limao has quit IRC13:14
*** garyloug has joined #openstack-kuryr13:14
*** limao has joined #openstack-kuryr13:14
*** danil has quit IRC13:34
*** v1k0d3n has joined #openstack-kuryr13:35
*** yamamoto has quit IRC13:37
*** janki has quit IRC13:50
*** ivc- has quit IRC14:01
*** ivc_ has joined #openstack-kuryr14:02
*** yedongcan has joined #openstack-kuryr14:03
alraddarlaAre we having the meeting?14:07
openstackgerritMerged openstack/kuryr-kubernetes master: Updated from global requirements  https://review.openstack.org/43196714:08
*** janki has joined #openstack-kuryr14:11
ivc_i think apuimedo is on ptg14:16
*** yedongcan has quit IRC14:18
*** yedongcan has joined #openstack-kuryr14:19
apuimedoivc_: I'm here now14:19
ivc_apuimedo cool you got #chair now :)14:20
ivc_on openstack-meeting-414:20
apuimedoivc_: did somebody start the meeting then?14:21
ivc_i did14:21
ivc_though I wasn't prepared to lead one on the other topics14:21
*** yedongcan has quit IRC14:29
*** yedongcan has joined #openstack-kuryr14:30
*** yedongcan has quit IRC14:37
openstackgerritMerged openstack/kuryr-libnetwork master: Move docs from kuryr to kuryr-libnetwork  https://review.openstack.org/43325014:37
apuimedomchiappero: k8s will mark the pod as failed to deploy14:38
apuimedoactually, it marks it as failed to pull the image xD14:38
apuimedoit's pretty funny14:39
mchiapperothat means it will retry, right? But will the release_vif be called then?14:42
apuimedomchiappero: at some point it gives up14:43
apuimedomchiappero: you mean whether the controller will release the port?14:43
apuimedoI don't think so14:43
apuimedoit's probably a bu14:43
apuimedo*bug14:43
*** yedongcan has joined #openstack-kuryr14:43
*** yedongcan has quit IRC14:43
*** yamamoto has joined #openstack-kuryr14:44
mchiapperoapuimedo: yes, in the controller. For macvlan there is no wait for the "active" state; however, if CNI fails we need to clean up and remove the additional allowed address pair. If it's a bug with Kubernetes then OK, we can probably just log it and wait; otherwise I think it must be addressed at some point14:46
openstackgerritIlya Chukhnakov proposed openstack/kuryr-kubernetes master: modify test-requirement according to requirements project  https://review.openstack.org/43590314:46
openstackgerritDarla Ahlert proposed openstack/kuryr-kubernetes master: Moving docs from Kuryr to Kuryr-Kubernetes  https://review.openstack.org/43325614:47
apuimedomchiappero: you should add new handling in the handler for when the pod gets into an error state14:47
apuimedoif you want to do something about it14:47
apuimedobut I think ivc_'s and my position will be that we wait for deletion14:49
mchiapperoapuimedo: ok, I don't know what k8s notifies exactly but I like it14:49
apuimedomchiappero: what I'm saying is, we don't need to clean up if it gets into the 'Error' state14:50
apuimedowe just wait until the user sees it14:50
apuimedotrashes the pod14:50
mchiapperois there any chance that the pod will be spawned again on the same VM?14:50
apuimedoand then we already handle it14:50
apuimedomchiappero: I don't know how to do that with K8s api14:51
apuimedowe should check if it can happen14:51
apuimedobut afaik you would have to make a new one14:51
ivc_mchiappero i think what apuimedo means is that we can have a separate handler that would specifically check pod.status for errors and clean up if/when necessary, but that would require some careful research i think14:55
apuimedoivc_: that's a good way to put it14:55
apuimedoI was also pointing out that since the user needs to issue API deletion commands for errored pods, we already clean up14:56
apuimedoso yes. We need to do things on error cases, but it's not urgent because the basic use case is covered AFAICT14:56
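To make the idea above concrete, here is a purely hypothetical sketch of such an error-state handler. The class shape, the annotation key and the release helper are assumed names for illustration, not the actual kuryr-kubernetes API, and nothing below was agreed in the discussion.

    # Hypothetical sketch only: react to pods that entered a failed phase and
    # release whatever Neutron resources kuryr already annotated on them.
    class PodErrorCleanupHandler(object):
        OBJECT_KIND = 'Pod'  # assumed: the watcher feeds this handler Pod events

        def __init__(self, vif_driver):
            self._vif_driver = vif_driver  # assumed driver exposing release_vif()

        def on_present(self, pod):
            phase = pod.get('status', {}).get('phase')
            annotations = pod['metadata'].get('annotations', {})
            vif_annotation = annotations.get('openstack.org/kuryr-vif')
            # Only act when the pod failed *and* kuryr already allocated a VIF.
            if phase == 'Failed' and vif_annotation:
                self._vif_driver.release_vif(pod, vif_annotation)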
*** yamamoto has quit IRC14:57
ivc_apuimedo there might be some in-progress states where we've done something but have not finished. my plan was to have some periodic job on kuryr side to cleanup things like that14:57
apuimedoyes. It's part of 'resync&cleanup'14:58
ivc_e.g. we started processing pod and created neutron-side objects but did not update annotation for some reason14:58
ivc_things may crash at any point and we should be prepared for that14:58
apuimedoivc_: agreed. I just have resync phobia14:58
apuimedowe have to be sure that we don't take down things that users create manually14:59
ivc_apuimedo i was thinking about something like 2-phase commit since we discussed 3rd party resource usage14:59
apuimedolike somebody creates some special port for kuryr14:59
*** yamamoto has joined #openstack-kuryr14:59
apuimedoand then places the annotation on the pod yaml14:59
apuimedomanually15:00
apuimedoivc_: that's also what I was thinking, because the alternative scares me15:00
ivc_apuimedo yup15:00
apuimedoI suppose you mean create 3rdparty object with everything we manage to allocate, then remove it once we manage to have the k8s annotation15:01
mchiapperoout of curiosity: is there any design decision not to have CNI-->apiserver communications?15:01
apuimedomchiappero: there is such communication15:01
apuimedobut ro15:01
mchiapperoyou mean that CNI is an observer15:02
apuimedoright15:02
apuimedoit 'watches'15:02
mchiapperook, I mean, the other way around15:02
apuimedo:-)15:02
mchiapperofor CNI to update the annotations15:02
*** tonanhngo has joined #openstack-kuryr15:02
mchiapperoI wonder whether there has simply never been such a need, or whether it was deemed bad for some design reason15:03
apuimedowe try to make cni as passive as possible. In which cases would you want it to update the api server?15:03
apuimedomchiappero: it was about separation of concerns15:03
apuimedothe least amount of different actors that try to annotate15:03
mchiapperouhm, I'm not sure I have a usecase at the moment, but it's good to know anyway15:03
ivc_apuimedo one of the ideas that come to mind is to have Pod/Service handler translate metadata into 3rd party resource15:03
mchiapperook15:03
apuimedothe fewer resourceVersion commit conflicts we'll deal with15:04
apuimedoivc_: you are talking about the two phase commit?15:04
*** tonanhngo has quit IRC15:04
ivc_apuimedo and the 3rd party resource handler would have the 'status' field (e.g. 'in-progress'/'committed')15:04
* apuimedo coping with the double conversations15:04
mchiapperofor macvlan I considered it as a sync mechanism, since the controller could set the allowed address pairs once CNI is about to succeed15:04
apuimedo:-)15:04
mchiapperolol, ok, I'll stop for a while then15:05
ivc_apuimedo yep. and the neutron interactions would be based on 3rd party resource (that would also help with other things)15:05
apuimedomchiappero: are you working on non-trunked macvlan support?15:05
mchiapperoyes, actually garyloug15:05
mchiapperobut you can talk to whoever you want :)15:05
apuimedoivc_: can you make a flow diagram of that for discussion?15:05
apuimedomchiappero: I'll be trying to push neutron people into macvlan trunking this week15:06
ivc_apuimedo ok15:06
apuimedoIn the next cycle I'd like not to need allowed address pairs15:06
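For readers unfamiliar with the allowed-address-pairs approach mentioned here: for nested macvlan the container's MAC/IP has to be whitelisted on the VM's parent Neutron port so its traffic is not dropped. The sketch below shows roughly what that Neutron call looks like; the port ID, addresses and credentials are placeholders, and this is not the actual kuryr driver code.

    # Rough sketch: add the container's MAC/IP to the parent port's
    # allowed_address_pairs; cleanup would later remove the same pair again.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',   # placeholders
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    vm_port_id = 'PARENT-PORT-UUID'  # the VM's Neutron port (placeholder)
    pairs = neutron.show_port(vm_port_id)['port']['allowed_address_pairs']
    pairs.append({'ip_address': '10.0.0.42',            # container IP (placeholder)
                  'mac_address': 'fa:16:3e:aa:bb:cc'})  # container MAC (placeholder)
    neutron.update_port(vm_port_id, {'port': {'allowed_address_pairs': pairs}})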
ivc_apuimedo in fact we have the same logic in services patch where Service handler just posts metadata to Endpoints where all the work is done15:07
mchiapperoapuimedo: cool15:07
apuimedoivc_: I noticed and loved it15:07
apuimedoit was a very clever design you got there15:07
mchiapperoapuimedo: as I said, I don't have a very good use case at the moment, just wanted to know about the preferred approach15:07
apuimedomchiappero: the preferred approach for right now15:08
apuimedois to let the on-deletion event handle the cleanup15:08
apuimedoand have the discussion about partial resources in the bigger resync/two-phase-commit/cleanup discussion15:08
apuimedoI'll try to work that discussion into the VTG15:08
mchiapperoapuimedo: ack15:10
mchiapperoapuimedo, ivc_, thanks for your replies15:11
ivc_np15:11
apuimedothanks for getting the conversation started ;-)15:12
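As an aside for readers, the two-phase-commit idea floated above could look roughly like the following. Every name here is made up purely for illustration; this is neither existing kuryr code nor a settled design.

    # Illustrative only: phase 1 records the allocation as 'in-progress' in an
    # intermediate (3rd party) resource, phase 2 marks it 'committed' once the
    # pod annotation is written, so a crash in between leaves a record that a
    # periodic cleanup job can act on.  All helpers are hypothetical.

    def handle_pod_added(k8s, neutron_driver, pod):
        record = k8s.create_third_party_resource(       # hypothetical helper
            kind='KuryrPortAllocation',
            owner=pod['metadata']['uid'],
            status='in-progress')
        vif = neutron_driver.request_vif(pod)            # Neutron-side work
        k8s.annotate_pod(pod, {'openstack.org/kuryr-vif': vif})
        k8s.update_third_party_resource(record, status='committed')

    def periodic_cleanup(k8s, neutron_driver):
        # Anything still 'in-progress' after a timeout was left half-done.
        for record in k8s.list_third_party_resources(status='in-progress'):
            neutron_driver.release_by_record(record)      # hypothetical helper
            k8s.delete_third_party_resource(record)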
mchiapperothis is probably a question for ltomasbo, but has anyone ever tried running nested containers with a remote apiserver? If so, is there anything to watch out for?15:19
ltomasbohi mchiappero, let me read the back log...15:20
mchiapperoit's unrelated to previous conversations15:21
ltomasbomchiappero, you mean having a VM (with trunk port) in a remote server,15:25
ltomasboand then create a container inside it?15:25
ltomasboI have tried the remote but non-nested case, and I'm planning to test different VMs on the same server15:26
ltomasbomchiappero, but in the demo apuimedo did last week, he actually tried containers in VMs, where the VMs were on different servers15:27
*** tonanhngo has joined #openstack-kuryr15:30
*** hongbin has joined #openstack-kuryr15:30
*** tianquan has joined #openstack-kuryr15:37
mchiapperocool15:37
mchiapperoyes, I meant the apiserver on a different machine15:38
mchiapperoand then on or more VM elsewhere15:38
mchiapperoI mean, one or more kubelet inside a VM on a separate server15:38
mchiapperommm, one (or more)  kubelet inside one (or more) VM(s)15:39
mchiapperook, better now :D15:39
*** janki has quit IRC15:47
*** salv-orlando has joined #openstack-kuryr15:49
*** tianquan has quit IRC15:52
*** david-lyle has joined #openstack-kuryr15:53
*** yamamoto has quit IRC16:01
*** limao has quit IRC16:09
*** yamamoto has joined #openstack-kuryr16:14
*** david-lyle has quit IRC16:18
*** yamamoto has quit IRC16:24
*** pcaruana has quit IRC16:54
*** tonanhngo has quit IRC16:56
*** salv-orlando has quit IRC17:03
*** v1k0d3n has quit IRC17:06
*** hongbin has quit IRC17:34
*** salv-orlando has joined #openstack-kuryr17:34
*** hongbin has joined #openstack-kuryr17:35
*** yamamoto has joined #openstack-kuryr17:49
*** tianquan has joined #openstack-kuryr17:52
*** yamamoto has quit IRC17:55
*** garyloug has quit IRC17:56
*** tianquan has quit IRC17:57
*** ltomasbo is now known as ltomasbo|away18:00
openstackgerritMerged openstack/kuryr-kubernetes master: Remove unused logging import  https://review.openstack.org/43520518:04
*** salv-orl_ has joined #openstack-kuryr18:10
*** salv-orlando has quit IRC18:13
*** hongbin has quit IRC18:23
openstackgerritOpenStack Proposal Bot proposed openstack/kuryr-libnetwork master: Updated from global requirements  https://review.openstack.org/43196618:31
apuimedoalraddarla: ping18:45
*** david-lyle has joined #openstack-kuryr18:47
*** yamamoto has joined #openstack-kuryr18:48
*** tonanhngo has joined #openstack-kuryr18:48
*** v1k0d3n has joined #openstack-kuryr18:51
alraddarlaapuimedo, hi18:51
*** tonanhngo has quit IRC18:52
*** tonanhngo has joined #openstack-kuryr18:53
*** tonanhngo has quit IRC18:53
*** tonanhngo has joined #openstack-kuryr18:54
*** david-lyle has quit IRC19:15
apuimedowhat's the status on https://review.openstack.org/#/c/433253/19:20
apuimedoalraddarla: ^^19:20
alraddarlathat's the one i need you to let me know your opinion on19:21
alraddarlaneed to know what we should do about the mitaka milestones19:21
apuimedothe depends-on: should be If0e23ffe10f798d2f9f5b37e1b996bd06971572119:21
alraddarlaaka - if you want to delete totally, update, etc.19:21
apuimedonot the review number19:21
alraddarlai know, i was just waiting to fix that and only push one patch19:21
apuimedoah, ok19:22
apuimedokeep the milestone19:22
apuimedoalraddarla: sorry it took so long to come up with an answer19:22
alraddarlano problem. i'll update that in just a little bit19:25
apuimedothanks alraddarla19:27
apuimedoping me when it's done and I'll approve19:27
*** david-lyle has joined #openstack-kuryr19:27
openstackgerritDarla Ahlert proposed openstack/kuryr master: Delete and move outdated docs  https://review.openstack.org/43325319:30
*** yamamoto has quit IRC19:31
apuimedoalraddarla: you dropped the milestone from the index19:32
alraddarladoh19:34
openstackgerritDarla Ahlert proposed openstack/kuryr master: Delete and move outdated docs  https://review.openstack.org/43325319:36
apuimedothanks alraddarla ;-)19:37
alraddarla:) no problem apuimedo19:37
apuimedoalraddarla: can you check why the gate in https://review.openstack.org/#/c/433256/ failed?19:39
alraddarlaapuimedo, it's saying 'could not install deps' for all 4 failures19:45
*** david-lyle_ has joined #openstack-kuryr19:50
*** david-lyle has quit IRC19:50
*** tianquan has joined #openstack-kuryr19:54
*** tianquan has quit IRC19:59
*** david-lyle_ has quit IRC19:59
*** yamamoto has joined #openstack-kuryr20:01
apuimedommm20:04
apuimedoodd20:04
apuimedoivc_: pin20:08
apuimedo*ping20:08
*** yamamoto has quit IRC20:12
*** yamamoto has joined #openstack-kuryr20:14
apuimedoirenab: ltomasbo|away: ivc_: I posted a review on https://review.openstack.org/#/c/433359/120:14
apuimedoIdeally driver bases would all be documented. But it's probably best to do it in a separate patch20:15
apuimedoalraddarla: I know why it fails20:15
apuimedoalraddarla: the depends-on: doesn't have the right kind of identified20:16
apuimedo*identifier20:16
openstackgerritAntoni Segura Puimedon proposed openstack/kuryr-kubernetes master: Moving docs from Kuryr to Kuryr-Kubernetes  https://review.openstack.org/43325620:24
apuimedoalraddarla: I sent a patch to fix it20:24
apuimedojust the minor commit msg edit20:24
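For readers following the Depends-On fix being discussed: Zuul/Gerrit cross-repo dependencies are expressed with the dependency's Change-Id (the I... hash apuimedo quoted earlier), not the review number. An illustrative commit-message footer would look like the lines below; the patch's own Change-Id is left as a placeholder.

    Depends-On: If0e23ffe10f798d2f9f5b37e1b996bd069715721
    Change-Id: <this patch's own Change-Id, generated by the commit hook>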
openstackgerritMerged openstack/kuryr-libnetwork master: Fix build releasenotes error  https://review.openstack.org/43344820:25
*** yamamoto has quit IRC20:31
alraddarlaapuimedo,  thanks20:38
*** salv-orl_ has quit IRC20:42
*** david-lyle has joined #openstack-kuryr20:43
*** david-lyle has quit IRC20:50
*** v1k0d3n has quit IRC20:51
*** hongbin has joined #openstack-kuryr20:51
hongbinapuimedo: hey toni, i have a patch for libnetwork: https://review.openstack.org/#/c/427923/ , that you might consider including in the release (but it is not a hard requirement)20:55
openstackgerritAntoni Segura Puimedon proposed openstack/kuryr-kubernetes master: Moving docs from Kuryr to Kuryr-Kubernetes  https://review.openstack.org/43325620:57
*** v1k0d3n has joined #openstack-kuryr20:58
*** david-lyle has joined #openstack-kuryr21:03
openstackgerritMerged openstack/kuryr-libnetwork master: avoid unnecessary neutron api call in revoke_expose_ports  https://review.openstack.org/42943221:06
*** yamamoto has joined #openstack-kuryr21:07
*** yamamoto has quit IRC21:14
*** yamamoto has joined #openstack-kuryr21:14
openstackgerritMerged openstack/fuxi master: Fix the installation guide  https://review.openstack.org/39929621:16
openstackgerritMerged openstack/kuryr-kubernetes master: Moving docs from Kuryr to Kuryr-Kubernetes  https://review.openstack.org/43325621:32
openstackgerritMerged openstack/fuxi master: Add parameter 'region_name' for novaclient  https://review.openstack.org/43065821:35
*** alyonapy has joined #openstack-kuryr21:54
*** david-lyle has quit IRC21:55
*** pmannidi has quit IRC21:56
*** pmannidi has joined #openstack-kuryr21:57
*** alyonapy has quit IRC21:59
*** yamamoto has quit IRC22:01
*** v1k0d3n has quit IRC22:03
*** alyonapy has joined #openstack-kuryr22:03
*** salv-orlando has joined #openstack-kuryr22:07
*** hongbin has quit IRC22:43
*** alyonapy has quit IRC22:44
*** alyonapy has joined #openstack-kuryr22:46
*** alyonapy has quit IRC22:57
*** alyonapy has joined #openstack-kuryr22:57
*** alyonapy has quit IRC22:57
*** alyonapy has joined #openstack-kuryr22:58
*** alyonapy has quit IRC22:58
*** pmannidi has quit IRC23:24
*** tonanhngo has quit IRC23:29
*** tonanhngo has joined #openstack-kuryr23:38
*** tonanhngo has quit IRC23:42
*** tianquan has joined #openstack-kuryr23:57

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!