Monday, 2018-12-17

*** ivve has quit IRC00:23
*** ivve has joined #openstack-containers00:35
openstackgerritMohammed Naser proposed openstack/magnum master: wip: multinode conformance  https://review.openstack.org/62544800:46
*** kothari has quit IRC00:49
openstackgerritMohammed Naser proposed openstack/magnum master: wip: multinode conformance  https://review.openstack.org/62544800:49
openstackgerritMohammed Naser proposed openstack/magnum master: wip: multinode conformance  https://review.openstack.org/62544801:44
*** dave-mccowan has joined #openstack-containers02:07
openstackgerritMohammed Naser proposed openstack/magnum master: wip: multinode conformance  https://review.openstack.org/62544802:11
*** hongbin has joined #openstack-containers02:11
*** hongbin has quit IRC02:28
openstackgerritMohammed Naser proposed openstack/magnum master: wip: multinode conformance  https://review.openstack.org/62544802:40
*** dave-mccowan has quit IRC02:47
*** hongbin has joined #openstack-containers02:50
*** lbragstad has joined #openstack-containers03:26
*** PagliaccisCloud has quit IRC03:37
*** ramishra has joined #openstack-containers03:42
*** ykarel has joined #openstack-containers03:43
*** udesale has joined #openstack-containers03:49
*** zufar has quit IRC03:50
*** itlinux has quit IRC03:59
*** itlinux has joined #openstack-containers04:29
*** itlinux has quit IRC04:30
*** ricolin has joined #openstack-containers04:31
*** hongbin has quit IRC04:37
*** ykarel has quit IRC04:40
*** PagliaccisCloud has joined #openstack-containers04:48
*** ykarel has joined #openstack-containers05:01
*** ykarel has quit IRC05:10
*** ykarel has joined #openstack-containers05:12
*** itlinux has joined #openstack-containers05:33
*** janki has joined #openstack-containers05:36
*** rcernin has joined #openstack-containers05:46
*** rcernin has quit IRC05:47
*** rcernin has joined #openstack-containers06:09
*** rcernin has quit IRC06:09
*** rcernin has joined #openstack-containers06:09
*** rcernin has quit IRC06:09
*** itlinux has quit IRC07:09
*** itlinux has joined #openstack-containers07:24
*** yolanda has joined #openstack-containers07:50
*** ykarel is now known as ykarel|lunch07:58
*** ignaziocassano1 has joined #openstack-containers08:00
*** pcaruana has joined #openstack-containers08:18
ignaziocassano1Please, help me with magnum under queens08:29
ignaziocassano1the kubernetes cluster does not work08:30
ignaziocassano1the cloud-init output log on the master reports that the docker volume is created, but before that it reports: /var/lib/cloud/instance/scripts/part-007: line 57: /etc/etcd/etcd.conf: No such file or directory08:32
*** ykarel|lunch is now known as ykarel08:39
ignaziocassano1Anyone could help me , please ?08:40
strigaziignaziocassano1: can you show me: cluster show, cluster template show, stack show, and /var/log/cloud-init-output.log08:48
strigaziignaziocassano1: From the output, I think that the node failed to pull the etcd container.08:49
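For reference, the diagnostics strigazi asks for here, roughly as they would be run; the cluster, template, and stack names are placeholders:

    openstack coe cluster show <cluster-name>
    openstack coe cluster template show <template-name>
    openstack stack show <stack-id>
    # then, on the master VM:
    less /var/log/cloud-init-output.log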
ignaziocassano1would you like me to send the output log ? in /etc/sysconfig/heat-params the proxy is configured correctly08:55
ignaziocassano1ah ok08:55
ignaziocassano1sorry08:56
ignaziocassano1I did not read your request08:56
ignaziocassano1root@tst2-osctrl01 ~]# openstack coe cluster show rrrrrrrrrrrrrr08:58
ignaziocassano1+---------------------+------------------------------------------------------------+08:58
ignaziocassano1| Field               | Value                                                      |08:58
ignaziocassano1+---------------------+------------------------------------------------------------+08:58
ignaziocassano1| status              | CREATE_IN_PROGRESS                                         |08:58
ignaziocassano1| cluster_template_id | 3eb07fc2-dbdc-4468-9d4d-c3ebab572d2f                       |08:58
ignaziocassano1| node_addresses      | []                                                         |08:58
ignaziocassano1| uuid                | a40e5cb5-0b35-47f2-b65c-d4b5a6a48d6b                       |08:58
ignaziocassano1| stack_id            | 6ae09921-dea5-40f3-a4a7-b66b3446d4f0                       |08:58
ignaziocassano1| status_reason       | None                                                       |08:58
ignaziocassano1| created_at          | 2018-12-17T08:26:34+00:00                                  |08:58
ignaziocassano1| updated_at          | 2018-12-17T08:26:41+00:00                                  |08:58
ignaziocassano1| coe_version         | None                                                       |08:58
ignaziocassano1| labels              | {u'etcd_volume_size': u'80'}                               |08:58
ignaziocassano1| faults              |                                                            |08:58
ignaziocassano1| keypair             | opstkcsi                                                   |08:58
ignaziocassano1| api_address         | None                                                       |08:58
ignaziocassano1| master_addresses    | []                                                         |08:58
ignaziocassano1| create_timeout      | 60                                                         |08:58
ignaziocassano1| node_count          | 1                                                          |08:58
ignaziocassano1| discovery_url       | https://discovery.etcd.io/0fbe42ac761147fb3e2e46bfaec92d93 |08:58
strigaziignaziocassano1 paste.openstack.org08:58
ignaziocassano1| master_count        | 1                                                          |08:58
ignaziocassano1| container_version   | None                                                       |08:58
ignaziocassano1| name                | rrrrrrrrrrrrrr                                             |08:58
ignaziocassano1| master_flavor_id    | m1.small                                                   |08:58
ignaziocassano1| flavor_id           | m1.small                                                   |08:58
ignaziocassano1+---------------------+------------------------------------------------------------+08:58
ignaziocassano1ok09:00
ignaziocassano1sorry but it's the first time I use irc09:00
ignaziocassano1Paste #73745809:02
ignaziocassano1I connected to paste.openstack.org and pasted text09:02
ignaziocassano1it returned the above code09:02
ignaziocassano1Paste #73745909:04
ignaziocassano1Can you read my paste ?09:04
strigaziyes09:06
strigaziand /var/log/cloud-init-output.log09:07
ignaziocassano1ok09:07
ignaziocassano1just a moment09:07
*** lpetrut has joined #openstack-containers09:09
*** ricolin has quit IRC09:10
ignaziocassano1Paste #73746109:10
ignaziocassano1googling I found two other people with the same issue with queens09:17
strigazican you access docker.io from the node?09:19
strigaziit seems that the atomic pull command doesn't work09:19
openstackgerritMerged openstack/magnum master: Delete Octavia loadbalancers for fedora atomic k8s driver  https://review.openstack.org/49714409:20
ignaziocassano1I am going to try09:20
strigaziatomic install --system --system-package no --storage ostree --name server docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-109:21
ignaziocassano1In the master's bash the proxy is not set09:21
ignaziocassano1so I must set it09:21
ignaziocassano1If I set the proxy in bash I can09:23
ignaziocassano1Probably cloud-init does not inherit the proxy settings ?09:24
ignaziocassano1I am going to send a new paste09:25
*** jonaspaulo_ has joined #openstack-containers09:25
ignaziocassano1Paste #73746209:26
openstackgerritWayne Chan proposed openstack/python-magnumclient master: Update mailinglist from dev to discuss  https://review.openstack.org/62552409:28
openstackgerritYang Le proposed openstack/magnum master: Update mailing list from dev to discuss  https://review.openstack.org/62552509:28
strigaziignaziocassano1: that is the problem, proxy09:28
ignaziocassano1Why does it not inherit it ?09:29
ignaziocassano1in /etc/sysconfig/heat there is a variable with the proxy09:30
ignaziocassano1sorry : heat-params09:30
*** ttsiouts has joined #openstack-containers09:34
ignaziocassano1Strigazi: the proxy variables in /etc/sysconfig/heat-params are HTTP_PROXY and HTTPS_PROXY.....do you think uppercase is the problem ?09:36
ignaziocassano1No, it is not. http_proxy in uppercase works. The problem is it does not inherit the variable :-(09:40
ignaziocassano1Any patch, please ?09:41
strigaziignaziocassano1: you can try to unset lowercase, source with uppercase and try to pull again09:41
ignaziocassano1I did it and it reports: kubernetes-apiserver:v1.11.5-109:42
ignaziocassano1server already present09:42
ignaziocassano1curl works either with uppercase or lowercase09:43
strigaziatomic install --system --system-package no --storage ostree --name server docker.io/openstackmagnum/kubernetes-proxy:v1.11.5-109:44
ignaziocassano1it works09:45
ignaziocassano1with HTTP_PROXY and HTTPS_PROXY variables09:46
strigaziwith uppercase?09:46
strigaziand lowercase?09:46
ignaziocassano1yes09:49
ignaziocassano1both09:49
ignaziocassano1first attempt with lowercase: OK09:51
ignaziocassano1second attempt with uppercase: OK09:51
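A minimal sketch of the manual test performed above, assuming /etc/sysconfig/heat-params defines HTTP_PROXY and HTTPS_PROXY as the log shows; child processes such as atomic only see exported variables:

    unset http_proxy https_proxy          # drop any lowercase variants
    . /etc/sysconfig/heat-params          # source the uppercase proxy settings
    export HTTP_PROXY HTTPS_PROXY         # children only inherit exported variables
    atomic install --system --system-package no --storage ostree \
        --name server docker.io/openstackmagnum/kubernetes-proxy:v1.11.5-1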
*** rtjure has quit IRC09:52
ignaziocassano1strigazi: just a moment for a coffee and I come back09:54
openstackgerritBharat Kunwar proposed openstack/magnum master: Config drive support for k8s_fedora_atomic swarm_mode  https://review.openstack.org/61129109:57
ignaziocassano1here I am09:59
*** PagliaccisCloud has quit IRC10:02
ignaziocassano1Strigazi: could the atomic endpoint be mirrored to a local site ?10:04
ignaziocassano1This is a strange idea but I need it to work10:05
strigaziyou can mirror all the containers you need and use container_infra_prefix to pull from a local registry10:06
strigazicontainer_infra_prefix10:07
strigaziignaziocassano1: https://docs.openstack.org/magnum/latest/user/10:07
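A sketch of the mirroring approach strigazi describes; the registry URL is illustrative and skopeo is one of several tools that can copy the images:

    # copy each image the driver needs into a local registry
    skopeo copy docker://docker.io/openstackmagnum/etcd:v3.2.7 \
        docker://registry.example.local/openstackmagnum/etcd:v3.2.7
    # then point new cluster templates at the mirror
    openstack coe cluster template create my-template ... \
        --labels container_infra_prefix=registry.example.local/openstackmagnum/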
ignaziocassano1OK10:09
ignaziocassano1but if you read the cloud-init output, the source of /etc/sysconfig/heat-params is done after the atomic command10:10
*** lbragstad has quit IRC10:10
*** PagliaccisCloud has joined #openstack-containers10:10
ignaziocassano1So I think a patch is required10:10
strigaziwhat do you mean? what comes after?10:13
ignaziocassano1If you read cloud init output it runs atomic install --storage ostree --system --system-package no --set REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt --name heat-container-agent docker.io/openstackmagnum/heat-container-agent:rawhide10:15
ignaziocassano1then it runs . /etc/sysconfig/heat-params10:16
ignaziocassano1So, the atomic command does not get the variables10:16
ignaziocassano1Could you confirm it ?10:17
strigaziignaziocassano1: https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/configure-etcd.sh10:18
strigaziline 3 sources the params10:18
strigaziline 38 runs atomic10:18
ignaziocassano1So, it should work10:19
strigaziyes10:19
ignaziocassano1mmmm10:19
*** PagliaccisCloud has quit IRC10:21
strigazitry to put set -x in the beginning of this file and try to create a new cluster10:23
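strigazi's debugging suggestion, sketched; the site-packages path is an assumption for a Queens installation and may differ on other setups:

    # insert "set -x" after the shebang to trace every command and variable
    sed -i '2i set -x' \
        /usr/lib/python2.7/site-packages/magnum/drivers/common/templates/kubernetes/fragments/configure-etcd.sh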
ignaziocassano1the script /var/lib/cloud/instance/scripts/part-007 runs atomic and initializes the variables from /etc/sysconfig/heat-params10:23
ignaziocassano1which file ?10:23
ignaziocassano1Sorry I lost something10:23
strigazihttps://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/configure-etcd.sh10:23
ignaziocassano1That file is on git, where do I find it on my installation ?10:24
ignaziocassano1On the magnum controller ?10:24
*** salmankhan has joined #openstack-containers10:25
ignaziocassano1Yes, it is on the magnum controller10:25
ignaziocassano1Must I modify it ?10:26
ignaziocassano1I modified it10:27
ignaziocassano1let me run another cluster10:27
strigaziok10:27
ignaziocassano1it is starting10:34
ignaziocassano1would you like the new cloud-init output log ?10:34
strigaziyes, it works now?10:35
ignaziocassano1no10:35
ignaziocassano1it is waiting for the atomic command output10:35
ignaziocassano1probably it does not read the proxy variable10:36
ignaziocassano1+ mount -a10:36
ignaziocassano1+ chown -R etcd.etcd /var/lib/etcd10:36
ignaziocassano1+ chmod 755 /var/lib/etcd10:36
ignaziocassano1+ _prefix=docker.io/openstackmagnum/10:36
ignaziocassano1    10.102.186.116+ atomic install --system-package no --system --storage ostree --name=etcd docker.io/openstackmagnum/etcd:v3.2.710:36
ignaziocassano1sorry10:37
ignaziocassano1would you like a paste of the file ?10:37
strigaziignaziocassano1: if you can, paste the full output.log in paste.o.o10:38
ignaziocassano1ok10:40
ignaziocassano1http://paste.openstack.org/show/737472/10:45
ignaziocassano1Can you get it ?10:47
strigaziignaziocassano1: it doesn't work, it tries to pull it but nothing happens10:51
strigaziignaziocassano1: before atomic pull try to add:10:51
strigaziexport HTTP_PROXY=${HTTP_PROXY}10:52
strigaziexport HTTPS_PROXY=${HTTPS_PROXY}10:52
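Applied to a script fragment on the magnum controller, the workaround would look roughly like this (a local sketch, not the upstream fix; _prefix is set earlier in the fragment, as in the paste above):

    . /etc/sysconfig/heat-params           # defines HTTP_PROXY / HTTPS_PROXY
    export HTTP_PROXY="${HTTP_PROXY}"      # export them so atomic inherits them
    export HTTPS_PROXY="${HTTPS_PROXY}"
    atomic install --system-package no --system --storage ostree \
        --name=etcd "${_prefix}etcd:v3.2.7"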
*** salmankhan1 has joined #openstack-containers10:52
ignaziocassano1OK10:53
ignaziocassano1I am going to modify the file and run another cluster10:53
*** salmankhan has quit IRC10:54
*** salmankhan1 is now known as salmankhan10:54
strigazibrtknr: can you test occm with cinder volumes?10:54
ignaziocassano1I will also insert an echo instruction to verify the content of the variable10:55
brtknrstrigazi: we do not have support for cinder volumes on our cluster... there was a patch to support it but we managed to make do with manila10:55
strigazibrtknr: we use manila as well at CERN. Other sites offer cinder, like catalyst.10:56
strigazibrtknr: I couldn't mount cinder volumes with occm :(10:56
brtknrstrigazi: anything noteworthy in the logs?10:57
brtknroccm logs*10:57
brtknrstrigazi: are you using cinder-csi-plugin or cinder-flexvolume?10:59
strigazibrtknr: nothing in the logs11:00
strigazibrtknr: no, I just moved from the intree provider to external CPO. no other changes11:00
brtknrare you able to create/mount cinder volumes outside of magnum?11:00
brtknroutside of kubernetes*11:01
ignaziocassano1ehiiiiii atomic is downloading11:01
strigaziignaziocassano1: the values need to be exported then11:01
strigazibrtknr: I'm doing this https://docs.openstack.org/magnum/latest/user/#using-cinder-in-kubernetes11:02
ignaziocassano1Yes I exported them11:02
ignaziocassano1now it is in : configuring kubernetes (master)11:03
brtknrstrigazi: you may need to do this instead: https://github.com/kubernetes/cloud-provider-openstack/blob/master/examples/cinder-csi-plugin/nginx.yaml11:04
ignaziocassano1it is still there11:04
brtknrstrigazi: or this: https://github.com/kubernetes/cloud-provider-openstack/blob/master/examples/cinder-flexvolume/nginx.yaml11:05
strigaziignaziocassano1: you need to do the same in all places where atomic is used11:05
brtknrstrigazi: the way you are doing it looks like an old API11:05
strigazibrtknr: i guess so11:06
brtknrstrigazi: lemme know how it goes :)11:06
brtknrstrigazi: the second one looks closer to the current API11:07
ignaziocassano1Strigazi: do you know the name of the script I must modify ?11:08
brtknrstrigazi: we may be able to remove the `--volume-driver cinder` option altogether11:08
ignaziocassano1configure-kubernetes-master.sh, configure-kubernetes-minion.sh.... ?11:09
*** udesale has quit IRC11:10
strigaziignaziocassano1: and start-heat-container-agent11:10
ignaziocassano1ok11:17
ignaziocassano1I modified them11:17
ignaziocassano1I am going to create another cluster11:17
ignaziocassano1with your name in honor of your help11:18
ignaziocassano1all the atomic installs went fine but now the output reports:11:25
ignaziocassano1Waiting for Kubernetes API...11:25
ignaziocassano1+ curl --silent http://127.0.0.1:8080/version11:25
ignaziocassano1nothing on 8080 port11:28
ignaziocassano1it continues with curl sleep and curl on 808011:30
ignaziocassano1in the cloudinit I read Failed to enable unit: Unit file kube-apiserver.service does not exist.11:32
ignaziocassano1Failed to start kube-apiserver.service: Unit kube-apiserver.service not found.11:32
ignaziocassano1activating service kube-controller-manager11:32
ignaziocassano1Failed to enable unit: Unit file kube-controller-manager.service does not exist.11:32
ignaziocassano1Failed to start kube-controller-manager.service: Unit kube-controller-manager.service not found.11:32
ignaziocassano1activating service kube-scheduler11:32
ignaziocassano1Failed to enable unit: Unit file kube-scheduler.service does not exist.11:32
ignaziocassano1Failed to start kube-scheduler.service: Unit kube-scheduler.service not found11:32
*** PagliaccisCloud has joined #openstack-containers11:33
ignaziocassano1Oopsss... it was my error in modifying the script for the proxy11:35
ignaziocassano1I try again...sorry11:35
ignaziocassano1it finished the master and it is starting the minion11:47
ignaziocassano1strigazi ?11:47
strigaziI'll be back, if it starts the minion, it should be good11:51
ignaziocassano1minion seems to loop here:11:51
ignaziocassano1curl -sf --cacert /etc/flanneld/certs/ca.crt --cert /etc/flanneld/certs/proxy.crt --key /etc/flanneld/certs/proxy.key 'https://10.0.0.10:2379/v2/keys/atomic.io/network/config?quorum=false&recursive=false&sorted=false'11:51
ignaziocassano1+ echo 'Waiting for flannel configuration in etcd...'11:51
ignaziocassano1Waiting for flannel configuration in etcd...11:51
ignaziocassano1+ sleep 511:51
strigaziignaziocassano1: check flannel-service.sh to see if it has atomic, but you are almost there11:51
ignaziocassano1on minion ?11:52
strigaziignaziocassano1: do you pass --docker-volume-size in the CT? try without --docker-volume-size and with --docker-storage-driver overlay211:52
ignaziocassano1ahhh11:52
*** rtjure has joined #openstack-containers11:53
ignaziocassano1when I create a template I must give a flavor greater than the flavor I pass to the cluster, otherwise the docker volume is not created11:54
ignaziocassano1and an error is given in the cloud-init because no space is found on the atomic volume group :-(11:54
ignaziocassano1In any case11:55
ignaziocassano1I pass a docker-volume-size11:55
ignaziocassano1I am going without docker-volume-size and I check flannel-service.sh11:56
brtknrignaziocassano1: You're in for a win :)11:56
ignaziocassano1I hope11:57
ignaziocassano1Where do I find flannel-service.sh ?11:58
brtknrcheck /var/log/cloud-init.log12:00
brtknrits one of the startup scripts.... but it will be called something different12:00
brtknrlike 001.sh or something12:01
ignaziocassano1OK12:01
ignaziocassano1first I am trying without docker volume size ....ok ?12:01
brtknrignaziocassano1: sure...12:01
ignaziocassano1and overlay212:02
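The template change strigazi suggests, sketched with illustrative names; omitting --docker-volume-size keeps container storage on the root disk, where the overlay2 driver applies:

    openstack coe cluster template create k8s-overlay2 \
        --coe kubernetes \
        --image fedora-atomic-latest \
        --external-network public \
        --flavor m1.small \
        --network-driver flannel \
        --docker-storage-driver overlay2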
ignaziocassano1I think flannel is generated by write-network-config.sh12:08
ignaziocassano1in any case.... the master has finished12:10
ignaziocassano1the minion is starting12:11
ignaziocassano1also in this case the minion loops12:13
ignaziocassano1 curl -sf --cacert /etc/flanneld/certs/ca.crt --cert /etc/flanneld/certs/proxy.crt --key /etc/flanneld/certs/proxy.key 'https://10.0.0.18:2379/v2/keys/atomic.io/network/config?quorum=false&recursive=false&sorted=false'12:13
ignaziocassano1+ echo 'Waiting for flannel configuration in etcd...'12:13
ignaziocassano1Waiting for flannel configuration in etcd...12:13
ignaziocassano1+ sleep 512:13
ignaziocassano1in the minion cloud init output I found atomic install --storage ostree --system --system-package=no --name=flanneld docker.io/openstackmagnum/flannel:v0.9.012:17
ignaziocassano1but it worked12:17
strigazi1. you can try calico12:21
strigazishow me your cluster template12:22
ignaziocassano1I think curl is blocked because it does not have NO_PROXY12:22
ignaziocassano110.0.0.18:2379 should be the master address. Is that correct ?12:22
*** PagliaccisCloud has quit IRC12:23
strigazi10.0.0.18:2379 should be master12:23
ignaziocassano1OK12:23
ignaziocassano1If I try the curl on the minion from bash it returns exit status 22 with the proxy12:24
ignaziocassano1without it, it returns 012:25
ignaziocassano1could that be the problem ?12:25
ignaziocassano1No I think not12:27
ignaziocassano1minion is running /var/lib/cloud/instance/scripts/part-00812:27
strigazimaybe you must unset the proxy for this?12:27
ignaziocassano1and in that script all the variables are set correctly12:27
ignaziocassano1it runs the curl and goes in loop12:28
strigazimaybe after the atomic cmd you need to unset *_PROXY12:29
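strigazi's suggestion as a sketch: keep the proxy exported for the registry pulls, then drop it before the cluster-internal etcd check, which should not go through a proxy (the etcd address variable is illustrative; the log shows 10.0.0.18):

    export HTTP_PROXY HTTPS_PROXY          # still needed for the atomic pulls
    atomic install --storage ostree --system --system-package=no \
        --name=flanneld docker.io/openstackmagnum/flannel:v0.9.0
    unset HTTP_PROXY HTTPS_PROXY http_proxy https_proxy
    until curl -sf --cacert /etc/flanneld/certs/ca.crt \
          --cert /etc/flanneld/certs/proxy.crt --key /etc/flanneld/certs/proxy.key \
          "https://${ETCD_SERVER_IP}:2379/v2/keys/atomic.io/network/config"; do
        echo 'Waiting for flannel configuration in etcd...'
        sleep 5
    done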
brtknrignaziocassano1: why do you have {u'etcd_volume_size': u'80'}?12:30
*** janki has quit IRC12:31
ignaziocassano1I passed it with labels12:31
ignaziocassano1I can remove it and restart12:31
ignaziocassano1In my mind I thought etcd needed a volume12:32
ignaziocassano1Do I remove it and restart ?12:32
ignaziocassano1as far as the proxy is concerned I will try to modify /var/lib/cloud/instance/scripts/part-008 on the fly and run it12:33
ignaziocassano1ok ?12:33
strigaziI don't know by heart what part-008 is12:33
ignaziocassano1it starts the curl that generates the loop12:35
ignaziocassano1Unsetting the proxy before curl, it works12:35
ignaziocassano1But I must search on the magnum server where it is implemented and modify it12:36
ignaziocassano1do you agree ?12:36
ignaziocassano1it is in kubernetes minion shell script12:38
*** udesale has joined #openstack-containers12:39
ignaziocassano1I also will delete the label for etcd_volume_size12:40
ignaziocassano1OK ?12:41
ignaziocassano1started new cluster12:46
ignaziocassano1meanwhile I need a coffee12:46
*** lpetrut has quit IRC12:51
ignaziocassano1here I am12:51
ignaziocassano1many coffees , no food ;-(12:51
ignaziocassano1now the cloudinit on minion works fine12:53
ignaziocassano1I erased the proxy where it runs curl to get information from master12:54
ignaziocassano1the heat stack finished successfully12:54
ignaziocassano1strigazi have you abandoned me ?12:55
strigaziignaziocassano1: I'm here, so CREATE_COMPLETE?12:57
*** janki has joined #openstack-containers12:59
ignaziocassano1yes13:04
ignaziocassano1now I am going on with the documentation and the kubectl command13:04
ignaziocassano1I do not know kubernetes, so I must install kubectl on my workstation13:05
*** jonaspaulo_ has quit IRC13:10
strigaziignaziocassano1: see here step three for cluster config https://docs.openstack.org/magnum/latest/install/launch-instance.html#provision-a-kubernetes-cluster-and-create-a-deployment13:12
*** Bhujay has joined #openstack-containers13:15
ignaziocassano1kubectl -n kube-system get po13:17
ignaziocassano1pods is forbidden: User "Magnum User" cannot list pods in the namespace "kube-system"13:18
ignaziocassano1:-(13:18
strigaziyou have an older magnum client13:18
strigaziignaziocassano1: try to upgrade your client13:18
ignaziocassano1ok13:18
strigaziignaziocassano1: https://docs.openstack.org/releasenotes/magnum/queens.html#upgrade-notes13:19
ignaziocassano1which version do I need ?13:22
ignaziocassano1I installed the latest on my ubuntu 16.0413:22
ignaziocassano11.1313:22
ignaziocassano1Ahhh13:23
ignaziocassano1I also need the magnum client13:23
ignaziocassano1ok13:23
*** dave-mccowan has joined #openstack-containers13:25
*** Bhujay has quit IRC13:33
*** tbarron has joined #openstack-containers13:35
*** dave-mccowan has quit IRC13:42
*** pcaruana has quit IRC13:50
ignaziocassano1I installed magnum client 2.8.013:58
*** lbragstad has joined #openstack-containers13:58
ignaziocassano1but the command  kubectl -n kube-system get po  gives Error from server (Forbidden): pods is forbidden: User "Magnum User" cannot list pods in the namespace "kube-system"13:59
*** lbragstad has quit IRC13:59
*** ttsiouts has quit IRC14:00
*** ttsiouts has joined #openstack-containers14:00
*** ttsiouts has quit IRC14:04
*** ttsiouts has joined #openstack-containers14:04
*** lbragstad has joined #openstack-containers14:04
*** pcaruana has joined #openstack-containers14:05
brtknrdid you do `openstack coe cluster config <cluster-name>`?14:06
brtknrignaziocassano1:14:06
strigaziignaziocassano1 you need 2.9.014:07
*** dave-mccowan has joined #openstack-containers14:07
brtknrstrigazi: any luck with cinder?14:08
brtknrstrigazi: with the two new approaches?14:09
ignaziocassano1Yes14:10
ignaziocassano1I do14:10
*** dave-mccowan has quit IRC14:11
ignaziocassano1$(openstack coe cluster config strkub --dir ~/clusters/strkub)14:12
ignaziocassano1strkub is the name of the cluster14:12
strigazibrtknr: didn't try yet14:12
strigaziignaziocassano1: magnum --version must be >= 2.9.014:13
strigazior pip freeze | grep magnum14:13
ignaziocassano1aahhh14:13
ignaziocassano1but queens gives 2.8.014:13
ignaziocassano1How can I install 2.9.014:13
ignaziocassano1?14:14
ignaziocassano1on ubuntu 16 ?14:14
brtknrpip install python-magnumclient==2.9.0?14:14
ignaziocassano1OK14:14
brtknr pip install 'python-magnumclient>=2.9.0' -U14:16
ignaziocassano1Do I need a particular version of kubectl14:18
ignaziocassano1?14:18
ignaziocassano1 $(openstack coe cluster config strkub --dir ~/clusters/strkub)14:19
ignaziocassano1 export KUBECONFIG=/root/clusters/strkub/config14:19
ignaziocassano1kubectl -n kube-system get po14:20
ignaziocassano1No resources found.14:20
ignaziocassano1my config under /root/clusters/strkub is:14:23
strigaziignaziocassano1 works14:23
ignaziocassano1apiVersion: v114:23
ignaziocassano1clusters:14:23
ignaziocassano1- cluster:14:23
ignaziocassano1    certificate-authority: /root/clusters/strkub/ca.pem14:23
ignaziocassano1    server: https://10.102.186.107:644314:23
ignaziocassano1  name: strkub14:23
ignaziocassano1contexts:14:23
ignaziocassano1- context:14:23
ignaziocassano1    cluster: strkub14:23
ignaziocassano1    user: admin14:23
ignaziocassano1  name: default14:23
ignaziocassano1current-context: default14:23
ignaziocassano1kind: Config14:23
ignaziocassano1preferences: {}14:23
ignaziocassano1users:14:23
ignaziocassano1- name: admin14:23
ignaziocassano1  user:14:23
ignaziocassano1    client-certificate: /root/clusters/strkub/cert.pem14:23
ignaziocassano1    client-key: /root/clusters/strkub/key.pem14:23
strigazikubectl get all --all-namespaces14:24
ignaziocassano1yessss14:24
ignaziocassano1it works14:24
ignaziocassano1the documentation reports that get po should return something14:25
ignaziocassano1but it does not return anything14:25
ignaziocassano1 kubectl get all --all-namespaces returns a lot of things14:26
ignaziocassano1Please, two things, and sorry I am disturbing you a lot14:28
ignaziocassano1I want to be sure it works14:28
ignaziocassano1I must study kubernetes and swarm14:29
ignaziocassano1Any example to deploy a container and check if it works ?14:29
ignaziocassano1I run kubectl run nginx --image=nginx --replicas=514:30
ignaziocassano1it worked14:30
ignaziocassano1but now where do I get the address where nginx is deployed ? Is it the master node on port 80 ?14:31
ignaziocassano1second thing: should the modifications we made be the subject of a patch ?14:36
ignaziocassano1Are you a member of the magnum development team ?14:37
ignaziocassano1Or must I do something to report the changes we made so they can become patches ? I know other people who reported the same issues14:39
strigaziyou can create a story in storyboard.openstack.org for magnum and add an entry here:14:40
strigazihttps://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2018-12-18_2100_UTC14:41
strigaziyes, I'm the PTL of the magnum project14:42
strigaziyou can access a service in your cluster with kubernetes services https://kubernetes.io/docs/concepts/services-networking/service/14:43
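Following the services doc strigazi links, one minimal way to reach the nginx deployment from outside the cluster (the NodePort value is assigned by kubernetes):

    kubectl expose deployment nginx --type=NodePort --port=80
    kubectl get service nginx     # note the assigned NodePort, e.g. 80:31234/TCP
    curl http://<any-node-address>:<node-port>/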
*** munimeha1 has joined #openstack-containers14:59
ignaziocassano1OK15:00
ignaziocassano1Firstly I will call my colleague who knows kubernetes for testing. If it works fine I will post on storyboard15:04
strigaziignaziocassano1: cool, in which organization do you work?15:07
ignaziocassano1I work in Italy : www.csipiemonte.it15:07
*** spiette has quit IRC15:08
ignaziocassano1The site is also in english. We work for local government departments of Piemonte. Piemonte is a region in the north-west of Italy15:09
ignaziocassano1Is there any documentation that explains which ports must be opened between magnum external networks and controllers ?15:10
*** spiette has joined #openstack-containers15:10
ignaziocassano1I saw port 5000 for keystone and some ports for heat. It could be useful to write some documentation because many people get errors on this15:11
ignaziocassano1If you read the operators mailing list, Zufar Dhiyaulhaq is going crazy over this. I talked with him yesterday15:13
strigazihttps://docs.openstack.org/magnum/latest/install/install.html15:13
strigazithe compute instances must be able to access keystone and magnum (and, for kubernetes: nova, cinder, octavia/neutron if you want to use the LoadBalancer service type)15:15
strigazicompute instance == the virtual machines that compose the cluster15:16
*** hongbin has joined #openstack-containers15:17
ignaziocassano1Sorry, I read:15:17
ignaziocassano115:17
ignaziocassano1Important15:17
ignaziocassano1Magnum creates clusters of compute instances on the Compute service (nova). These instances must have basic Internet connectivity and must be able to reach magnum’s API server. Make sure that the Compute and Network services are configured accordingly.15:17
strigaziThe configuration you need to do depends on your deployment ^^15:19
ignaziocassano1yes, but basically compute instances need to access heat for the wait condition (for example)15:20
strigazicluster nodes need to issue tokens, talk to the magnum API, talk to nova (when using the cloud provider), talk to cinder if you want to mount volumes, pull images from docker.io or a registry internal to the organization, and heat for the wait handle and software deployments15:20
ignaziocassano1Now it is more clear15:21
ignaziocassano1strigazi I have no words to say thank you for your help. You were very very patient with me and my poor english15:24
strigaziyou are welcome15:25
ignaziocassano1where are you from ?15:25
strigaziI work at cern.ch15:26
*** ykarel is now known as ykarel|away15:26
ignaziocassano1So your skill is impressive15:27
ignaziocassano1Thank you again.15:27
ignaziocassano1bye15:29
*** pcaruana has quit IRC15:29
*** ignaziocassano1 has quit IRC15:30
strigazicheers15:30
*** ykarel|away has quit IRC15:35
*** gp-csi has joined #openstack-containers15:38
*** itlinux has quit IRC15:40
*** gp-csi has quit IRC15:40
*** pcaruana has joined #openstack-containers15:51
*** ykarel|away has joined #openstack-containers15:52
*** gp-csi has joined #openstack-containers15:56
*** udesale has quit IRC15:57
*** gp-csi has quit IRC15:58
*** gp-csi has joined #openstack-containers16:02
*** janki has quit IRC16:05
*** ykarel|away is now known as ykarel16:12
*** Bhujay has joined #openstack-containers16:16
*** pcaruana has quit IRC16:28
*** gp-csi has quit IRC16:30
*** itlinux has joined #openstack-containers16:40
colin-seems you made a big impression strigazi :)16:41
strigazicolin-: xD16:45
strigazicolin-: do you run by any chance cinder csi?16:46
*** ttsiouts has quit IRC16:56
*** Bhujay has quit IRC16:56
*** ttsiouts has joined #openstack-containers16:56
*** ttsiouts has quit IRC17:01
colin-i know we have used it, not sure if the template we're deploying today uses it for anything17:03
colin-why do you ask?17:03
dimsstrigazi : not sure if you are following along this work - https://github.com/kubernetes-sigs/cluster-api-provider-openstack (also see https://github.com/kubernetes-sigs/cluster-api)17:11
*** ramishra has quit IRC17:12
strigazidims: I do and we intend to use/try it soon. I'm following the development for the aws one too.17:16
strigazicolin-: moving to openstack-ccm is not a drop-in replacement for cinder :( lbaas works great but cinder doesn't17:17
colin-gotcha17:19
*** flwang1 has joined #openstack-containers17:19
flwang1strigazi: around?17:19
strigaziflwang1: not for long, tell me17:20
flwang1strigazi: i just saw your message on slack, do you think we should just give up the effort on CPO+Cinder and go for csi?17:20
dimsstrigazi : lxkong just got it working and will help update some of the doc around ccm + cinder17:21
strigaziflwang1: I'm interested as a developer of magnum. Unfortunately as users we're sold on cephfs and manila17:22
flwang1dims: that's good. i asked lxkong took a look yesterday. but my question is17:23
strigaziflwang1: it looks good, I was comparing it with csi-cephfs17:23
flwang1are we going to support it ?17:23
dimsflwang1 : yes, we (kubernetes/cloud-provider-openstack folks) are going to support it17:24
flwang1as strigazi said, if csi is the trend, what is our goal in maintaining it?17:24
strigaziflwang1: the things that are missing are: pass the openstack_ca, use the trustee, move things to kube-system; everything seems to be running in the default ns17:24
dimsflwang1 : it's the trend, but it's not ready yet. (and it has to work with things like cephfs and manila like strigazi pointed out)17:25
flwang1dims: cool, if that's the long term goal. we (catalyst cloud) would like to help17:25
dimsgreat!17:25
strigazidims: FYI, patch to use the external ccm: https://review.openstack.org/#/c/577477/17:26
strigaziflwang1: I have to go see you later17:27
flwang1strigazi: have a good night17:27
flwang1strigazi:  i will test lingxian's fix and add comments on the magnum patch17:27
strigaziflwang1: oh, I left you comment at the patch for k8s-keystone-auth17:27
flwang1strigazi: great, thanks17:28
strigaziflwang1:  we can do a followup patch for csi-cinder if you want17:28
strigaziflwang1: see you17:28
flwang1dims: did lxkong figure out any fix we need for CPO for cinder?17:28
flwang1strigazi: re csi-cinder, we should17:28
dimsstrigazi : ack, i got to step out for a bit will review when i get back. in general the tip missing in the doc was that kube-controller-manager's --external-cloud-volume-plugin should be set to `openstack` and the config file has to be "/etc/kubernetes/cloud-config" (exactly the same path!!) and has to be available to kubelet and kube-controller-manager17:28
dimsflwang1 : yes ... ^17:28
dims(`--cloud-provider` set to external for kubelet/kube-controller-manager and kube-apiserver needless to say)17:29
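The flag layout dims describes, sketched; the paths are exactly as he notes, and the rest of each command line is elided:

    kubelet --cloud-provider=external \
        --cloud-config=/etc/kubernetes/cloud-config ...
    kube-apiserver --cloud-provider=external ...
    kube-controller-manager --cloud-provider=external \
        --external-cloud-volume-plugin=openstack \
        --cloud-config=/etc/kubernetes/cloud-config ...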
flwang1dims: no code change?17:29
dimsnope17:30
flwang1dims: cool17:30
dimsflwang1 : we have had the test suites running for a while, just missing docs17:30
flwang1dims: very nice17:30
dimsback in a bit ...17:30
flwang1dims: ok, i have another question for you after you back17:31
flwang1dims: as for this issue https://github.com/kubernetes/cloud-provider-openstack/issues/387  just wanna confirm with you about the binary name since Magnum is going to use it for keystone auth17:34
*** ykarel is now known as ykarel|away17:37
*** PagliaccisCloud has joined #openstack-containers17:50
*** PagliaccisCloud has quit IRC17:55
dimsflwang1 : replied in issue18:02
*** PagliaccisCloud has joined #openstack-containers18:03
flwang1dims: thanks. i tested with kubectl 1.8.x which doesn't work with the old way, and with kubectl 1.11.x it worked for me with the client-keystone-auth cmd18:04
flwang1i'd like to auto-generate the kubeconfig for the magnum k8s cluster for keystone auth, so i'd like to confirm the correct name of the keystone auth client binary18:05
dimsflwang1 : i believe the name will remain, unless the kubectl stops accepting the kubeconfig with the full path of binary under "exec"18:06
flwang1dims: ok, so the convention is not very strict, right?18:06
flwang1i'm pretty sure the full name/path does work18:07
dimsi believe the convention is for finding things that augment the kubectl command line (that inject new command line params etc)18:07
flwang1dims: cool, all good. thanks for all the help18:08
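The exec stanza flwang1 refers to would look roughly like this in a kubeconfig users entry (the apiVersion shown is the alpha client-authentication API of that era, and the binary path is an assumption):

    users:
    - name: admin
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1alpha1
          command: /usr/local/bin/client-keystone-auth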
mordredooh look, it's flwang1 and dims18:08
* dims waves to mordred 18:09
* mordred waves to dims18:09
*** salmankhan has quit IRC18:09
flwang1mordred: waves from New Zealand18:09
flwang1mordred: did you forget to merge my ansible patch for Magnum? ;)18:09
mordredflwang1, dims: SO - I have a 'fun' edge case I hit with magnum18:09
mordredflwang1: uhoh. probably so, yes18:10
mordredflwang1: have a link handy?18:10
flwang1mordred: i'm listening...18:10
flwang1mordred: wait a sec18:10
mordreddims, flwang1: https://github.com/rook/rook/issues/2371 is the issue we filed with rook and has *way* more info than you'd ever want ...18:11
mordredbut the tl;dr is18:11
mordredthe flexvolume driver from rook that mounts cephfs shares winds up mounting them in the kubelet container and they don't propagate up to the host, so the pods where the filesystem is supposed to be mounted don't actually see the cephfs filesystem18:12
flwang1https://github.com/ansible/ansible/pull/4468618:12
mordredeven though /var/lib/kubelet is mounted rshared - and the tmpfs filesystems mounted for secrets DO propagate back up to the host18:12
flwang1i saw there is a last comment asking to change 2.7 to 2.8, anything else i need to change?18:12
mordredflwang1: no - I think that's it18:13
flwang1mordred: ok, i will do it today18:13
flwang1cheers18:13
flwang1mordred: as for your rook question, i saw you asked days ago, still haven't got the answer?18:14
mordredno - haven't figured it out or found any workarounds :(18:14
dimsmordred : "smells" like a implementation problem in the flex volume provider18:14
mordreddims: yeah - I pushed this up fairly blind: https://github.com/rook/rook/pull/237618:14
mordreddims: based on a similar 'smell' feeling - since the tmpfs volumes are behaving properly, it seems like the kubelet can mount things that are seen on the host18:15
mordreddims: but I don't know enough about flexvolume land or the various magics involved - so I keep hoping someone smart will say something magical :)18:15
mordredflwang1: ping me when you have that updated and I'll leave a shipit - sorry for the delay there18:18
flwang1mordred: will do it today, thanks18:20
flwang1mordred: flexvolume is out of my knowledge base, can't help, sorry18:21
*** corvus has joined #openstack-containers18:21
mordredkk. I mostly am not sure if it's an atomic thing, a magnum thing, a rook thing, or an unfortunate combo of them18:22
dimsmordred : what version of k8s and rook?18:23
flwang1mordred: are you using master Magnum?18:24
mordreddims: latest of rook - 0.9.0 - and basically just followed their getting started guide: https://www.nytimes.com/2018/12/17/science/donald-knuth-computers-algorithms-programming.html and https://rook.io/docs/rook/v0.9/ceph-filesystem.html18:24
mordredwe're using the magnum that's deployed in vexxhost18:24
mordredcorvus: remember off the top of your head the k8s version it deployed?18:25
*** ykarel|away has quit IRC18:25
corvusmordred: yes, we used kube_tag to specify it: kube_tag=v1.11.5-118:26
dimsthanks corvus18:26
dimsmordred : i see some updates to the mount code in k8s that rook imports, but rook is still vendoring in 1.11.3 ...18:26
mordreddims: oh goodie - so if it is a rook issue, it could also need an update to the vendored code18:27
dimsprobably worth going through https://github.com/kubernetes/kubernetes/commits/master/pkg/util/mount/mount.go and related files..18:28
dimsdoes your patch work?18:28
dimsassumption from before seems to be if /var/lib/kubelet has correct magic then things get visible - https://github.com/kubernetes/kubernetes/pull/4572418:29
mordreddims: I have no clue - I'm not sure how to test it - I'm guessing i'd need to build it, and then put the resulting binary down into the flexvolume paths right?18:29
mordreddims: yeah - and I verified in the magnum containerized kubelet that it's marking /var/lib/kubelet as rshared - so that part *should* be good18:30
dimsmordred : since you are using v1.11.5-1 k8s and the k8s in rook is 1.11.3 ... don't think that may be a good idea yet18:30
mordreddims: oh - so that pull is not in 1.11.5?18:31
dimsdon't see it - https://github.com/kubernetes/kubernetes/compare/v1.11.3...v1.11.518:33
dimsso ... back in a few hours. need to take young one to dentist18:33
mordreddims: awesome. thanks18:33
flwang1v1.11.5-1 is the highest version we're publishing IIRC18:38
flwang1just in case, if you need any change to the k8s config, you can just ssh into the vm, change it, and restart the k8s component to verify18:38
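A quick check for the mount-propagation flags discussed above, runnable on the host and (as flwang1 suggests, after ssh-ing into the VM) inside the containerized kubelet; the system-container name is an assumption:

    # on the Fedora Atomic host:
    findmnt -o TARGET,PROPAGATION /var/lib/kubelet
    # inside the kubelet system container (name assumed to be "kubelet"):
    runc exec kubelet findmnt -o TARGET,PROPAGATION /var/lib/kubelet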
*** flwang1 has quit IRC18:48
*** PagliaccisCloud has quit IRC19:44
*** yolanda has quit IRC21:18
*** rcernin has joined #openstack-containers21:28
*** hongbin has quit IRC22:29
*** itlinux has quit IRC22:42
*** munimeha1 has quit IRC23:20
*** itlinux has joined #openstack-containers23:34
*** absubram has joined #openstack-containers23:53
*** PagliaccisCloud has joined #openstack-containers23:57
