Tuesday, 2019-02-26

02:10 <openstackgerrit> Feilong Wang proposed openstack/python-magnumclient master: Keystone auth support  https://review.openstack.org/623092
02:20 <flwang> jakeyip: ping
02:47 <openstackgerrit> Feilong Wang proposed openstack/python-magnumclient master: Keystone auth support  https://review.openstack.org/623092
04:25 <jakeyip> sorry, was away. what's up, flwang?
04:45 <openstackgerrit> Jake Yip proposed openstack/magnum master: python3 fix: encode content to UTF-8  https://review.openstack.org/638336
04:48 <openstackgerrit> Jake Yip proposed openstack/magnum master: python3 fix: decode binary cert data if encountered  https://review.openstack.org/638336
12:58 <brtknr> strigazi: is there a problem using '\' in passwords inside cloud-config?
13:23 <brtknr> Don't worry, I figured it out... the password needs to be inside double quotes, with the \ escaped
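
For anyone hitting the same thing, a minimal cloud-config sketch of that fix; the user and password are invented for illustration:

    #cloud-config
    users:
      - name: demo
        # inside a double-quoted YAML string the backslash must itself
        # be escaped, so this yields the literal value pass\word
        passwd: "pass\\word"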
17:44 <flwang1> brtknr: what's the node-ip for? just the fixed ip of the worker node?
17:44 <flwang1> strigazi: do we have a weekly meeting today?
17:52 <brtknr> flwang1: yes, to make sure that the kubelet is serving on a fixed ip, but it doesn't seem to be working for me
17:54 <strigazi> flwang1: We have a meeting today. See you then
18:01 <flwang1> brtknr: ok, i have no objection to that argument, but we do need some tests
18:01 <flwang1> strigazi: cool
19:01 <henriqueof> flwang1: When I add the 'ingress_controller=octavia' label to a cluster, shouldn't it install and configure the Octavia ingress controller?
19:03 <henriqueof> Does anyone know?
20:22 <brtknr> flwang: to explain what i mean by it not working for me: i can see that the node-ip is labelled correctly, but it's not respected
20:26 <brtknr> this is what I'm talking about: http://paste.openstack.org/show/746081/
20:27 <brtknr> Notice the discrepancy between InternalIP and 'alpha.kubernetes.io/provided-node-ip'
20:34 <flwang> brtknr: yep, that's interesting
20:35 <brtknr> flwang: mind you, this is only an issue when there is more than 1 interface
20:37 <flwang> brtknr: i would suggest filing a bug/story for tracking
20:38 <brtknr> flwang: i am not sure where... i am not certain whether this is a fedora atomic issue or a kubernetes issue
20:39 <flwang> brtknr: i understand, but a bug will be easier to track and to get help from others
20:39 <flwang> strigazi: seems the /resize api works well, i'm so excited
20:40 <brtknr> flwang: what was the silver bullet with the resize api? what was the missing piece?
20:40 <flwang> brtknr: we need a way to delete particular nodes from the cluster
20:41 <flwang> we can do it with the heat api, but we want to do it in magnum for the autoscaling/auto-healing stuff
20:41 <brtknr> is this a precursor to node groups?
20:42 <flwang> not really a precursor, but it will support node groups when they're ready
20:42 <flwang> i have already put a nodegroup_id in as a parameter
20:43 <flwang> brtknr: here is the patch https://review.openstack.org/#/c/638572/
20:43 <brtknr> i'm looking at it now
20:44 <flwang> brb
20:44 <brtknr> is it only usable through curl, or also through magnumclient?
20:45 <jakeyip> ui2zx9
20:48 <jakeyip> sorry, ignore me
20:48 <flwang> jakeyip: that's a good password
20:48 <flwang> brtknr: both
20:48 <jakeyip> argh
20:49 <flwang> but i haven't started the client work
20:49 <flwang> jakeyip: are you working on the py3 issue with the health check?
20:49 <jakeyip> yeah, I submitted another changeset yesterday. we need to discuss it
20:49 <henriqueof> Does adding the 'ingress_controller=octavia' label to a cluster configure the octavia ingress controller, or should I do the steps described here: https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-octavia-ingress-controller.md ?
20:51 <jakeyip> the issue is I am not sure what format the certs come in; it depends on the backend. so I need to just make them text. does that make sense?
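
A rough sketch of the normalization jakeyip is describing; the helper name is an assumption for illustration, not the actual patch:

    def ensure_text(cert_data, encoding='utf-8'):
        # cert payloads may arrive as bytes or str depending on the
        # certificate backend; normalize everything to text
        if isinstance(cert_data, bytes):
            return cert_data.decode(encoding)
        return cert_data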
20:52 <strigazi> brtknr: let's discuss the ip issue in the meeting?
20:53 <jakeyip> wondering if @eandersson_ is around?
20:53 <flwang> jakeyip: can you pls share the link?
20:53 <schaney> flwang: is there still ongoing discussion on that autoscaler thread about using instance IP vs instance ID as the nodes_to_remove parameter?
20:53 <jakeyip> https://review.openstack.org/#/c/638336/3
20:54 <flwang> schaney: i think we have decided to go with instance ID, since it's more reliable than IP
20:55 <flwang> with the new resize api, we should be able to do auto healing easily
20:59 <strigazi> #startmeeting containers
20:59 <openstack> Meeting started Tue Feb 26 20:59:26 2019 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:59 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:59 <openstack> The meeting name has been set to 'containers'
20:59 <strigazi> #topic Roll Call
20:59 <strigazi> o/
20:59 <schaney> o/
20:59 <jakeyip> o/
20:59 <brtknr> o/
21:00 <flwang> o/
21:00 <strigazi> #topic Stories/Tasks
21:00 <strigazi> 1. openstack autoscaler
21:01 <strigazi> #link https://github.com/kubernetes/autoscaler/pull/1690
21:01 <strigazi> by far the highest number of comments in the repo
21:01 <schaney> was glad to get a chance to review; any thoughts on how far off a merge is?
21:02 <flwang> schaney: are you Scott?
21:02 <schaney> will we want to wait on the magnum resize API?
21:02 <schaney> flwang: yes, that's me
21:02 <strigazi> I think it is close, if we show agreement to them
21:02 <flwang> schaney: welcome, glad to have you join us
21:02 <strigazi> from the openstack and magnum side
21:02 <schaney> :)
21:02 <strigazi> From the openstack PoV it should be ok,
21:03 <flwang> schaney: we also need work in gophercloud for the new api, so i'm not sure if we can wait
21:03 <flwang> strigazi: ^
21:03 <strigazi> since they (the CA maintainers) are ok to have two
21:03 <strigazi> What difference does it make to us?
21:03 <flwang> strigazi: yep, we can get the current one in, and propose the new magnum_manager later
21:03 <schaney> any thoughts on moving away from the API polling once the magnum implementation is complete?
21:04 <flwang> no difference for Magnum
21:04 <strigazi> if we agree on the design, implementation and direction
21:04 <strigazi> schaney: we can do that too
21:04 <schaney> awesome
21:05 <strigazi> we can leave that as a third step?
21:05 <flwang> strigazi: can we just rename the current pr to openstack_magnum_manager.go?
21:05 <strigazi> First, merge it like it is now
21:05 <flwang> and refactor it once we have the new api
21:05 <strigazi> 2nd, add the resize api
21:05 <strigazi> and then remove the polling
21:06 <strigazi> schaney: makes sense?
21:06 <strigazi> all this would happen in this cycle
21:06 <schaney> sounds good, it will be easier to start tackling specific areas once it's out there
21:06 <flwang> FWIW, i don't mind getting the current PR in as it is, and once the new /resize api is ready, we can decide what to do in CA
21:07 <strigazi> if there are no more objections to the current implementation, we can push the CA team to merge
21:08 <brtknr> it's not so much an objection, but how will the cluster API stuff affect this?
21:08 <strigazi> Is the ip vs id vs uuid thing clear?
21:08 <strigazi> brtknr: not at all
21:08 <strigazi> the cluster api will be very different
21:08 <schaney> I am not actually clear on how your templates create the IP mapping
21:08 <strigazi> like google has two implementations, one for gce and one for gke
21:09 <flwang> schaney: are you talking about this? https://review.openstack.org/639053
21:09 <strigazi> schaney: flwang: bear with me for the explanation; also, this change needs more comments in the commit message ^^
21:10 <strigazi> in heat, a resource group creates a stack of depth two
21:10 <strigazi> the first nested stack, kube_minions, has a ref_map output
21:10 <strigazi> which goes like this:
21:10 <strigazi> 0: <smth>
21:10 <strigazi> 1: <smth>
21:10 <strigazi> and so on
21:11 <strigazi> These indices are the minion-INDEX numbers
21:11 <strigazi> and the indices in the ResourceGroup
21:12 <strigazi> A RG supports removal_policies
21:12 <strigazi> which means you can pass a list of indices as a param, and heat will remove those resources from the RG
21:13 <brtknr> I am not clear on what is using the change made in https://review.openstack.org/639053 atm
21:13 <strigazi> additionally, heat will track which indices have been removed and won't create them again
21:13 <strigazi> brtknr: bear with me
21:13 <strigazi> so,
21:14 <strigazi> in the first implementation of removal policies in the k8s templates
21:14 <strigazi> the IP was used as an id in this list:
21:14 <strigazi> 0: private-ip-1
21:14 <strigazi> 1: private-ip-2
21:14 <strigazi> (or zero based :))
21:15 <strigazi> then it was changed with this commit:
21:15 <strigazi> https://github.com/openstack/magnum/commit/3ca2eb30369a00240a92c254c95bea6c7a60fee1
21:16 <strigazi> and the ref_map became like this:
21:16 <strigazi> 0: stack_id_of_nested_stack_0_depth_2
21:16 <strigazi> 1: stack_id_of_nested_stack_1_depth_2
21:16 <strigazi> and the above patch broke the removal policy of the resource group
21:17 <strigazi> meaning, if you passed a list of ips to the removal policy after the above patch,
21:17 <strigazi> heat wouldn't understand which index in the RG that ip belonged to
21:18 <strigazi> that is why it didn't work for flwang and schaney
21:18 <schaney> gotcha
21:18 <strigazi> flwang now proposes a change
21:18 <strigazi> to make the ref_map:
21:18 <strigazi> 0: nova_server_uuid_0
21:18 <strigazi> 1: nova_server_uuid_1
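
Condensed into template form, the pieces strigazi walks through above look roughly like this; an abbreviated sketch, not the literal magnum template:

    resources:
      kube_minions:
        type: OS::Heat::ResourceGroup
        properties:
          count: {get_param: number_of_minions}
          removal_policies:
            # indices or refs of the members heat should delete on update
            - resource_list: {get_param: minions_to_remove}
          resource_def:
            type: kubeminion.yaml
    outputs:
      kube_minions:
        # index -> ref map; the change under review makes each ref the
        # nova server uuid instead of the nested stack id
        value: {get_attr: [kube_minions, refs_map]}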
21:19 <strigazi> you can inspect this map in a current cluster like this:
21:19 <colin-> sorry i'm late
21:19 <strigazi> openstack stack list --nested | grep <parent_stack_name> | grep kube_minions
21:20 <strigazi> and then show the nested stack of depth 1
21:20 <strigazi> you will see the ref_map
21:20 <strigazi> eg:
21:21 <brtknr> `openstack stack list --nested` is a nice trick!
21:21 <brtknr> til
21:22 <strigazi> http://paste.openstack.org/show/746304/
21:22 <strigazi> this is with the IP ^^
21:22 <brtknr> i've always done `openstack stack resource list k8s-stack --nested-depth=4`
21:23 <eandersson_> o/
21:23 <strigazi> http://paste.openstack.org/show/746305/
21:23 <strigazi> this is with the stack_id
21:23 <imdigitaljim> \o
21:23 <strigazi> check uuid b4e8a1ec-0b76-48cb-b486-2a5459ea45d4
21:24 <strigazi> in the ref_map and in the list of stacks
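
To reproduce that inspection on a live cluster, something like this should work (stack names and ids are illustrative):

    openstack stack list --nested | grep <parent_stack_name> | grep kube_minions
    openstack stack show <kube_minions_stack_id> -c outputs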
21:24 <imdigitaljim> i like the new change to uuid =)
21:24 <flwang> imdigitaljim: yep, uuid is more reliable than ip in some cases
21:24 <strigazi> after said change, we will see the nova uuid there
21:25 <strigazi> so, in heat we can pass either the server uuid or the index
21:25 <strigazi> then heat will store the removed ids here:
21:26 <strigazi> http://paste.openstack.org/show/746306/
21:26 <strigazi> makes sense?
21:27 <jakeyip> sounds good to me
21:27 <schaney> yep! the confusion on my end was the "output" relationship to the removal policy member
21:28 <schaney> and the nested stack vs the resource representing the stack
21:28 <schaney> makes sense now though
21:29 <strigazi> I spent a full morning with thomas on this
21:29 <brtknr> do you need https://review.openstack.org/639053 for resize to work?
21:29 <strigazi> brtknr: yes
21:30 <strigazi> brtknr: to work by giving nova uuids
21:30 <brtknr> as it doesn't seem linked on gerrit as a dependency
21:30 <jakeyip> in https://github.com/openstack/magnum/commit/3ca2eb30369a00240a92c254c95bea6c7a60fee1 the name of the key is OS::stack_id; does that need to change, or will it be confusing if we use it for something else?
21:31 <strigazi> jakeyip: i don't think we have an option there
21:31 <strigazi> it needs to be explained well
21:31 <brtknr> yes, probably better to call it nova_uuid?
21:31 <flwang> brtknr: because i assume https://review.openstack.org/639053 will be merged very soon
21:31 <brtknr> or OS::nova_uuid?
21:31 <flwang> but the resize patch may take a bit longer, sorry for the confusion
21:31 <strigazi> nova_uuid or another name doesn't work
21:32 <strigazi> not sure if OS::nova_uuid makes sense to heat
21:32 <strigazi> (to me it does)
21:32 <brtknr> oh okay! i didn't realise it was a component of heat
21:33 <strigazi> brtknr: needs to be double checked
21:33 <strigazi> the important part is that the ref_map I mentioned before has the correct content
21:33 <brtknr> https://docs.openstack.org/heat/rocky/template_guide/composition.html
21:33 <brtknr> sounds like we're stuck with OS::stack_id
21:34 <strigazi> yeap
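
Per that composition doc, a nested template can override what its parent sees as its id by declaring an output named OS::stack_id, which is how the refs_map entries can become nova server uuids; the resource name here is assumed:

    # in the minion template (sketch)
    outputs:
      OS::stack_id:
        value: {get_resource: kube-minion}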
21:34 <flwang> we should move on and discuss the details on the patch
21:34 <strigazi> with comments in the code, it should be ok
21:35 <flwang> strigazi: i will update the patch based on the above discussion
21:35 <strigazi> schaney and colleagues, flwang, brtknr: we have agreement, right?
21:35 <brtknr> +1
21:35 <strigazi> imdigitaljim: colin- eandersson_ ^^
21:36 <colin-> on the UUID portion?
21:36 <schaney> yeah, UUID will work well for us
21:36 <strigazi> yes
21:36 <colin-> lgtm
21:36 <imdigitaljim> thanks for the clarity, uuid looks good and works for us
21:36 <strigazi> \o/
21:37 <jakeyip> I don't have objections, but I would like to read the patch first; I am a bit confused whether stack_id is the same as nova_uuid, or whether you can get one from the other
21:37 <strigazi> jakeyip: they are different
21:38 <strigazi> but that stack_id logically corresponds to that nova_uuid
21:38 <jakeyip> if we can derive nova_uuid from stack_id, should we do that instead?
21:38 <eandersson_> sounds good
21:39 <strigazi> jakeyip: well, it is the other way round
21:39 <strigazi> jakeyip: derive the stack_id from the nova_uuid
21:39 <imdigitaljim> well, the stack contains the nova server
21:39 <imdigitaljim> so it makes sense to use the stack anyway
21:39 <strigazi> jakeyip: the CA or the user will know which server they want to remove
21:40 <jakeyip> I thought the stack will have a nova_server with the uuid
21:40 <strigazi> imdigitaljim: that is correct, but the user or the CA will know the uuid
21:40 <imdigitaljim> you mean the nova uuid?
21:40 <imdigitaljim> strigazi: ^
21:40 <strigazi> yes
21:41 <strigazi> eg when you do kubectl describe node <foo> you see the nova_uuid of the server
21:42 <imdigitaljim> yeah, but it's in the autoscaler; i'm missing why the user's knowledge even matters
21:42 <strigazi> jakeyip: also, to be clear, the nova_uuid won't replace the stack uuid; the stack will still have its uuid
21:42 <imdigitaljim> either way, i'm happy with the approach
21:42 <jakeyip> yes. but can whichever code just look up the stack id and then the OS::Nova::Server uuid of that stack?
21:42 <imdigitaljim> good choices
21:43 <strigazi> imdigitaljim: for example, with the resize API there are cases where a user wants to get rid of a node
21:43 <brtknr> oh yeah, you're right, it's under `ProviderID: openstack:///231ba791-91ec-4540-a580-3ef493e36055`
21:43 <imdigitaljim> ah, fair point
21:43 <imdigitaljim> good call
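
For anyone scripting against that, the uuid can be pulled straight from the node object; the node name is illustrative:

    kubectl get node <foo> -o jsonpath='{.spec.providerID}'
    # -> openstack:///231ba791-91ec-4540-a580-3ef493e36055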
21:44 <strigazi> jakeyip: can you imagine the user frustration? additionally, in the autoscaler:
21:44 <strigazi> the CA wants to remove the nova server with uuid A.
21:45 <strigazi> then the CA needs to call heat and list all nested stacks to find which stack this server belongs to
21:46 <schaney> it saves a gnarly reverse lookup =)
21:46 <colin-> certificate authority?
21:46 <jakeyip> that's just code? I am more worried about hijacking a variable that used to mean one thing and making it mean another
21:46 <colin-> sorry, not following the thread of convo
21:46 <brtknr> colin-: cluster autoscaler?
21:46 <strigazi> maybe it can be done with a single list? or with an extra map we maintain as a stack output
21:46 <colin-> oh
21:46 <colin-> that might get tricky :)
21:46 <imdigitaljim> CAS maybe haha
21:46 <strigazi> colin-: I got used to it xD
21:47 <flwang> should we move to the next topic?
21:47 <flwang> strigazi: i'd like to know about the rolling upgrade work and the design of the resize/upgrade api
21:48 <imdigitaljim> could you include both server_id and stack_id in the output and use that as a reference point?
21:48 <imdigitaljim> is that a thing?
21:48 <strigazi> imdigitaljim: I don't think so
21:48 <jakeyip> agree with flwang, maybe we discuss this straight after the meeting
21:48 <flwang> i'm thinking about whether we should use POST instead of PATCH for actions, and whether we should follow the actions api design of nova/cinder/etc
21:48 <imdigitaljim> would be interesting to test
21:48 <imdigitaljim> if the heat update can target either one
21:48 <imdigitaljim> just like if ip AND stackid are both present
21:49 <imdigitaljim> because then it would be a trivial problem
21:49 <strigazi> it is a map, so I don't think so
21:50 <strigazi> flwang: what do you mean?
21:50 <jakeyip> flwang: is there a review with this topic?
21:50 <imdigitaljim> strigazi: is the key it looks for OS::stack_id?
21:51 <strigazi> resize != upgrade
21:51 <imdigitaljim> (i'm new to some of this convo)
21:51 <brtknr> https://storyboard.openstack.org/#!/story/2005054
21:51 <strigazi> imdigitaljim: the key is stack_id
21:51 <imdigitaljim> kk
21:51 <imdigitaljim> thanks
21:52 <strigazi> flwang: can you explain more about PATCH vs POST?
21:52 <flwang> strigazi: nova/cinder use an api like <id>/action, with the action name in the post body
21:52 <strigazi> flwang: we can do that
21:52 <flwang> so, two points: 1. should we use POST instead of PATCH; 2. should we follow the post body design of nova/cinder, etc
21:52 <strigazi> flwang: in practice, we won't see any difference
21:53 <strigazi> pointer on 2.?
21:53 <flwang> strigazi: i know, but i think we should make openstack like one building, instead of building blocks with different designs
21:53 <imdigitaljim> well
21:53 <imdigitaljim> following a more restful paradigm,
21:53 <imdigitaljim> patch is practically the only appropriate option
21:53 <imdigitaljim> https://fullstack-developer.academy/restful-api-design-post-vs-put-vs-patch/
21:53 <flwang> strigazi: https://developer.openstack.org/api-ref/compute/?expanded=rebuild-server-rebuild-action-detail,resume-suspended-server-resume-action-detail
21:53 <imdigitaljim> something like this
21:54 <flwang> imdigitaljim: openstack does have some guidelines about api design
21:54 <flwang> but the thing i'm discussing is a bit different from the method question
21:55 <jakeyip> flwang: pardon my ignorance, what is the difference between this and the PATCH at https://developer.openstack.org/api-ref/container-infrastructure-management/?expanded=update-information-of-cluster-detail#update-information-of-cluster
21:55 <flwang> jakeyip: we're going to add a new api, <cluster id>/actions, for upgrade and resize
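
To make that concrete: following the nova-style actions convention flwang cites, the request would look roughly like this; the field names are assumptions based on the resize discussion above, not a settled schema:

    POST /v1/clusters/<cluster_uuid>/actions
    {
        "resize": {
            "node_count": 3,
            "nodes_to_remove": ["231ba791-91ec-4540-a580-3ef493e36055"],
            "nodegroup": "<nodegroup_id>"
        }
    }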
21:56 <strigazi> imdigitaljim: flwang: I agree with flwang, we can follow a similar pattern to other projects.
21:56 <imdigitaljim> i also agree with following similar patterns to other projects
21:56 <imdigitaljim> just making sure we understand them =)
21:56 <flwang> imdigitaljim: thanks, and yes, we're aware of the http method differences
21:57 <jakeyip> flwang: for resize, will it be something in addition to the original PATCH function?
21:57 <flwang> and here, upgrade/resize are really not a normal update of the resource (the cluster, here)
21:57 <strigazi> personally I prefer patch, but for the data model we have there is no real difference, at least IMO
21:57 <brtknr> flwang: although nova seems to use PUT for update rather than PATCH or POST
21:57 <flwang> in both the resize and upgrade cases we're doing node replacement, deleting, adding new, etc
21:58 <flwang> brtknr: yep, but i think that's a historical issue
21:58 <strigazi> brtknr: also, put is used for properties/metadata only
21:58 <flwang> when we say PATCH, it's most likely a normal partial update of the resource
21:59 <flwang> but those actions really go beyond that
21:59 <strigazi> I might add that they are "to infinity and beyond"
21:59 <flwang> strigazi: haha, buzz lightyear fans here
22:00 <brtknr> hmm, i'd vote for PATCH, but there is not much precedent in other openstack projects... i wonder why
22:00 <jakeyip> I feel POST is good. PUT/PATCH are more restrictive. It's much easier to refactor POST into PATCH/PUT later if it makes sense, but not the other way round
22:01 <jakeyip> since we don't have a concrete idea of how it is going to look, POST lets us get on with it for now
22:01 <flwang> yep, we can discuss it on the patch
22:01 <flwang> we're running out of time
22:01 <flwang> strigazi: ^
22:02 <strigazi> yes,
22:02 <strigazi> just very quickly, brtknr: can you mention the kubelet/node-ip thing?
22:02 <imdigitaljim> post makes sense for these scaling operations
22:02 <imdigitaljim> but maybe patch if versions are updated or anything?
22:03 <brtknr> strigazi: yes, it's been bugging me for weeks; my minion InternalIP keeps flipping between the ip addresses it has been assigned on 3 different interfaces...
22:03 <strigazi> I think we can drop the node-ip, since the certificate has only one ip
22:03 <brtknr> I have a special setup where each node has 3 interfaces: 1 for the provider network, 1 high throughput and 1 high latency
22:04 <brtknr> however, assigning node-ip is not working
22:04 <colin-> whose poor app gets the high latency card XD?
22:04 <brtknr> colin-: low latency :P
22:04 <colin-> sorry, understand we're short on time
22:05 <brtknr> i have applied the --node-ip arg to kubelet and the ip doesn't stick; the ips still keep changing
22:05 <brtknr> the consequence of this is that pods running on those minions become unavailable for the duration that the ip is on a different interface
22:06 <brtknr> my temporary workaround is making the order in which kube-apiserver resolves hosts Hostname,InternalIP,ExternalIP
22:06 <strigazi> brtknr: I thought it might be simpler :) we can discuss it tmr or in storyboard/the mailing list?
22:06 <imdigitaljim> random question
22:06 <brtknr> It was InternalIP,Hostname,ExternalIP
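
That ordering is presumably the kube-apiserver --kubelet-preferred-address-types flag; a sketch of the reordering brtknr describes:

    # before: the (flapping) InternalIP wins
    --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
    # workaround: prefer the stable hostname
    --kubelet-preferred-address-types=Hostname,InternalIP,ExternalIP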
22:06 <imdigitaljim> https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
22:06 <imdigitaljim> --address 0.0.0.0
22:06 <imdigitaljim> do you bind a specific address?
22:07 <brtknr> imdigitaljim: yes, i bound it to the node IP
22:07 <imdigitaljim> gotcha
22:07 <brtknr> it was already bound to the node ip
22:07 <brtknr> by default
22:07 <imdigitaljim> just curious how that is all done with multi-interface
22:07 <colin-> personally curious how kube-proxy or similar would handle such a setup and rule/translation enforcement etc
22:07 <brtknr> is there any reason why we can't do Hostname,InternalIP,ExternalIP ordering by default?
22:07 <imdigitaljim> https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/
22:08 <imdigitaljim> same with kube-proxy?
22:08 <imdigitaljim> do you do anything different here?
22:09 <brtknr> I haven't touched the kube-proxy settings because I couldn't find them
22:09 <openstackgerrit> Feilong Wang proposed openstack/magnum master: [WIP] Support <cluster>/actions/resize API  https://review.openstack.org/638572
22:09 <imdigitaljim> --bind-address 0.0.0.0     Default: 0.0.0.0
22:09 <imdigitaljim> maybe?
22:09 <strigazi> brtknr: /etc/kubernetes/proxy
22:09 <imdigitaljim> check this out?
22:09 <strigazi> in magnum it has the default
22:10 <imdigitaljim> which is all interfaces?
22:10 <strigazi> yes
22:10 <imdigitaljim> shouldn't it be node-only here?
22:10 <strigazi> for proxy?
22:10 <imdigitaljim> i guess given what he's doing with his interfaces
22:10 <brtknr> oh okay, i'll try adding --bind-address=NODE_IP
22:10 <brtknr> to /etc/kubernetes/proxy
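
On the Fedora Atomic images, /etc/kubernetes/proxy is a sysconfig-style file, so the change would presumably look like this (variable name per the stock kubernetes sysconfig; the ip is illustrative):

    # /etc/kubernetes/proxy
    KUBE_PROXY_ARGS="--bind-address=10.0.0.5"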
22:11 <imdigitaljim> i'm just curious, i don't have a solution
22:11 <colin-> failing that, i'd try imdigitaljim's suggestion of wildcarding it
22:11 <imdigitaljim> but maybe worth a shot
22:11 <colin-> just for troubleshooting
22:11 <brtknr> wildcarding?
22:11 <colin-> oh, that may be the default, my mistake
22:11 <colin-> 0.0.0.0/0
22:12 <brtknr> colin-: how would that help?
22:12 <brtknr> according to the docs, 0.0.0.0 is already the default
22:12 <brtknr> https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/
22:12 <brtknr> for --bind-address
22:12 <strigazi> brtknr: colin-: imdigitaljim: let's end the meeting and just continue?
22:13 <brtknr> sure, this is not going to be resolved very easily :)
22:13 <strigazi> thanks
22:13 <imdigitaljim> yeah, i'm just throwing out ideas
22:13 <imdigitaljim> maybe a few things to think about/try
22:14 <imdigitaljim> maybe to get brtknr unstuck
22:14 <strigazi> flwang: brtknr: jakeyip: schaney: colin-: eandersson: imdigitaljim: thanks for joining and for the discussion on the autoscaler.
22:14 <imdigitaljim> yeah, thanks for clearing up some stuff =)
22:14 <imdigitaljim> looking forward to the merge
22:15 <strigazi> :)
22:15 <strigazi> #endmeeting
22:15 <openstack> Meeting ended Tue Feb 26 22:15:19 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
22:15 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/containers/2019/containers.2019-02-26-20.59.html
22:15 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/containers/2019/containers.2019-02-26-20.59.txt
22:15 <openstack> Log:            http://eavesdrop.openstack.org/meetings/containers/2019/containers.2019-02-26-20.59.log.html
22:15 <brtknr> nice, i'm excited about all the developments!
22:15 <eandersson> same!
22:15 <jakeyip> thanks all!
22:16 <brtknr> i am keen to get more closely involved in development of nodegroups
22:16 <flwang> brtknr: pls go for it
22:16 <brtknr> i remember there is a blueprint for that somewhere
22:16 <flwang> strigazi: do you still have a moment?
22:16 <flwang> brtknr: there is already a patch somewhere
22:17 <brtknr> flwang: do you know why it's stuck?
22:17 <strigazi> flwang: let's do it quickly, I still need to review a paper :)
22:18 <flwang> strigazi: i'd like to understand the rolling upgrade progress and design
22:18 <strigazi> brtknr: for NGs, a colleague of mine is working on it; I push him to push :)
22:18 <strigazi> flwang: do you have a question?
22:18 <flwang> if we want to do base os image upgrades and k8s upgrades...
22:19 <flwang> strigazi: is the functionality patch ready for review?
22:20 <strigazi> I think we discussed the design before. The upgrade, per the API, is controlled by CTs (cluster templates)
22:21 <strigazi> if you change the image OS, the server will be rebuilt
22:21 <flwang> where do we control the cordon/drain order?
22:21 <strigazi> let's take it one by one?
22:21 <strigazi> start from the API
22:22 <strigazi> the api has the CT and nodegroup as payload
22:22 <strigazi> is that clear enough?
22:22 <flwang> yes
22:22 <flwang> and the conductor will finally get the call
22:22 <strigazi> correct
22:23 <flwang> does that mean the conductor will control the order and do the orchestration?
22:23 <strigazi> let's start with the easy scenario, which is to upgrade the containers on the nodes
22:23 <flwang> you mean the k8s components here, right?
22:24 <strigazi> initially yes
22:24 <strigazi> but the same applies to other containers too
22:24 <strigazi> like flannel
22:24 <strigazi> or coredns
22:24 <flwang> yep
22:24 <flwang> those are the containers running on top of k8s
22:25 <strigazi> the conductor, as in any other operation, will just contact heat, and a software deployment will be triggered.
22:26 <strigazi> the script of the deployment will upgrade whatever it contains: do kubectl apply, atomic update or other commands
22:27 <flwang> so for the k8s rolling upgrade case, after calling heat stack update, the heat-container-agent will call upgrade.sh to do the magic?
22:27 <strigazi> so if you pass a CT that has only version changes, it will change only what I mentioned
22:28 <strigazi> yes, as it does with cluster creation now
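
Sketched as a per-node sequence, the flow strigazi describes would look something like this; an illustration of the idea, not the actual upgrade script:

    kubectl drain "${NODE_NAME}" --ignore-daemonsets   # evict pods first
    # swap the system containers to the new tag, e.g. via "atomic pull" /
    # "atomic containers update" on Fedora Atomic
    kubectl uncordon "${NODE_NAME}"                     # put the node back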
22:30 <strigazi> this is what I'm doing at the moment
22:30 <flwang> cool, will you post a new patch set this week for the functionality patch?
22:30 <strigazi> tmr morning, like in ~11 hours
22:30 <flwang> fantastic
22:30 <flwang> now i'm polishing the resize api
22:30 <strigazi> I was doing this all day
22:31 <flwang> so i'm keen to get agreement on the api design
22:31 <brtknr> strigazi: is your colleague actively working on NG, btw?
22:31 <flwang> combined with the upgrade api
22:31 <strigazi> brtknr: yes
22:31 <strigazi> you can ping him on gerrit
22:31 <strigazi> brtknr: https://review.openstack.org/#/q/owner:theodoros.tsioutsias%2540cern.ch+status:open
22:32 <brtknr> cool, thanks!
22:33 <flwang> strigazi: if we go the way of clusterID/actions with {'upgrade': {}} in the body
22:33 <strigazi> brtknr: start from: https://review.openstack.org/#/c/604823/ https://review.openstack.org/#/c/604824/
22:33 <flwang> then we just need one API patch for both resize and upgrade
22:34 <strigazi> flwang: interesting
22:35 <strigazi> I'll take a look in the morning
22:35 <flwang> strigazi: that's the design in nova/cinder/manila/senlin, etc
22:35 <flwang> cool, let me know your thoughts later, cheers
22:35 <strigazi> cheers
22:36 <flwang> i'm so excited about all this good work we're doing
22:36 <strigazi> :)
22:36  * strigazi has signed off
22:37 <brtknr> flwang: :)
22:38 <brtknr> I am currently managing upgrades using an ansible playbook... a native upgrade mechanism would be pretty cool
22:42 <flwang> brtknr: for our case, we can't call our k8s service beta without rolling upgrade support
22:43 <imdigitaljim> i'm currently working on a simple kickoff mechanism for kubernetes to manage the in-place upgrade
22:43 <imdigitaljim> it is our intention to share it too, when we're satisfied with the work
22:43 <imdigitaljim> we're looking forward to putting out the whole driver with everything ready
22:43 <brtknr> what's the diff between an upgrade and a rolling upgrade?
22:44 <imdigitaljim> well, i'm not sure what context you're referring to, but when the upgrades are done, what they do is roll through each node
22:44 <imdigitaljim> and swap it out with a newer version
22:44 <flwang> rolling generally means one by one, with no downtime for services
22:44 <imdigitaljim> the roll meaning: the old one goes offline, the new one comes online
22:45 <imdigitaljim> yeah, to maintain no downtime
22:45 <brtknr> oh ok
22:45 <imdigitaljim> instead of all offline, swap out, all back online with new versions
22:45 <imdigitaljim> or a full k8s rebuild
22:45 <brtknr> gotcha
22:45 <brtknr> i think i am doing a non-rolling upgrade atm :P
22:45 <brtknr> in that case
22:45 <imdigitaljim> do you basically just rebuild the clusters?
22:46 <brtknr> no, i run the ansible playbook in parallel across all the nodes
22:46 <brtknr> not one-by-one
22:46 <brtknr> and upgrade the kubernetes-related containers
22:46 <imdigitaljim> what do you do with them?
22:46 <imdigitaljim> just change their versions/download new containers?
22:47 <brtknr> this: https://github.com/stackhpc/p3-appliances/blob/master/ansible/container-infra-upgrade.yml
22:47 <brtknr> yep, basically
22:47 <imdigitaljim> so if there are any config changes, that probably won't work without manual changes from version to version, no?
22:48 <imdigitaljim> like new required ones that you don't already have, or removed old ones that are no longer present in the new version
22:48 <imdigitaljim> not just a reconfig
22:48 <brtknr> yeah, that's true, but so far i've not had any problems
22:48 <brtknr> the api is pretty backwards compatible
22:49 <brtknr> the k8s api*
22:49 <brtknr> i'm running queens
22:49 <brtknr> so i started at 1.9.3 and now i'm running 1.13.2
22:49 <brtknr> i can downgrade to earlier versions at any point
22:52 <brtknr> i miss out on things like occm for particular versions of k8s
22:54 <brtknr> imdigitaljim: --bind-address=NODE_IP didn't work
22:54 <brtknr> i don't think --bind-address=0.0.0.0/0 is valid
