Wednesday, 2019-09-25

*** goldyfruit_ has joined #openstack-containers00:07
*** flwang1 has joined #openstack-containers00:14
*** sapd1_x has joined #openstack-containers00:25
*** goldyfruit_ has quit IRC00:33
*** goldyfruit has joined #openstack-containers00:35
*** goldyfruit has quit IRC01:00
*** sapd1_x has quit IRC01:09
*** sapd1_x has joined #openstack-containers01:25
*** goldyfruit has joined #openstack-containers01:26
*** sapd1_x has quit IRC01:37
*** hongbin has joined #openstack-containers01:49
openstackgerritMerged openstack/magnum-ui master: Update the constraints url  https://review.opendev.org/682856 02:22
openstackgerritMerged openstack/magnum-ui master: Generate PDF documentation  https://review.opendev.org/682897 02:22
*** hongbin has quit IRC02:36
*** ykarel has joined #openstack-containers02:39
*** ricolin has joined #openstack-containers02:47
*** ykarel_ has joined #openstack-containers02:48
*** ykarel has quit IRC02:51
*** dave-mccowan has quit IRC02:53
*** flwang1 has quit IRC03:28
*** ramishra has joined #openstack-containers03:44
*** udesale has joined #openstack-containers04:07
*** goldyfruit has quit IRC04:24
*** ykarel_ has quit IRC04:56
*** iokiwi has quit IRC05:09
*** iokiwi has joined #openstack-containers05:09
*** ykarel_ has joined #openstack-containers05:16
*** pcaruana has joined #openstack-containers06:42
*** ykarel_ is now known as ykarel07:26
*** dims has quit IRC07:46
*** ykarel is now known as ykarel|lunch07:56
*** ivve has joined #openstack-containers08:01
strigazibrtknr: meeting today? Feilong is not online.08:23
brtknryes, in 30 mins right?08:23
strigaziI don't know, without feilong it doesn't make much sense.08:28
strigazibrtknr: any issues with nodegroups? feilong said nothing works?08:42
brtknrstrigazi: i use a dedicated baremetal devstack for testing magnum changes and i didn't see any of those problems...08:45
brtknrstrigazi: I'd like to retest but could we rebase all the changes to the current master? there's quite a few merge conflicts08:46
brtknris ttsiouts around?08:48
strigazihe is08:50
strigazibut we can't rebase indefinitely, let's test the current state and then rebase?08:50
*** ykarel|lunch is now known as ykarel08:53
brtknrstrigazi: i was worried it was missing the calico changes but it's not08:55
brtknri'll verify again08:55
brtknrstrigazi: did flwang tell you he's not able to attend?09:04
brtknrlet's start the meeting, it's 9AM UTC09:06
strigazi#startmeeting containers09:08
openstackMeeting started Wed Sep 25 09:08:41 2019 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot. 09:08
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.09:08
*** openstack changes topic to " (Meeting topic: containers)"09:08
openstackThe meeting name has been set to 'containers'09:08
strigazi#topic Roll Call09:08
*** openstack changes topic to "Roll Call (Meeting topic: containers)"09:08
strigazio/09:08
jakeyipo/09:09
strigazibrtknr:09:10
brtknro/09:12
*** ttsiouts has joined #openstack-containers09:12
strigazi#topic Stories and Tasks09:13
*** openstack changes topic to "Stories and Tasks (Meeting topic: containers)"09:13
ttsioutso/09:13
strigazilet's discuss quickly fedora coreos status and reasoning09:13
strigazithen nodegroups09:13
strigazibrtknr: jakeyip anything else you want to discuss09:13
strigazi?09:13
brtknrstein backports09:14
jakeyipnothing from me09:14
strigaziok09:14
brtknralso when to cut the train release09:15
strigaziSo for CoreOS09:15
strigazi1. we need to change from Atomic, there is no discussion around it09:15
strigazi2. Fedora CoreOS is the "replacement" supported by the same team09:16
strigaziI say replacement because it is not a drop-in replacement09:16
strigaziI mean replacement in quotes09:16
strigazireasons to use it, at least from my POV09:17
strigaziwe have good communication with that community09:17
strigazithe goal is to run the stock OS and run everything in containers09:18
brtknralso they told me yesterday that they would like to support our use case of transitioning from `atomic install --system ...`09:18
strigazithe transition is probably podman run09:18
strigaziany counter argument?09:20
jakeyipsounds good09:20
brtknrat first, my worry was losing `atomic`, but i am reassured by the fact that the intended replacement is podman/docker09:20
strigazithe work required is around the heat agent and replacement for the atomic cli09:21
brtknrwe should be able to run privileged containers for the kube-* services right?09:21
strigaziatomic is just a python cli that writes a systemd unit which does "runc run"09:21
strigaziwe could09:22
brtknrand podman is containers running under systemd iiuc09:22
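For context: `atomic install --system` boils down to writing a systemd unit whose ExecStart invokes runc, so a podman-based transition mostly means swapping that ExecStart line. A rough Python sketch of the kind of unit the atomic cli generates; the unit template, paths and names below are illustrative, not atomic's exact output:

    import textwrap

    # Illustrative only: roughly what `atomic install --system <image>` automates --
    # render a systemd unit whose ExecStart starts the container via runc.
    UNIT_TEMPLATE = textwrap.dedent("""\
        [Unit]
        Description={name} system container

        [Service]
        ExecStart=/usr/bin/runc run {name}
        WorkingDirectory=/var/lib/containers/atomic/{name}
        Restart=always

        [Install]
        WantedBy=multi-user.target
        """)

    def write_system_container_unit(name: str) -> None:
        # A podman-based replacement would keep the unit shape and swap
        # ExecStart for something like `podman run --name {name} ...`.
        with open(f"/etc/systemd/system/{name}.service", "w") as unit:
            unit.write(UNIT_TEMPLATE.format(name=name))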
strigaziI hope at least, because k8s 1.16 is not playing nice in a container09:23
strigaziyes09:23
strigazilike my comment in https://review.opendev.org/#/c/678458/ 09:23
brtknr>I hope at least, because k8s 1.16 is not playing nice in a container09:23
brtknrin what way?09:23
strigazithe kubelet container is not propagating the mounts to the host09:24
strigazionly kubelet, the others are fine09:24
strigazilet's move to nodegroups? we won't solve this here09:25
strigaziI mean the 1.16 issue09:25
brtknrsounds like a problem with podman?09:25
strigazithat was with atomic, not podman09:26
strigazipodman, atomic, they all use runc09:26
brtknrokay.. anyway i think what we need is to convince ourselves fcos is the best alternative09:26
brtknrbefore we build more momentum09:27
strigaziI'm convinced; whenever I ask the ubuntu community for help, I find the door closed09:28
brtknrI suppose the community is an important aspect...09:28
strigazibrtknr: what are your concerns?09:29
strigazianyway, we will go to their meeting and we see.09:29
strigazi#topic Nodegroups09:30
brtknri am just concerned about the risks as it seems experimental09:30
*** openstack changes topic to "Nodegroups (Meeting topic: containers)"09:30
brtknras with all exciting things XD09:30
strigazicompared with centos, openstack and kubernetes are too experimental09:30
strigazior compared with debian, apache server, I can bring more :)09:31
brtknri think we need to find out from the fcos community which things are going to stay and which may be uprooted09:32
brtknrbut happy to move on to nodegroups09:32
strigaziwe are fixing an issue with labels, brtknr did you find anything?09:35
strigazidid you manage to add nodegroups?09:35
brtknrstrigazi: i just created a cluster but i was using kube_tag=v1.16.0 so it failed09:36
brtknrretrying now with v1.15.3 09:36
brtknrbut i have tested the full lifecycle in one of the earlier patchsets09:38
brtknrcreate update and delete, also scaling09:38
brtknrand everything seemed to work for me09:38
brtknralso nice work adding the tests ttsiouts09:39
brtknrit feels like a complete package now09:39
ttsioutsbrtknr: i'll push again today adapting your comments09:39
ttsioutsbrtknr: we also identified this issue with labels that strigazi mentioned09:40
ttsioutsbrtknr: thanks for testing!09:40
strigazibrtknr: perfect09:43
brtknrttsiouts: i will repost my output to ng-6 saying everything is working for me09:43
strigaziexcellent09:44
strigazioh, one more thing09:44
strigazifor nodegroups, we (CERN) need to spawn clusters across projects09:44
strigazieg ng1 in project p1 and ng2 in project p209:45
brtknrso one nodegroup in 1 project another in a different project?09:45
strigaziyes09:45
brtknrthat sounds messy...09:45
strigaziin the db of ngs, we have project_id already09:45
brtknrisnt there tenant isolation between networks?09:45
strigazinova is messy09:45
strigaziso the mess comes from there09:46
brtknror are you planning to use the external interface?09:46
strigazipublic interface09:46
brtknrhmm interesting09:46
strigaziyes, it depends on what you use the cluster for09:46
*** flwang1 has joined #openstack-containers09:46
flwang1sorry for being late09:47
brtknrhi flwang1 :)09:47
flwang1was taking care of sick kids09:47
flwang1brtknr: hello09:47
flwang1is strigazi around?09:47
strigazifor our usage it is not an issue, and it is opt-in anyway09:47
brtknryep09:47
strigazihi flwang109:47
flwang1strigazi: hello09:47
brtknrflwang1: hope the kids get better!09:47
flwang1brtknr: thanks09:48
flwang1my daughter has had a fever since yesterday09:48
brtknrstrigazi: is multi project supported in current ng implementation?09:48
flwang1anything i can help with or provide my opinion on?09:48
flwang1oh, you're discussing the ng stuff09:49
strigazibrtknr: no, but it is a small change09:49
brtknrflwang1: i cannot reproduce the issue you commented on in the ng-6 patch09:49
*** ArchiFleKs has quit IRC09:49
brtknrstrigazi: is this for admins only?09:49
strigazibrtknr: no09:49
strigazibrtknr: 100% for users09:50
flwang1brtknr: in my testing, after removing the 4 new fields from the nodegroup table, the cluster becomes stable09:51
strigazibrtknr: nova doesn't have accounting for gpus or FPGAs, and Ironic cpus are == to vcpus09:51
flwang1i haven't dug into the root cause09:51
strigaziflwang1: what fields?09:51
strigaziwhat are you talking about?09:51
flwang1stack_id, status, status_reason, version09:51
strigaziyou dropped things from the db?09:51
strigaziare all migrations done?09:51
brtknrflwang1: did you apply `magnum-db-manage upgrade` after checking out ng-9?09:52
flwang1i did09:52
flwang1for sure09:52
brtknri didn't need to delete anything09:52
flwang1i mean09:52
strigaziwhat is the error?09:52
strigazinot the VM restarts/rebuilds, that is irrelevant09:52
flwang1i have mentioned the error i saw in the ng6 patch09:52
brtknri also had to check out the change in python-magnumclient then `pip install -e .`09:53
flwang1the problem i got is the vm restart/rebuild09:53
strigazibad nova09:54
strigazino resources09:54
strigaziwhen heat sends a req to nova09:54
strigaziand nova fails09:54
strigaziheat retries09:54
strigazideletes the old vm and tries again09:54
strigazisame everything but different uuid09:54
flwang1strigazi: so you mean it's because my env (devstack) is lacking resources?09:55
strigazithis happens when you don't have resources09:55
strigaziyes09:55
strigazior it misbehaves in some other way09:55
flwang1strigazi: ok, i will test again tomorrow then09:55
strigazieg can't create ports09:55
strigazitry the minimum possible09:55
flwang1ok, i don't really worry about the ng work, overall looks good for me09:56
strigaziok, if I +2 and bharat verifies, are you ok?09:56
strigaziwe test at cern in three different dev envs plus bharat's tests09:57
flwang1strigazi: i'm ok with that09:58
strigaziflwang1: for train?09:58
brtknrI'm mostly happy to get things merged after rebasing and addressing all the minor comments, now that we also have solid unit tests... i am sure we will find minor issues with it later but it's been hanging around for too long :)09:59
flwang1strigazi: for train09:59
flwang1just one silly question09:59
flwang1what does the 'version' stand for in the ng table?09:59
flwang1i can't see a description for that09:59
strigaziplaceholder for upgrades with node replacement10:00
strigazinow it will work as it is implemented10:00
strigazior we can leverage it now10:00
flwang1so it's a version as kube_tag?10:01
strigazigive me 5', sorry10:01
brtknri have to leave in 30 minutes for our team standup10:02
flwang1brtknr: no problem10:03
flwang1i will be offline in 15 mins as well10:03
flwang1i'm addressing the comments from heat team for the ignition patch10:03
flwang1i'm very happy they're generally OK with that10:04
brtknrbtw can we start using etherpad for agenda like other teams, e.g. keystone: https://etherpad.openstack.org/p/keystone-weekly-meeting 10:04
brtknrand put a link to this in the channel's idle topic10:04
flwang1brtknr: we were using the wiki, but i'm ok with etherpad10:04
brtknror a link to wiki... i prefer the etherpad UI...10:05
brtknrhttps://etherpad.openstack.org/p/magnum-weekly-meeting 10:05
brtknrthere10:05
brtknr:)10:05
flwang1cool10:07
flwang1i just proposed a new patchset for the ignition patch10:08
brtknrflwang1: looks pretty solid10:14
flwang1after it's done, we still have quite a lot of work on the magnum side to get the fedora coreos driver ready10:16
strigaziwhere were we?10:16
flwang1strigazi: the fedora coreos driver10:16
strigaziflwang1: for ngs10:16
flwang1for ngs10:16
flwang1(22:01:04) flwang1: so it's a version as kube_tag?10:17
strigaziflwang1:  the ngs in different projects, is there an issue?10:17
strigaziflwang1: oh, this10:17
strigaziflwang1: we can use it now too10:17
flwang1so the version is the coe version of the current node group?10:17
flwang1the current name 'version' is quite confusing to me10:18
strigazithis is an incremental version for the ng10:19
strigaziso that we have some tracking10:19
strigaziwhen a user upgrades something10:19
strigazibut for now it is a placeholder, to implement it10:20
strigazimakes sense?10:20
strigazibrtknr: before you go, any reason to not have ngs in different projects as an opt-in option?10:21
strigazibrtknr: flwang1: still here?10:22
brtknrstrigazi: i don't have major objections to it but perhaps this can be added on later?10:22
flwang1i'm10:22
flwang1i'm thinking and checking the gke api10:22
brtknror is it required imminently10:22
flwang1https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters.nodePools#NodePool 10:22
strigazibrtknr: why? for us it is10:22
flwang1what do you mean by ngs in different projects?10:23
*** pcaruana has quit IRC10:23
*** rcernin has quit IRC10:23
strigaziflwang1: because nova doesn't have accounting for GPUs, FPGAs, and ironic cpus are accounted as vcpus10:23
brtknrstrigazi: i find it slightly unintuitive10:23
strigazibrtknr: we won't advertise this10:24
brtknri was under the impression that projects imply complete separation10:24
strigazithis doesn't say much ^^10:24
brtknri prefer supporting ng per region under the same project10:24
strigaziwe do multicloud applications10:24
flwang1strigazi:  can you explain 'ngs in different projects?10:24
brtknrflwang1: so ng1 lives in project A and ng2 lives in project B, both part of the same cluster10:25
flwang1does that mean cluster 1 in project A can have a NG which belongs to project B?10:25
strigaziagain, this is opt-in10:25
flwang1brtknr: that doesn't sound good to me10:25
brtknri was under the impression that a cluster belongs to a project10:26
strigaziif magnum doesn't have it, we will investigate something else10:26
flwang1if we want to have it, it needs to be disabled by default10:26
flwang1unless the cloud operators enable it10:26
brtknrit then seems like a jump in logic to have child nodegroups spanning different projects10:26
strigazithat makes 100% sense10:26
strigazi"if we want to have it, it needs to be disabled by default" - that10:27
strigazibrtknr: how do you do accounting for ironic nodes mixed with vms?10:27
strigazieverything starts from there10:28
strigaziand nova cells10:28
strigaziin the ideal openstack cloud, I understand, it does not make sense.10:28
brtknrstrigazi: okay i'm happy with disabled by default.10:29
strigaziflwang1: ?10:30
strigazipolicy or config option?10:30
flwang1i'm ok, if it's disabled by default10:31
flwang1config10:31
flwang1i just worry about the security hell10:31
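For context, "disabled by default" would be an ordinary oslo.config boolean that operators have to flip on deliberately. A minimal sketch; the option name and group are made up for illustration, not what was eventually merged:

    from oslo_config import cfg

    # Hypothetical option name and group -- the real ones would be decided in review.
    cross_project_opts = [
        cfg.BoolOpt("allow_cross_project_nodegroups",
                    default=False,  # opt-in: off unless the operator enables it
                    help="Allow nodegroups of a cluster to be created in "
                         "projects other than the cluster's own project."),
    ]

    CONF = cfg.CONF
    CONF.register_opts(cross_project_opts, group="cluster")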
strigaziflwang1: brtknr I'll send you a presentation why we do it10:31
flwang1strigazi: pls do10:31
strigaziflwang1: don't, because for a cloud with a proper network it won't work anyway10:32
strigaziwell, it can work10:32
strigazibut you need an extra router10:32
strigazivrouter10:32
flwang1from a public cloud pov, it doesn't make sense10:33
strigaziflwang1: this might also be useful for running the master nodes in the operator's tenant :)10:33
strigaziwell, see my comment above :)10:33
flwang1strigazi: i can see the extra benefit ;)10:33
flwang1i have to go, sorry10:33
strigazisorry, I completely forgot about that10:33
flwang1it's late here10:33
strigaziok10:33
strigazisee you10:33
strigazi#endmeeting10:34
*** openstack changes topic to "OpenStack Containers Team"10:34
openstackMeeting ended Wed Sep 25 10:34:02 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)10:34
openstackMinutes:        http://eavesdrop.openstack.org/meetings/containers/2019/containers.2019-09-25-09.08.html 10:34
flwang1strigazi: last question10:34
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/containers/2019/containers.2019-09-25-09.08.txt 10:34
openstackLog:            http://eavesdrop.openstack.org/meetings/containers/2019/containers.2019-09-25-09.08.log.html 10:34
strigazitell me10:34
flwang1as for the fedora coreos driver, are we going to use docker to install the k8s components?10:34
flwang1and keep everything else the same as the fedora atomic driver?10:34
strigaziflwang1: we will discuss it with the fedora coreos devs to see what is better10:34
flwang1ok10:35
strigaziwe will try to run whatever is possible in containers10:35
strigaziall the CNI and plugins we have are not affected by this10:35
strigazi90% of our work is reusable10:35
flwang1yep, only the k8s components so far10:35
flwang1pls keep me in the loop10:36
flwang1i want to start the work asap10:36
*** pcaruana has joined #openstack-containers10:36
strigazisure, the meeting has logs too, I'll send you the relevant links10:36
flwang1ok, have to go really10:36
strigazigood night10:36
flwang1have a good day10:36
strigazibrtknr: I'll be back later10:36
brtknrsee you :) i need to be in another meeting now10:37
flwang1strigazi: brtknr: pls help review https://review.opendev.org/675511 thank you very much10:43
*** ttsiouts has quit IRC11:01
*** udesale has quit IRC11:15
*** ttsiouts has joined #openstack-containers11:22
*** goldyfruit has joined #openstack-containers11:32
*** yoctozepto has quit IRC12:10
*** yoctozepto has joined #openstack-containers12:15
openstackgerritTheodoros Tsioutsias proposed openstack/magnum master: ng-6: Add new fields to nodegroup objects  https://review.opendev.org/667088 12:16
openstackgerritTheodoros Tsioutsias proposed openstack/magnum master: ng-7: Adapt parameter and output mappings  https://review.opendev.org/667089 12:16
openstackgerritTheodoros Tsioutsias proposed openstack/magnum master: ng-8: APIs for nodegroup CRUD operations  https://review.opendev.org/647792 12:16
openstackgerritTheodoros Tsioutsias proposed openstack/magnum master: ng-9: Driver for nodegroup operations  https://review.opendev.org/667090 12:16
*** goldyfruit has quit IRC12:16
*** dave-mccowan has joined #openstack-containers12:22
*** dave-mccowan has quit IRC12:26
*** yoctozepto has quit IRC12:26
*** yoctozepto has joined #openstack-containers12:26
openstackgerritTheodoros Tsioutsias proposed openstack/magnum master: ng-6: Add new fields to nodegroup objects  https://review.opendev.org/667088 12:28
openstackgerritTheodoros Tsioutsias proposed openstack/magnum master: ng-7: Adapt parameter and output mappings  https://review.opendev.org/667089 12:28
openstackgerritTheodoros Tsioutsias proposed openstack/magnum master: ng-8: APIs for nodegroup CRUD operations  https://review.opendev.org/647792 12:28
openstackgerritTheodoros Tsioutsias proposed openstack/magnum master: ng-9: Driver for nodegroup operations  https://review.opendev.org/667090 12:28
*** dave-mccowan has joined #openstack-containers12:28
*** dave-mccowan has quit IRC12:42
ttsioutsbrtknr: are you around?12:43
brtknrttsiouts: yep hi12:46
ttsioutsbrtknr: thanks again for reviewing12:46
ttsioutsbrtknr: I ended up refactoring some things in k8s template definitions12:47
ttsioutsbrtknr: the thing is that the output mappings are a bit more complex12:47
brtknrttsiouts: no worries! is it still working?12:47
ttsioutsbrtknr: looks like it's working fine12:48
ttsioutsbrtknr: whenever you have the time, check what I did and if you want something else I'm happy to address it12:48
openstackgerritTheodoros Tsioutsias proposed openstack/magnum master: ng-9: Driver for nodegroup operations  https://review.opendev.org/667090 12:50
ttsioutsbrtknr: ^^ addressed the comments on the last patch12:50
brtknrI will retest cluster creation, nodegroup creation/scaling/deletion and report back12:51
ttsioutsbrtknr: thanks again!12:51
brtknrttsiouts: did you respond to flwang1's comment about min_nodes/max_nodes not being labels?12:58
ttsioutsbrtknr: doing that now12:58
brtknrttsiouts: do you think it should be a label?13:00
strigazino, why labels?13:02
ttsioutsbrtknr: not really.13:03
ttsioutsbrtknr: it would be much easier to set/update them as NG attributes13:03
strigazithey will be consumed by the autoscaler13:03
brtknrso we are going to leave it as is? fine by me13:04
brtknrbtw is there scope for creating multiple nodegroups at cluster creation time as a continuation of this work?13:09
brtknrttsiouts: strigazi ^13:10
brtknri suppose openstack coe cluster create k8s; openstack coe nodegroup create k8s ng1; can both be fired off simultaneously?13:10
brtknrstrigazi: how do i simulate autoscaling?13:18
brtknri.e. generating fake workload13:18
brtknrttsiouts: ng-7  +343, -337... nice :)13:19
*** udesale has joined #openstack-containers13:22
brtknrubuntu@devstack-master:/opt/stack/magnum$ openstack coe cluster resize k8s-flannel --nodegroup bharat 113:33
brtknrResizing %(nodegroup)s outside the allowed range: min_node_count = %(min_node_count)s, max_node_count = %(max_node_count)s (HTTP 400) (Request-ID: req-b7b798c8-d9ac-43c7-bbbd-d1d59f3efce5)13:33
brtknrttsiouts: ^13:33
brtknrdoesn't look right13:34
brtknr       if nodegroup.min_node_count > cluster_resize_req.node_count:13:36
brtknr           raise exception.NGResizeOutBounds(13:36
brtknr               nodegroup=nodegroup.name, min_nc=nodegroup.min_node_count,13:36
brtknr               max_nc=nodegroup.max_node_count)13:36
brtknr       if (nodegroup.max_node_count and13:36
brtknr               nodegroup.max_node_count < cluster_resize_req.node_count):13:36
brtknr           raise exception.NGResizeOutBounds(13:36
brtknr               nodegroup=nodegroup.name, min_nc=nodegroup.min_node_count,13:36
brtknr               max_nc=nodegroup.max_node_count)13:36
ttsioutsbrtknr: checking13:37
brtknrlooks like the change was made in ng-2 and went unnoticed13:37
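The raw `%(nodegroup)s` placeholders in the output above are the tell: OpenStack-style exceptions interpolate a message template with the kwargs passed at raise time, and the call site pasted above passes min_nc/max_nc while the template expects min_node_count/max_node_count, so interpolation fails and the unformatted template leaks through to the user. A simplified reproduction of the pattern, assuming Magnum's exception base class swallows the formatting error in the usual oslo style:

    # Simplified model of an OpenStack-style exception; the fallback-on-failure
    # behaviour is assumed to match Magnum's real base class.
    class NGResizeOutBounds(Exception):
        message = ("Resizing %(nodegroup)s outside the allowed range: "
                   "min_node_count = %(min_node_count)s, "
                   "max_node_count = %(max_node_count)s")

        def __init__(self, **kwargs):
            try:
                formatted = self.message % kwargs
            except KeyError:
                # Fall back to the raw template instead of crashing -- which is
                # why the user sees literal %(...)s placeholders.
                formatted = self.message
            super().__init__(formatted)

    # The call site passes min_nc/max_nc, but the template wants
    # min_node_count/max_node_count, so the lookup fails:
    print(NGResizeOutBounds(nodegroup="bharat", min_nc=1, max_nc=None))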
*** ykarel is now known as ykarel|afk13:51
ttsioutsbrtknr: yeap you are right..13:53
brtknrttsiouts: 1 more thing, if i do `openstack coe cluster resize k8s-flannel --nodegroup default-worker 2`, both default-master and default-worker enter UPDATE_IN_PROGRESS state13:53
ttsioutsbrtknr: yes. default NGs are in the same stack13:54
brtknrbut if i do this to individual nodegroups, only the nodegroup is affected13:54
brtknrah i understand now :)13:54
brtknrsorry my bad13:54
ttsioutsbrtknr: no worries13:54
brtknrttsiouts: i can't find any more faults with it tbh13:55
brtknrthe only thing i'd like to test is autoscaling13:55
brtknrbut i am not totally sure how13:55
brtknrdo you have any pointers?13:55
ttsioutsbrtknr: I could add the fix for the exception in ng-813:55
ttsioutsbrtknr: it's a one liner fix13:56
brtknrttsiouts: sounds good to me13:56
openstackgerritTheodoros Tsioutsias proposed openstack/magnum master: ng-8: APIs for nodegroup CRUD operations  https://review.opendev.org/647792 13:56
openstackgerritTheodoros Tsioutsias proposed openstack/magnum master: ng-9: Driver for nodegroup operations  https://review.opendev.org/667090 13:56
ttsioutsbrtknr: ^^13:57
brtknrthanks13:57
ttsioutsbrtknr: the autoscaler is missing functionality at the moment13:57
*** spiette has quit IRC13:58
*** ykarel|afk has quit IRC13:58
dioguerrabrtknr: a long time ago, when i tested, autoscaling only worked for the default NG13:58
ttsioutsdioguerra: brtknr: yeah this is what I think too13:59
*** spiette has joined #openstack-containers14:00
brtknrokay we can leave that for NG v214:00
brtknrdioguerra: can you tell me how to simulate workload to activate autoscaling?14:01
brtknrttsiouts: any thoughts on this: https://review.opendev.org/#/c/667089/8/magnum/drivers/heat/swarm_fedora_template_def.py@8214:09
brtknrmore line 81 than 82 actually14:10
*** goldyfruit has joined #openstack-containers14:16
*** ykarel has joined #openstack-containers14:32
*** goldyfruit has quit IRC14:46
openstackgerritTheodoros Tsioutsias proposed openstack/magnum master: ng-8: APIs for nodegroup CRUD operations  https://review.opendev.org/647792 14:58
openstackgerritTheodoros Tsioutsias proposed openstack/magnum master: ng-9: Driver for nodegroup operations  https://review.opendev.org/667090 14:58
openstackgerritTheodoros Tsioutsias proposed openstack/python-magnumclient master: Add nodegroup CRUD commands  https://review.opendev.org/647793 14:58
brtknrttsiouts: why did you remove image?15:01
*** ttsiouts has quit IRC15:13
*** ttsiouts has joined #openstack-containers15:15
*** ttsiouts has quit IRC15:15
strigaziwe discussed it and the image should be taken from the cluster-template15:17
strigazibrtknr: we will try to improve it further for upgrades15:17
strigazibrtknr: thoughts? doesn't it belong better in the CT?15:18
brtknrstrigazi: i was imagining NGs would be the perfect way to deploy custom images per NG15:18
*** goldyfruit has joined #openstack-containers15:18
brtknre.g. a GPU NG would have an image with the drivers baked in15:19
*** ttsiouts has joined #openstack-containers15:20
brtknrthis seemingly takes away that advantage15:20
strigazibrtknr: i'll be back online in a bit15:20
brtknrunless there is another way to install gpu drivers at run time15:20
strigazibrtknr: for the meeting15:21
strigaziwill you be here?15:21
brtknrthe fcos one?15:21
*** openstackgerrit has quit IRC15:21
strigaziyes15:21
brtknryeah15:21
brtknrfor 30 mins15:21
strigaziok, see you later then15:21
brtknrokay15:21
*** ttsiouts has quit IRC15:24
dioguerrabrtknr: just schedule pods so that the scheduler gets stuck on pending15:28
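In other words, create pods whose resource requests can never be satisfied, so they sit in Pending and the autoscaler sees unschedulable work. A minimal sketch with the kubernetes Python client; the image, namespace, pod count and the 64-core request are arbitrary choices:

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Request far more CPU than any node offers, so the scheduler leaves the
    # pods Pending -- the condition that triggers a scale-up.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(generate_name="autoscale-test-"),
        spec=client.V1PodSpec(containers=[client.V1Container(
            name="pause",
            image="k8s.gcr.io/pause:3.1",
            resources=client.V1ResourceRequirements(requests={"cpu": "64"}),
        )]),
    )
    for _ in range(3):
        v1.create_namespaced_pod(namespace="default", body=pod)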
*** pcaruana has quit IRC15:30
*** ivve has quit IRC15:45
brtknrdioguerra: thanks15:55
strigazibrtknr: installing GPU drivers is possible from a container15:56
strigazibrtknr: for the image, what do you think?15:57
*** itlinux has joined #openstack-containers15:57
strigazibrtknr: we can leave it and pass it on cluster upgrade too?15:57
strigazibrtknr: we can improve that after15:57
brtknrstrigazi: hmm okay if that's the case then i'm happy with it... ideally we don't want to build custom OS images either15:58
strigaziwe don't want to build qcows15:58
strigazibut a need might exist15:59
strigaziI don't know15:59
brtknrthe only problem i foresee is if we are using a non fedora atomic image15:59
brtknrwhich doesn't do everything in containers15:59
strigazimaybe for them it is useful16:00
strigazibrtknr: let's keep it?16:01
brtknrkeep --image?16:01
strigaziwe can have mixed clusters like this, eg with atomic and coreos when we upgrade16:02
strigaziyes16:02
brtknrsounds good16:02
strigaziand then gradually drop the atomic nodes16:02
strigaziusers will love this IMO16:02
strigaziwe tried this before:16:02
brtknris it possible to replace image in default-master and worker?16:02
strigazicreate a cluster in stein, upgrade magnum to train, add a nodegroup (make sure you don't mix very old k8s versions though)16:03
strigazibrtknr: replace no, but we can add more nodegroups and drop old ones16:04
strigazibrtknr: this won't work for master nodes now, but we can do it16:04
strigazibrtknr: for workers it works with the current patches16:04
*** ykarel is now known as ykarel\afk16:04
brtknri thought image was immutable for default-worker16:05
strigazithe image is immutable for all nodes at the moment anyway, no?16:06
brtknrare you saying that even the default-worker instances could be rebuilt?16:06
strigazior replaced with another NG16:06
strigaziwhich would be better16:06
strigaziso, for the --image +2 ?16:07
brtknrwe can't currently delete the default-worker NG, i guess that is just a technicality16:07
strigaziyeap ^^16:07
brtknrstrigazi: yep i already did16:08
strigazihe can rebase and your +2 will automatically come back16:08
brtknrcool16:08
strigazibrtknr: break and we meet again in fcos?16:08
brtknrsure, my head hurts from all the meetings today16:09
strigazimine too, when we had the IRC meeting three people were in my office discussing ingresses16:10
*** goldyfruit_ has joined #openstack-containers16:12
*** goldyfruit has quit IRC16:15
*** ykarel\afk is now known as ykarel16:23
*** jmlowe has quit IRC16:24
*** ykarel is now known as ykarel|away16:37
brtknrstrigazi: are you coming to #fedora-meeting-1? 16:37
*** ramishra has quit IRC16:43
*** jmlowe has joined #openstack-containers16:50
*** ivve has joined #openstack-containers16:53
*** henriqueof has joined #openstack-containers16:54
*** henriqueof1 has quit IRC16:55
goldyfruit_I'm facing an issue related to Neutron/Magnum17:04
goldyfruit_When Neutron attaches a floating IP to the port, it goes to the SNAT and then it goes to the qrouter where the instance is running17:05
goldyfruit_(we are using DVR)17:05
goldyfruit_While the FIP is bound in the SNAT, it sends a request which makes the MAC appear on the switch, and then when the FIP is moved from the SNAT to the qrouter another request is sent to the router with a different MAC17:06
goldyfruit_Which means the router has 2 MACs for the same FIP, which prevents the instance from getting Internet access and downloading the images, etc...17:06
goldyfruit_There is an issue in neutron for sure about not cleaning up its state in time, but I guess we could prevent this by doing the right orchestration in Heat for the Master/Nodes17:08
goldyfruit_For example, we could first create the Neutron port, then create the instance and then attach the FIP to the instance/port17:08
goldyfruit_From here https://github.com/openstack/magnum/blob/master/magnum/drivers/k8s_fedora_atomic_v1/templates/kubeminion.yaml#L440-L491 it seems we are creating the instance, then the port, and then attaching the FIP to the port17:09
goldyfruit_If a port is created without being attached and a FIP is assigned to it, then the FIP goes to the SNAT17:10
goldyfruit_If a port is created and attached to an instance, and then a FIP is assigned to the port, the FIP goes directly to the qrouter17:11
goldyfruit_Which avoids the duplicate MAC address, because the FIP has been plumbed only once17:11
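The ordering goldyfruit_ is proposing, sketched with openstacksdk: port first, then the instance on that port, and only then the FIP, so the FIP is plumbed straight onto the qrouter. All ids and names are placeholders, and this illustrates the ordering only, not Magnum's actual Heat template:

    import openstack

    conn = openstack.connect(cloud="mycloud")  # placeholder cloud name

    # 1. Create the port first...
    port = conn.network.create_port(network_id="PRIVATE_NET_ID")

    # 2. ...boot the instance directly on that port...
    server = conn.compute.create_server(
        name="k8s-node", image_id="IMAGE_ID", flavor_id="FLAVOR_ID",
        networks=[{"port": port.id}])
    conn.compute.wait_for_server(server)

    # 3. ...and only then bind the floating IP, so it lands on the qrouter
    #    in one step instead of first transiting the SNAT namespace.
    conn.network.create_ip(
        floating_network_id="EXTERNAL_NET_ID", port_id=port.id)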
*** mrodriguez has joined #openstack-containers17:15
goldyfruit_mrodriguez, o/17:16
mrodriguezgoldyfruit_: o/17:17
*** ykarel|away has quit IRC17:19
*** udesale has quit IRC17:19
*** jmlowe has quit IRC17:35
*** jmlowe has joined #openstack-containers17:48
*** henriqueof1 has joined #openstack-containers17:53
*** henriqueof has quit IRC17:53
goldyfruit_We opened a bug related to Neutron: https://bugs.launchpad.net/neutron/+bug/1845360 17:58
openstackLaunchpad bug 1845360 in neutron "ARP advertisement issue with DVR" [Undecided,New]17:58
*** ricolin has quit IRC18:55
*** openstackgerrit has joined #openstack-containers19:13
openstackgerritMatthew Fuller proposed openstack/magnum master: PDF documentation build  https://review.opendev.org/684436 19:13
goldyfruit_So basically the Neutron guys said: "Can the workflow be changed to first attach the port then assign the floating IP?"19:33
*** flwang1 has quit IRC20:53
openstackgerritMatthew Fuller proposed openstack/magnum master: PDF documentation build  https://review.opendev.org/684436 21:07
openstackgerritMatthew Fuller proposed openstack/magnum master: PDF documentation build  https://review.opendev.org/684436 21:09
*** flwang has joined #openstack-containers21:38
flwangbrtknr: around?21:38
*** mrodriguez has quit IRC22:06
flwangbrtknr: as for https://review.opendev.org/678458 you need to check out the branch and install it to 'register' the fedora coreos driver22:15
*** rcernin has joined #openstack-containers22:15
brtknrflwang: hi again, almost bedtime here22:34
brtknrSo it’s not enough to restart magnum services?22:34
brtknrWhat is the command for installation? Is pip install -e . enough?22:35
flwangbrtknr: no, it's not enough22:35
flwangjust pip install -e .22:35
flwangyou're good to go22:35
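The reason: magnum discovers cluster drivers through setuptools entry points loaded with stevedore, so a driver that only exists as checked-out files is invisible until `pip install -e .` regenerates the package metadata. A rough illustration, assuming the entry-point namespace is 'magnum.drivers' as declared in magnum's setup.cfg:

    from stevedore import driver

    # Drivers are declared under an entry-point namespace in setup.cfg;
    # `pip install -e .` rewrites the egg-info that makes a newly added
    # driver (e.g. the fedora coreos one) discoverable here.
    mgr = driver.DriverManager(
        namespace="magnum.drivers",
        name="k8s_fedora_atomic_v1",  # an existing driver, for illustration
        invoke_on_load=False,
    )
    print(mgr.driver)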
brtknrCool okay I’ll try that tomorrow :) I tested your status patch rebased on top of the latest master but it seemed to return None as status rather than unknown22:36
brtknrCould you verify it works for you too?22:37
brtknrAlso got the same outcome without rebase22:38
brtknrflwang^22:39
flwangbrtknr: sure, will do, thanks for the review23:09
*** goldyfruit_ has quit IRC23:12
openstackgerritFeilong Wang proposed openstack/magnum master: Improve log of k8s health status check  https://review.opendev.org/675511 23:20
*** dave-mccowan has joined #openstack-containers23:28
*** ivve has quit IRC23:28
*** dave-mccowan has quit IRC23:33
brtknrflwang: are you happy to +2 these backports https://review.opendev.org/#/q/status:open+project:openstack/magnum+branch:stable/stein+topic:stein-8.1.0 23:33
brtknrI’d like to push for cutting a new stein release tomorrow23:33
brtknrWould help to make stein more stable with fa29 23:34
flwanghttps://review.opendev.org/#/c/648935/ doesn't look good for backport, why do you need it?23:35
flwangbrtknr: ^23:36
brtknrWithout that patch, Traefik is broken because it automatically downloads version 2.0.0+ containers since we didn't pin the version23:36
brtknrMake sense?23:37
brtknrThe api has changed in the new version23:37
brtknrflwang^23:37
brtknrV2 was released 9 days ago23:40
flwangok, fair enough23:40
brtknrflwang: thank you, going to sleep now! Have a good day:)23:43
flwangbrtknr: thank you, have a good night23:46
