Friday, 2018-09-14

*** maysams has joined #openstack-kuryr00:37
*** spotz has quit IRC01:00
*** spotz has joined #openstack-kuryr01:00
*** maysams has quit IRC02:43
*** pmannidi has quit IRC03:56
*** pmannidi has joined #openstack-kuryr04:12
*** pmannidi has quit IRC04:58
*** shachar has joined #openstack-kuryr05:10
*** pmannidi has joined #openstack-kuryr05:10
*** snapiri has quit IRC05:12
openstackgerritDanil Golov proposed openstack/kuryr-kubernetes master: Add SR-IOV pod vif driver  https://review.openstack.org/51228005:29
openstackgerritDanil Golov proposed openstack/kuryr-kubernetes master: Add SR-IOV binding driver to CNI  https://review.openstack.org/51228105:29
openstackgerritDanil Golov proposed openstack/kuryr-kubernetes master: Add HOWTO for SRIOV use case  https://review.openstack.org/59412505:29
openstackgerritDanil Golov proposed openstack/kuryr-kubernetes master: Introduce test case document for SRIOV functionality  https://review.openstack.org/60002205:29
*** maysams has joined #openstack-kuryr05:43
*** maysams has quit IRC05:44
*** s1061123_ has joined #openstack-kuryr05:48
*** s1061123 has quit IRC05:48
*** celebdor has joined #openstack-kuryr07:08
*** ccamposr has joined #openstack-kuryr07:26
celebdorltomasbo: did you see the proposal about domains?07:39
ltomasbocelebdor, yes, I saw it07:40
ltomasbocelebdor, I still think it is overkill for such a simple modification07:40
ltomasbocelebdor, I'm going to submit the patch to octavia to at least raise the discussion07:41
ltomasbothat SG is only used for accessing the amphora07:41
celebdorltomasbo: did you post the patch to gerrit?07:41
ltomasbocelebdor, I still think making that security group belong to the users does not have any bad implication07:41
ltomasboI was about to do it07:41
dmelladoltomasbo: celebdor07:41
dmelladolink?07:41
celebdorltomasbo: come on... It's against their religion07:41
ltomasbowhy? then they need no religion...07:42
celebdorhow can you say that it doesn't have bad implications07:42
ltomasboI understand you want to keep the amphora within octavia07:42
ltomasbobut the security group? which the user can modify to allow traffic from everywhere through listeners...07:42
ltomasboI don't understand why you can't just give the SG to the users to even provide more fine granularity07:43
dmelladohttps://cdn.wegow.com/media/artist-media/bad-religion/bad-religion-1514392981.-1x2560.jpg07:43
celebdorthe core tenet of the Octavia religion is "every non octavia resource except the VIP shall not belong to the tenant creating the LB"07:43
ltomasbowhich will only limit the access to the amphora, rather than increase it07:43
ltomasbocelebdor, religion needs to be updated over time XD07:43
celebdordmellado: never heard any song of that group07:43
celebdorltomasbo: you protestant07:43
ltomasbolol07:43
dmelladocelebdor: you just listen to bertin osborne07:44
dmelladoxD07:44
celebdordmellado: he sings?07:44
celebdorI thought he only cooks07:44
dmelladohe used to, terribly07:44
celebdorand raises his arm07:44
ltomasbocelebdor, maybe I will end up on the fire... xD07:44
celebdorltomasbo: If I were you I'd be careful around Octavia07:44
celebdornobody expects the Octavia inquisition07:45
dmelladokinda david hasselhoff07:45
dmelladocelebdor: now seriously07:45
dmelladowhat's that proposal?07:45
celebdordmellado: the proposal Luis makes?07:45
celebdorOr the one I sent by email last night?07:46
dmelladooh, so you sent an email07:47
dmelladoopenstack-dev?07:47
celebdordmellado: no07:48
celebdorinternal07:48
celebdordmellado: I just forwarded it to you07:48
*** aperevalov has quit IRC07:49
dmelladocelebdor: I see, you left me out07:50
dmelladoyou bad friend07:50
dmelladoxD07:50
dmelladoin any case I even appreciate it as I've too many open fronts07:50
dmelladoxD07:50
dmelladothanks celebdor07:51
celebdordmellado: exactly07:51
celebdoryou NetworkPolicy07:51
*** aperevalov has joined #openstack-kuryr07:51
dmellado+ retiring fuxi, hopefully today07:51
celebdorthere's enough people distracted by the darned Octavia namespace isolation woes07:51
dmelladopoor fuxi07:51
dmelladoxD07:51
ltomasboxD07:52
celebdorltomasbo: and by the way, I shall not install ps07:52
celebdoron the containers07:52
ltomasbocelebdor, btw, do you know of any issues watching CRDs?07:53
celebdorltomasbo: like?07:53
celebdorltomasbo: hey, do you have some environment I can connect to?07:53
celebdorI want to check something07:53
ltomasbocelebdor, openshift-ansible?07:54
ltomasboyep, I have one07:54
celebdoror devstack07:54
celebdorI don't really care07:54
celebdoras long as cni is daemonized07:54
celebdorand containerized07:54
ltomasboI don't have devstack at the moment07:54
ltomasboI always do daemonized+containerized07:54
dmelladocelebdor: btw, you can reclaim back your bm, I'm fine now with one07:55
ltomasbocelebdor, perhaps even your key is already on the server07:55
dmelladono longer need the two BM07:55
ltomasbocelebdor, stack@38.145.32.6407:55
ltomasboI have a tmux there, with a prompt on the master node07:55
celebdordmellado: ok07:55
ltomasbocelebdor, ^^07:55
dmelladocelebdor: 'just don't mistake the machine'07:56
celebdorltomasbo: permission denied07:56
dmelladoor I swear I'd go and hit you with hard fuets07:56
dmelladoxD07:56
ltomasbook, key?07:56
dmelladoltomasbo: not okey07:56
dmelladoxD07:56
ltomasboman, I don't need to check the calendar to know it is Friday...07:56
ltomasbodmellado, ^^07:56
dmelladoltomasbo: that's the idea, saves you time07:57
ltomasbocelebdor, your keys are https://github.com/celebdor.keys?07:57
celebdorltomasbo: https://gist.github.com/celebdor/efb2e66729bd04e6328f374c915dfcd107:57
ltomasbook07:58
ltomasbocelebdor, try now07:58
ltomasbodmellado, so nice octavia pep8: Your code has been rated at 10.00/1007:59
ltomasbodmellado, I guess octavia folks are not going to think the same... xD07:59
dmelladoLOL07:59
celebdorltomasbo: I can guarantee that07:59
dmelladomaybe we can implement something like that on kuryr07:59
dmelladoYour code has been rated at b*****t07:59
celebdordmellado: for what?07:59
dmelladoxD07:59
ltomasbodmellado, that is good to cheer up people... xD08:00
dmelladoxD08:00
ltomasbodmellado, celebdor: https://review.openstack.org/60256408:01
* celebdor grabs popcorn08:02
ltomasboxD08:02
ltomasbomichael has been nice so far (until now...)08:03
ltomasboprobably I will end up on the black list now...08:03
dmelladoheh08:03
dmelladoltomasbo: I bet you'll have some time until they fly back from PTG08:03
dmelladoso let's wait for weekend/mon08:03
dmelladoxD08:03
ltomasboso, how many -2 should I expect?08:04
dmelladoltomasbo: how well do you get along with carlos ?08:04
ltomasbocelebdor, man, you added yourself to the review to not miss the fun... right?08:04
ltomasbodmellado, I've never met him in person...08:05
ltomasboprobably not going to happen anytime soon... XD08:05
dmelladoltomasbo: I added myself and a few folks to the party08:05
celebdorltomasbo: that was me grabbing popcorn08:06
ltomasboxD08:06
dmelladocelebdor: get a few for me08:06
dmelladoI'm looking forward to seeing this08:06
dmelladoxD08:06
celebdorinsta -208:06
ltomasbocelebdor, does the commit message look convincing?08:07
dmelladothat kinda reminds me of the terrible overnight self-approval that once happened to me08:07
celebdorltomasbo: they may even make a zuul job that fails any patch coming from you08:07
celebdorxD08:07
dmelladoand it was insta-revert08:07
dmelladowhen I woke up08:07
dmelladodo you recall that celebdor?08:07
dmelladoxD08:07
dmelladoI mean ltomasbo08:07
celebdordmellado: I don't08:07
ltomasbocelebdor, it's such a simple patch, with so few implications... they should accept it, with +2s directly...08:07
celebdorltomasbo: I don't know, when I read that commit message08:08
dmelladoltomasbo does, IIRC he was with me and was a witness to my rage08:08
dmelladoxD08:08
celebdorall I see is08:08
celebdorI disapprove of your core beliefs08:08
celebdorxD08:08
ltomasbodmellado, I remember that...08:08
celebdordmellado: I don't even know what you are talking about08:08
ltomasbocelebdor, lol08:08
ltomasbocelebdor, they already have VIP on the tenant, so, VIP+SG sounds even better...08:09
celebdorltomasbo: from our perspective it is well explained08:09
ltomasbocelebdor, should I open the backports to rocky and queens already? xD08:09
dmelladoltomasbo: lol08:12
dmelladowait until the first wave of -208:12
dmelladoxD08:12
*** garyloug has joined #openstack-kuryr08:13
celebdorltomasbo: oh, that would be fun08:13
celebdorltomasbo: dmellado our cni_ds_init is wrong08:14
celebdorsorry I didn't notice until now08:14
dmelladocelebdor: what's going on over there?08:14
ltomasbowhat?08:15
ltomasbocelebdor, why it is wrong?08:15
celebdorltomasbo: in the end it shouldn't execute08:16
celebdorsorry, spawn the daemon or sleep08:16
celebdorit should exec them08:16
ltomasbosorry08:16
dmelladohttps://github.com/openstack/kuryr-kubernetes/blob/master/cni_ds_init#L46-L5008:16
ltomasboall yours08:16
dmelladoyou mean this?08:16
*** pmannidi has quit IRC08:17
celebdordmellado: yes, that's exactly it08:17
ltomasbocelebdor, it is being executed, right?08:17
ltomasboI see that it is executed, and in turn it creates the watcher, server and health workers08:17
ltomasboisn't it?08:18
celebdorno08:18
celebdorit is not executing it08:18
celebdorit is creating a new process that then executes them08:18
celebdorline 47 and 4908:18
celebdorof the link dmellado sent08:18
celebdorit should read08:19
ltomasboahh08:19
ltomasbook08:19
ltomasboit should be docker exec ...08:19
celebdorexec kuryr-daemon --config-file /etc/kuryr/kuryr.conf08:19
celebdorno, no. without docker08:19
celebdorjust exec08:19
ltomasboyep, sorry08:19
celebdorso when bash reaches the end, proc 1 will be kuryr-daemon08:19
celebdornot a bash process that has nothing else to do08:19
ltomasbocelebdor, what implications does that have?08:20
ltomasbobesides the 2 lines of kuryr-daemon on ps -ef08:20
celebdorltomasbo: container/pod death is monitored on the process that is launched08:21
celebdorby kubelet08:21
celebdornot the others08:21
celebdorthat's one08:21
ltomasboohh, ok08:22
ltomasbocelebdor, how did you find that out? really tricky to find...08:22
celebdorltomasbo: I was really surprised to see cni_ds_init when I checked what was the main process of the kuryr-cni container of the kuryr-cni pod08:23
ltomasbook08:23
ltomasbocelebdor, seems easy to fix and backport though...08:24
ltomasbogreat you found it!08:25
dmelladoyeah, seems like a quick fix08:26
celebdorI mean, it works without08:26
dmelladobut it'd be better to have it that way08:26
celebdorit's just not as correct to keep a shell script as the main process08:26
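A minimal sketch of the fix under discussion, reconstructed from the lines above (the exec form is the exact command celebdor quotes at 08:19; the surrounding cni_ds_init content is assumed):

    # before: the shell stays alive as the container's main process,
    # so kubelet monitors an idle bash instead of the daemon
    kuryr-daemon --config-file /etc/kuryr/kuryr.conf

    # after: exec replaces the shell with the daemon, so kuryr-daemon
    # becomes the process kubelet watches and signals reach it directly
    exec kuryr-daemon --config-file /etc/kuryr/kuryr.conf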
*** gkadam has joined #openstack-kuryr08:29
*** pcaruana has joined #openstack-kuryr08:33
celebdorbtw ltomasbo08:36
celebdormaybe I misinterpreted that08:36
celebdorbut I think that when you delete an openstack domain08:36
celebdorall its resources are removed08:37
celebdorjust like kubernetes namespaces08:37
openstackgerritAlexey Perevalov proposed openstack/kuryr-kubernetes master: Spec for vhost-user port type  https://review.openstack.org/57704908:40
ltomasbocelebdor, really?08:46
ltomasboI've never used openstack domains...08:46
openstackgerritAntoni Segura Puimedon proposed openstack/kuryr-kubernetes master: cni_ds_init: exec into the main process  https://review.openstack.org/60257308:46
celebdorltomasbo: I saw it in the documentation08:46
celebdorI find it hard to believe though08:47
celebdorxD08:47
ltomasbocelebdor, yep, if so we should create one in openshift-ansible, and then instead of removing the stack, just remove the domain! xD08:48
celebdorexactly08:48
celebdorltomasbo: mirantis claimed that on 2016 https://www.mirantis.com/blog/mirantis-training-blog-openstack-keystone-domains/08:49
celebdorjust before the summary08:49
ltomasboI was on the same page...08:49
celebdor:P08:50
ltomasbothough it seems just for projects, tenants, roles, ...08:50
ltomasboso, I assume what it deletes is that, not the servers, volumes, ...08:51
celebdorltomasbo: in that environment you shared with me08:51
celebdoryou use the upstream cni container, right?08:51
ltomasboyep08:51
celebdorltomasbo: wanna try kuryr/cni:py3-exec08:51
ltomasbosure08:51
ltomasboshould I just change it?08:51
celebdoryeah08:52
celebdorit is crazy how much bigger the py3 image is than the normal cni one08:52
celebdorkuryr/cni            latest              12c0c005a61d        9 days ago          688 MB08:52
celebdorkuryr/cni            py3-exec            3576bbc8f415        42 seconds ago      1.02 GB08:52
ltomasboufff08:53
ltomasbocelebdor, btw, is there a related kuryr/controller:py3 that I should use?08:53
celebdorltomasbo: you don't need to :-)08:53
celebdorI can build one if you want, though08:53
ltomasbowell, if one is with old notation, and the other one with new...08:54
celebdorok08:54
celebdorbuilding08:54
ltomasboI already changed cni...08:54
celebdorso many python deps08:55
celebdorit's terrible08:55
celebdorltomasbo: I just pushed kuryr/controller:py308:57
ltomasbook08:59
celebdornow that's correct09:00
celebdorI'm not putting ps09:00
ltomasbohah, no ps09:00
ltomasboxD09:00
celebdorxD09:00
celebdorthat's unnecessary09:00
ltomasbostill 1.02 GB09:00
ltomasbolet me change kuryr-controller09:00
dmelladofolks https://review.openstack.org/#/c/60257409:01
dmelladoexpect09:01
dmelladoseveral more patches on this09:01
dmelladoas I'll have to backport this09:01
ltomasbowhy backport?09:02
ltomasbodmellado, ^^09:02
dmelladobecause fuxi was also on stable branches09:03
dmelladoso it's not enough to remove it on master09:03
ltomasboahh, ok09:03
ltomasboI thought it was enough to remove it from now on...09:04
dmelladoseems like it isn't09:04
ltomasbook! I had no idea about that...09:05
ltomasbocelebdor, umm, I wonder where that comes from...09:08
ltomasbocelebdor, seems it times out due to the error on extracting the pod annotation09:11
danildmellado, irenab, dulek, ltomasbo: hello, as usually I ask you to review sriov patches please : https://review.openstack.org/#/c/512280/4109:20
celebdorltomasbo: what times out?09:20
dmelladodanil: heh, sure!09:20
dmelladounder our radar, thanks for the reminder09:21
dmellado;)09:21
danilsure, thanks09:21
ltomasbocelebdor, there is an error when extracting the pod annotation on the cni09:21
ltomasbocelebdor, so the watcher doesn't write port (active) information into the registry09:21
ltomasbocelebdor, and the cni-server times out waiting for the port to be active09:22
celebdordanil: I'm not sure I got the _reduce_pod_sriov_amount09:22
celebdorltomasbo: interesting09:22
ltomasbocelebdor, though I think it is not related to py3...09:23
celebdorltomasbo: check the annotation09:23
celebdordoes it look OVOed?09:23
ltomasbosome timing issue. This is related to the o.vo...09:23
ltomasbocelebdor, let me see...09:23
celebdorit does09:23
ltomasboyep, it looks good, and it has the versioned_object.namespace09:24
celebdoryup09:25
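For readers wondering what "looks OVOed" means: kuryr serializes VIFs with oslo.versionedobjects, whose obj_to_primitive() emits a dict keyed by versioned_object.* entries — including the versioned_object.namespace key being checked here and named in the bug filed below. A minimal standalone sketch (DemoVIF is illustrative, not kuryr's actual object):

    from oslo_versionedobjects import base, fields

    @base.VersionedObjectRegistry.register
    class DemoVIF(base.VersionedObject):
        # illustrative stand-in for kuryr's VIF objects
        VERSION = '1.0'
        fields = {'id': fields.StringField()}

    primitive = DemoVIF(id='some-port-id').obj_to_primitive()
    # primitive is roughly:
    # {'versioned_object.name': 'DemoVIF',
    #  'versioned_object.namespace': 'versionedobjects',
    #  'versioned_object.version': '1.0',
    #  'versioned_object.data': {'id': 'some-port-id'}}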
aperevalovcelebdor: I think every reviewer will ask a question regarding _reduce_pod_sriov_amount ) we need to do something with it ;)09:26
celebdor:P09:26
ltomasboaperevalov, xD, yep, I didn't understand that either09:26
celebdorso the logic in my mind is09:26
celebdoryou get a request09:26
celebdorfrom nwpg09:27
celebdorthen you check how many available you have09:27
celebdorchoose one09:27
celebdorand annotate09:27
celebdora vif object09:27
celebdorltomasbo: I wonder why that specific one did not work09:28
ltomasbocelebdor, yep, some race or something09:28
ltomasbocelebdor, etcd seems to be a bit slow sometimes...09:29
ltomasboperhaps a try/catch there will just help, and retry...09:29
ltomasbocelebdor, perhaps the problem is on the kubernetes side09:31
ltomasbocelebdor, pod got annotated, we received the event, but when trying to get the annotation it was not yet updated...09:31
aperevalovthe design is the following: in the NAD definition we can have a list of networks, for example 2. SriovDP doesn't know about NADs, it looks only into the "resources" section, where we as a user can specify in this case only 1 VF. But in the NAD we can have 2 subnets with direct ports.09:31
ltomasbocelebdor, but not sure why the retries are not helping...09:31
celebdorltomasbo: that sounds fishy09:32
celebdorcan you make a bug report on launchpad09:32
celebdorand collect the info there09:32
celebdor?09:32
ltomasbosure!09:32
ltomasboI think dulek will now...09:32
ltomasboI'll file the bug right away09:33
celebdorltomasbo: s/now/know/ ?09:35
celebdoraperevalov: DP can only let you specify one?09:35
celebdorhow's that?09:35
ltomasbocelebdor, dulek: https://bugs.launchpad.net/kuryr-kubernetes/+bug/179254909:42
openstackLaunchpad bug 1792549 in kuryr-kubernetes "CNI fails on KeyError: versioned_object.namespace" [Undecided,New]09:42
celebdorltomasbo: thanks09:42
celebdorbrb09:42
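A rough sketch of the retry ltomasbo floats above ("perhaps a try/catch there will just help, and retry"); the annotation key and helper are illustrative assumptions, not kuryr's actual code:

    import time

    from oslo_serialization import jsonutils
    from oslo_versionedobjects import base as ovo_base

    K8S_ANNOTATION_VIF = 'openstack.org/kuryr-vif'  # assumed annotation key

    def read_vif_annotation(pod, retries=3, delay=1):
        # The watch event can arrive before the o.vo annotation is fully
        # written, so retry instead of dying on the first KeyError.
        for _ in range(retries):
            raw = pod['metadata']['annotations'].get(K8S_ANNOTATION_VIF)
            if raw:
                try:
                    return ovo_base.VersionedObject.obj_from_primitive(
                        jsonutils.loads(raw))
                except KeyError:
                    pass  # primitive lacks the versioned_object.* keys yet
            time.sleep(delay)
        raise RuntimeError('VIF annotation never became readable')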
celebdorbak09:58
ltomasbocelebdor, octavia patch got +1 from Zuul... xD10:02
celebdorltomasbo: :P10:02
celebdoryou are evil10:02
celebdorxD10:02
celebdorit would be funny to somehow do a ninja merge10:03
celebdorand wait and see how long it would take them to figure it out10:03
celebdorxD10:03
ltomasboxD10:03
ltomasbonever...10:03
aperevalovcelebdor: yes, it can let you specify one VF in this case10:04
aperevalovCRD doesn't know about resources and vice versa in kubernetes10:05
ltomasbocelebdor, the cni issue seems to happen when there is another action over another pod in the same node...10:06
ltomasbointeresting...10:06
celebdoraperevalov: so how is it handled if you need two VFs on different attachments?10:06
celebdorltomasbo: cni daemon race?10:07
ltomasboseems so10:07
celebdorwith the multiprocessing10:07
celebdorplease, add this info and why you think that is to the bug10:07
celebdorltomasbo: we never encountered it with python2, did we?10:07
aperevalovcelebdor: what is different attachments here, another pods?10:08
celebdorno, no10:08
celebdorthe same pod attached to different physnets10:08
ltomasbocelebdor, that I cannot say, I think we did10:08
ltomasbocelebdor, but it only seems to happen since we changed the annotations10:08
celebdorltomasbo: what makes you think so?10:08
ltomasbocelebdor, well, since we changed to ovo10:08
ltomasbothe problem is when getting the object from the annotation10:09
ltomasbothat was not being executed before10:09
ltomasbobefore we simply processed the annotation, which, as we saw, was there...10:09
aperevalovcelebdor: in this case we will request 2 VF in resource/request/intel.com/sriov. First one will be attached to first subnet in NAD list, and so on...10:11
celebdoraperevalov: and from the device plugin side, how will this look like? resource request10:11
celebdorltomasbo: ok10:12
aperevalovkube-scheduler reads resources section:10:13
    resources:
      requests:
        cpu: "1"
        memory: "512Mi"
        intel.com/sriov: '2'
      limits:
        cpu: "1"
        memory: "512Mi"
        intel.com/sriov: '2'
celebdorI thought you said only one can be requested10:15
celebdorI must have misunderstood10:15
aperevalovthen it will find the appropriate DP, registered to handle intel.com/sriov. And then kubernetes will call DP->Allocate if the DP reported enough resources before in the ListAndWatch callback.10:15
aperevalovcelebdor: no, that was an example of an incorrectly filled resource request ) so that function _reduce_pod_sriov_amount is for a sanity check10:17
*** drybjed27 has joined #openstack-kuryr10:20
[spam flood from drybjed27 removed]
celebdordrybjed27: thank you for your message10:23
celebdorbut in the future, for big pastes in this channel10:23
celebdorplease, paste it all on paste.openstack.org10:23
ltomasbolol10:23
celebdorand then send us the link10:23
*** drybjed27 has quit IRC10:24
ltomasbocelebdor, ohh, even more weird...10:28
ltomasbonow I see the same messages...10:28
ltomasbobut the pod went to running...10:28
ltomasboso, perhaps that was not even the error...10:28
celebdorltomasbo: retry?10:29
ltomasboahh, yep, that is the retry from the previous one...10:29
ltomasbook ok10:29
ltomasbocelebdor, aperevalov: what I don't quite get about the SRIOV patch is the following10:36
ltomasbocelebdor, aperevalov: _get_sriov_num_vf seems to be returning the amount of requested vfs10:37
ltomasbonot the available ones for pod creation10:37
ltomasbowhat am I missing?10:37
ltomasboand the same with _reduce_pod_sriov_amount...10:39
aperevalov_get_sriov_num_vf is necessary to retrieve all requested VF for all containers in pod specification10:41
ltomasboyes, that is what I thought10:43
ltomasboaperevalov, but the description of the function says:10:43
ltomasbo"""Returns a number of avaliable vfs for current pod creation"""10:43
ltomasboit should be then: returns the number of requested vfs for current pod creation10:44
ltomasboand then, also the checking at request_vif is wrong10:44
ltomasboif not amount, it means it is not requesting any, so just returning should be fine10:44
*** xekz has joined #openstack-kuryr10:45
[spam flood from xekz removed]
ltomasbobut you should check if the requested amount is larger than the total amount (which I guess is the number of items on physnet_mapping)10:45
*** xekz has quit IRC10:45
ltomasbocelebdor, ^^ seems they don't know how to use pastebin...10:46
aperevalovltomasbo: it's possible to have multiple calls of request_vif from request_additional_vifs10:47
ltomasboaperevalov, still...10:48
aperevalovltomasbo: we have to know in SriovVIFDriver how many times request_vif was already called.10:48
ltomasboaperevalov, but my question/doubt is in another part10:48
ltomasbowithout considering concurrent calls, I don't get the flow either10:49
ltomasbothe annotation on the pod means how many vfs are requested, right?10:49
aperevalovright10:49
ltomasbothen, get_sriov_num_vf is getting how many vfs are requested, not how many are available10:50
aperevalovit's a current CRD issue that resources/request/intel.com/sriov is specified per container, not per pod. It doesn't matter whether it's requested or available - it's a number we can't exceed in the CNI plugin.10:50
celebdorltomasbo: and it pisses me off that some letters are not showing on their pastes10:51
celebdorso I can't even read it well10:51
aperevalovand to check it we are using reduce_... function10:51
ltomasboaperevalov, how does it not matter whether it's requested or available?10:52
ltomasboof course we cannot exceed the number of available ones10:52
dmelladodamn10:52
ltomasbomy point is that you are not checking how many are available10:52
dmelladospam again10:52
ltomasboyou are just checking how many are requested, so you don't know if you have enough available or not10:53
dmelladodidn't the infra guys turn on some measure so that only registered users can join10:53
dmelladoor they get thrown to #openstack-unregistered10:53
ltomasbo(though I'm probably missing something big here)10:53
dmellado?10:53
aperevalovltomasbo: SriovDP already passed the request, so the number requested is less than the number of available VFs on the host.10:53
aperevalovless than or equal10:54
ltomasboperhaps that is what I'm missing: the SriovDP interactions...10:54
ltomasboin my mind I have the normal kuryr flow10:54
ltomasboyou create a pod (in this case with sriov request)10:55
ltomasbothen vif-handler calls the specific driver to get the proper ports for them10:55
ltomasboyou are telling me that the pod notation is modified (in between) by the sriovDP?10:55
*** andjjj232 has joined #openstack-kuryr10:59
[spam flood from andjjj232 removed]
aperevalovltomasbo: yes only latest intel.com/sriov per one container remains. But I think kubernetes skip another unnecessary definition. Number in sriov field is not modified.10:59
*** optiz0r28 has joined #openstack-kuryr10:59
[spam flood from optiz0r28 and andjjj232 removed]
*** andjjj232 has quit IRC10:59
aperevalovltomasbo: SriovDP is like BaseHostFilter in nova.scheduler, it just passes or rejects instance creation. So if the requested number is greater than what is available, the pod will stay in PENDING state, and VIFHandler will skip it.11:03
*** optiz0r28 has quit IRC11:04
ltomasbook, that I got11:04
ltomasbothen, I have the same concern: once it is scheduled, it means it has enough resources11:04
ltomasboso, request_vifs is getting at get_sriov_num_vf the number of requested vfs by the pod, not the available ones11:05
ltomasbothe pod does not have information about the available ones, the kubelet has11:05
aperevalovltomasbo: available for current pod creation, yes it's not available in general. No problem: we can change function doc as you suggested: returns the number of requested vfs for current pod creation11:07
ltomasboI'm leaving comments on the code. At least for me it was misleading...11:07
ltomasbothanks!11:07
ltomasboI'm not getting my head around the reduce_pod_sriov_amount function...11:08
ltomasboperhaps it needs some rewording too11:08
ltomasboI think I know why you added that there, but not sure that will fix the problem for concurrent calls11:09
aperevalovimagine we have more NADs in our pod configuration than intel.com/sriov. SriovDP will pass such a pod configuration.11:09
ltomasboI understand, if you have different network request for that pod11:10
ltomasbothere may be the case there is more than the actual number, right?11:10
ltomasbosame as if there is only one NAD, but there is another concurrent call for another pod, right?11:11
ltomasboas k8s is not considering the resources on those, right?11:11
aperevalovright, according to the CNI spec, pod creation is done one by one, concurrency excluded. But we can still face the issue when the cni-daemon dies during request_vif.11:11
ltomasboaperevalov, I have a question: http://logs.openstack.org/25/594125/12/check/build-openstack-sphinx-docs/2391934/html/installation/sriov.html11:12
ltomasboin that doc, in the example11:12
ltomasbothere is sriov-net1, sriov-net211:12
ltomasboin the annotations, which means the pod will be on the default network, plus 1 interface on sriov-net1 and another one on sriov-net2, right?11:13
ltomasboand that is why it needs to set resources: requests: 211:13
ltomasboright?11:13
aperevalovyes, 3 network interfaces inside the pod containers, and yes, probably 3 networks, if sriov-net1 and sriov-net2 have different subnets.11:14
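For reference, the pod spec shape being discussed, reconstructed from the exchange above and the SR-IOV HOWTO patch; the pod name and image are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: sriov-example            # illustrative
      annotations:
        k8s.v1.cni.cncf.io/networks: sriov-net1,sriov-net2
    spec:
      containers:
      - name: app
        image: busybox               # illustrative
        resources:
          requests:
            intel.com/sriov: '2'
          limits:
            intel.com/sriov: '2'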
ltomasbook, but then, when k8s schedules that pod onto a node11:14
ltomasboit knows it has enough resources for it11:14
aperevalovltomasbo: yes we need to ask SriovDP for 2 VF here11:15
ltomasbowhy do you need to do the reduce_pod_sriov?11:15
ltomasboyou are supposed to have enough resources... otherwise it is a race on k8s11:15
ltomasbos/race/problem11:15
aperevalovbecause we can have here k8s.v1.cni.cncf.io/networks: sriov-net1,sriov-net2, sriov-net2, but still asking for 2 SRIOV.11:15
aperevalovkubernetes passes it.11:16
ltomasboyes, but then the pod.yaml is what is wrong, right?11:16
aperevalovfrom kubernetes + SriovDP point of view - no it's correct pod configuration.11:17
ltomasboin that case, what your code will do is silently skip sriov-net2 (the second one)11:17
aperevalovyes, network will be skipped.11:17
ltomasboand the resulting pod will only have 3 interfaces (1 for default, 1 for sriov-net1 and 1 for sriov-net2)11:17
aperevalovyes11:18
ltomasbook11:18
ltomasbo\o/ I think I finally got it!11:18
ltomasbothanks!11:18
ltomasboand this is meant to go through NAD all the time, right?11:19
aperevalovgo through NAD, do you mean through multi_vif, when it's an additional network?11:20
ltomasboyep11:20
aperevalovkuryr-kubernetes might also be configured to use SriovVIFDriver as the main driver )11:21
ltomasboaperevalov, and the intended behavior from the kubernetes side is to not worry about the annotations and just attach as many networks as you request?11:21
ltomasbook, I guess for main driver there is no problem, as there is just 1 network, so it should match11:22
ltomasboI asked because another option is to extend the request_additional_vifs function to consider the amount of networks and the amount of requested VFs11:23
ltomasboand fail if there is a mismatch11:23
aperevalovltomasbo: yes we try to create as many ports as possible (available in the request for all containers in the pod)11:23
ltomasbocelebdor, what do you think about that?11:23
ltomasbocelebdor, if we request (on the annotation) more VFs than in the pod spec, should we fail early and not allocate any, or should we just allocate as many as possible and that's it?11:24
aperevalovltomasbo: I think it's necessary to extend the pod vif driver interface, to add there a method which will return the amount for a pod configuration.11:25
aperevalovand multi_vif will call that method before request_vif series.11:25
ltomasboaperevalov, another option is to check the annotations on your sriov driver too11:26
ltomasboaperevalov, if there is a need to fail when there are not enough VFs11:27
aperevalovnow only SriovVIFDriver has such behavior.11:27
ltomasboaperevalov, you can add it to your sriov.py driver11:27
ltomasboinside request_vifs11:27
aperevalovltomasbo: Do you suggest making a sanity check at the first request_vif?11:27
ltomasboyou could check if get_sriov_num_vf > network_number_on_annotations11:28
ltomasboaperevalov, only if it is better to fail the creation than to do it partially11:29
aperevalovWhy do we need this check? It can be useful only when we want to stop pod creation if the pod configuration was a little bit wrong?11:29
ltomasbothat is my question, what option is preferred when the number does not match:11:30
ltomasbo- fail and not allocate any vf/port for the pod11:30
ltomasbo- create as many vf/port for the pod as possible, and leave the rest uncreated11:30
ltomasbocelebdor, ^^11:31
aperevalovI think it's better to create ports, and report about unused networks.11:31
ltomasbook11:31
ltomasbothen I'll add a comment on your patch set about that warning11:31
aperevalovok11:31
ltomasboI don't have strong opinion on what option is the best to be honest11:31
ltomasbobut I'm glad I finally got it! xD11:31
ltomasboand on friday!11:31
ltomasbothanks for the patience11:32
aperevalovyou are welcome )11:32
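A hedged sketch of the warn-and-continue behavior agreed on just above (allocate what the resource request allows, report the leftover networks); the function and names are illustrative, not the actual driver code:

    import logging

    LOG = logging.getLogger(__name__)

    def pick_sriov_networks(requested_networks, num_vfs):
        # requested_networks: NADs listed in the pod annotation
        # num_vfs: total VFs granted via the pod's intel.com/sriov request
        usable = requested_networks[:num_vfs]
        skipped = requested_networks[num_vfs:]
        if skipped:
            LOG.warning('Pod requests %d SR-IOV networks but only %d VFs; '
                        'skipping: %s', len(requested_networks), num_vfs,
                        ', '.join(skipped))
        return usable  # the driver would create one port per usable network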
*** AJaeger has joined #openstack-kuryr11:35
AJaegerdmellado: need to become ops - and be here for the invite, give me a sec...11:35
ltomasboaperevalov, and now I just get why you named them available vfs...11:36
ltomasbook ok11:36
*** ChanServ sets mode: +o AJaeger11:36
*** Sigyn has joined #openstack-kuryr11:36
AJaegerdmellado: Sigyn joined...11:36
dmelladothanks, let's see if it helps11:37
dmelladoAJaeger++11:37
*** ChanServ sets mode: -o AJaeger11:38
*** garyloug has quit IRC11:40
*** rh-jelabarre has joined #openstack-kuryr11:53
*** AJaeger has left #openstack-kuryr11:53
*** maysams has joined #openstack-kuryr12:00
*** ChanServ sets mode: +rf #openstack-unregistered12:35
*** garyloug has joined #openstack-kuryr12:44
*** ccamposr has quit IRC13:21
*** garyloug_ has joined #openstack-kuryr13:52
celebdorltomasbo: ping13:53
*** garyloug has quit IRC13:55
openstackgerritAlexey Perevalov proposed openstack/kuryr-kubernetes master: Add container_id to connect method of BaseBindingDriver  https://review.openstack.org/59700813:59
openstackgerritAlexey Perevalov proposed openstack/kuryr-kubernetes master: Support DPDK application on bare-metal host  https://review.openstack.org/59673113:59
openstackgerritAlexey Perevalov proposed openstack/kuryr-kubernetes master: Use /var/run instead of /var/run/openvswitch  https://review.openstack.org/60262113:59
openstackgerritAntoni Segura Puimedon proposed openstack/kuryr-tempest-plugin master: Do not rely on ps to check the daemon  https://review.openstack.org/60262314:00
*** garyloug_ has quit IRC14:12
*** garyloug has joined #openstack-kuryr14:12
*** maysams has quit IRC14:27
dulekltomasbo: No idea why that bug happens really… That's a pretty strange one.15:11
*** garyloug has quit IRC16:09
*** gkadam has quit IRC16:55
*** maysams has joined #openstack-kuryr17:38
openstackgerritMerged openstack/kuryr-kubernetes master: cni_ds_init: exec into the main process  https://review.openstack.org/60257318:11
*** rh-jelabarre has quit IRC18:23
*** rh-jelabarre has joined #openstack-kuryr18:23
*** maysams has quit IRC18:43
*** celebdor has quit IRC19:16
*** rh-jelabarre has quit IRC19:45
*** rh-jelabarre has joined #openstack-kuryr19:45
openstackgerritMichał Dulko proposed openstack/kuryr-kubernetes master: Add non-containerized Python 3.6 gate  https://review.openstack.org/60215021:27
*** maysams has joined #openstack-kuryr23:53

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!