Monday, 2016-11-28

00:46 *** limao has joined #openstack-kuryr
00:57 <openstackgerrit> Liping Mao proposed openstack/kuryr-libnetwork: kuryr-libnetwork rally test job error log  https://review.openstack.org/399950
01:13 *** tonanhngo has joined #openstack-kuryr
01:16 *** tonanhngo has quit IRC
01:41 *** yedongcan has joined #openstack-kuryr
02:11 *** tonanhngo has joined #openstack-kuryr
02:12 *** tonanhngo has quit IRC
02:52 <janonymous> yedongcan: ping
02:56 <yedongcan> janonymous: pong
02:57 *** tonanhngo has joined #openstack-kuryr
02:58 <yedongcan> janonymous: Hi, are you planning to rebase your mock test code?
02:58 *** tonanhngo has quit IRC
03:03 *** tonanhngo has joined #openstack-kuryr
03:07 *** tonanhngo has quit IRC
03:10 *** limao_ has joined #openstack-kuryr
03:13 *** limao has quit IRC
03:17 <janonymous> yedongcan: yeah
03:18 <janonymous> yedongcan: i was thinking to do it one by one to avoid merge conflicts...
03:38 <yedongcan> janonymous: Thanks, I see that most conflicts are caused by uuid generation.
03:41 <janonymous> yedongcan: yes :(
04:05 *** yamamoto_ has joined #openstack-kuryr
04:57 *** yuanying has quit IRC
05:21 *** janki has joined #openstack-kuryr
05:32 *** shashank_hegde has joined #openstack-kuryr
05:40 *** shashank_hegde has quit IRC
05:42 *** shashank_hegde has joined #openstack-kuryr
05:43 *** tonanhngo has joined #openstack-kuryr
05:45 *** tonanhngo has quit IRC
05:49 *** yedongcan has quit IRC
06:02 *** yuanying has joined #openstack-kuryr
06:02 *** yedongcan has joined #openstack-kuryr
06:20 *** yedongcan1 has joined #openstack-kuryr
06:22 *** yedongcan has quit IRC
06:40 *** yedongcan1 has quit IRC
07:00 *** oanson has joined #openstack-kuryr
07:01 *** pmannidi has quit IRC
07:04 *** yuanying has quit IRC
07:11 *** yamamoto_ has quit IRC
07:19 *** yedongcan has joined #openstack-kuryr
07:46 *** irenab has quit IRC
07:47 *** irenab has joined #openstack-kuryr
07:49 *** janki is now known as janki|lunch
07:55 *** irenab has quit IRC
07:55 *** yamamoto_ has joined #openstack-kuryr
07:55 *** irenab has joined #openstack-kuryr
08:00 *** yedongcan has quit IRC
08:02 *** dimak has joined #openstack-kuryr
08:02 *** tonanhngo has joined #openstack-kuryr
08:04 *** tonanhngo has quit IRC
08:12 *** irenab has quit IRC
08:13 *** irenab has joined #openstack-kuryr
08:14 <openstackgerrit> Merged openstack/fuxi: Update flake8 ignore list  https://review.openstack.org/378250
08:23 <openstackgerrit> Merged openstack/fuxi: Add py34 for fuxi test  https://review.openstack.org/393347
08:31 *** yedongcan has joined #openstack-kuryr
08:37 *** shashank_hegde has quit IRC
08:37 *** shashank_hegde has joined #openstack-kuryr
08:44 *** shashank_hegde has quit IRC
08:46 *** shashank_hegde has joined #openstack-kuryr
08:48 <openstackgerrit> Merged openstack/fuxi: Fix typos in rootwrap.conf  https://review.openstack.org/373585
08:52 <openstackgerrit> Merged openstack/fuxi: Show team and repo badges on README  https://review.openstack.org/402538
08:55 *** shashank_hegde has quit IRC
08:55 *** apuimedo has quit IRC
08:55 *** gsagie has joined #openstack-kuryr
08:55 *** apuimedo has joined #openstack-kuryr
09:02 <apuimedo> vikasc: did you find out what's the problem with the config?
09:03 <vikasc> apuimedo, yes, and pushed a fix as well
09:03 <vikasc> apuimedo, https://review.openstack.org/#/c/403325/
09:04 <openstackgerrit> Merged openstack/kuryr: Show team and repo badges on README  https://review.openstack.org/402539
09:04 <vikasc> apuimedo, in server.py the controller's import happens before the config file is read
09:04 <apuimedo> heh... That's what I said on Friday :P
09:04 <vikasc> apuimedo, this issue was noticed before the split as well but then got missed somewhere :)
09:04 <apuimedo> but I got the place wrong
09:04 <apuimedo> :-)
09:05 <vikasc> apuimedo, yep, you suggested the import order in controllers.py
09:06 <vikasc> apuimedo, and i tried to address your concern about the import failing if the binding driver is not configured, in the vlan driver patch
09:06 <apuimedo> ok
09:06 <apuimedo> vikasc: I saw irenab minused it
09:07 <vikasc> apuimedo, yeah, irenab raised a concern on that, so i tried to explain
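[Editor's note: the bug vikasc describes, a module imported before the config file is parsed picking up stale defaults, can be sketched as below. The names (`CONF`, `read_config_file`, `make_driver`) are illustrative stand-ins, not the actual kuryr-libnetwork code, which uses oslo.config.]

```python
CONF = {"binding_driver": "default"}  # defaults; real code uses oslo.config

def read_config_file():
    # Stand-in for parsing kuryr.conf on startup.
    CONF["binding_driver"] = "vlan"

def make_driver():
    # Simulates module-level driver selection performed at import time:
    # whatever CONF holds *at import* is what the module keeps using.
    return CONF["binding_driver"]

# Buggy order: the "import" (driver selection) runs before config parsing,
# so the module latches onto the default instead of the configured driver.
buggy = make_driver()
read_config_file()
# Fixed order: parse the config first, then import/initialize the driver.
fixed = make_driver()
```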
09:09 <vikasc> limao_, "I mean if you do not separate them, when you just want to initialize the vlan ids cache, you will also allocate one free vlan from them."
09:09 <limao_> vikasc: hi
09:09 <vikasc> limao_, sorry, i could not get your comment, would you mind rewording it please
09:09 <irenab> vikasc: sorry for not following your intention on this patch. I just want to keep the code as closed as possible to future modifications, so since we already have the drivers model, I do not think the manager should have if/else per driver type
09:11 <limao_> vikasc: I'm thinking of this case: if kuryr-libnetwork is rebooted, the coe (kuryr-libnetwork) needs to re-initialize the available_local_vlans in SegmentationDriver; how do you do this?
09:12 <limao_> vikasc: if you call allocate_segmentation_id, it can initialize the available_local_vlans, but it will also allocate one more vlan in this function.
09:13 <vikasc> limao_, PTAL https://review.openstack.org/#/c/402462/
09:13 <vikasc> limao_, this is the kuryr-libnetwork side work by ltomasbo
09:14 <vikasc> limao_, SegmentationDriver is imported in controllers.py, so available_local_vlans will get initialised soon after kuryr-libnetwork is up
09:18 <ltomasbo> limao_, yes, I included a check_for_vlan_ids function for when kuryr-libnetwork is rebooted
09:18 <ltomasbo> so that the already used vlan ids are passed to the SegmentationDriver functions
09:19 <limao_> ltomasbo: vikasc: yeah, reading that part, thanks  :)
09:19 <ltomasbo> great! :D
09:19 <vikasc> limao_, :)
09:19 <ltomasbo> reviews on the patch are welcome!
09:20 <vikasc> irenab, thinking on how i can make it better
09:21 <limao_> ltomasbo: vikasc: I think the action should be in the init phase, not in ipam_request_address
09:22 <irenab> vikasc: you should get the SegDriver created only when required
09:22 <limao_> ltomasbo: vikasc: Think about the case I mentioned: if we restart kuryr-libnetwork and then delete an existing container, the available_local_vlans has not been initialized in that case (I see it is initialized in ipam_request_address)
09:24 <ltomasbo> in ipam_request_address we call get_segmentation_ids
09:24 *** lmdaly has joined #openstack-kuryr
09:24 <ltomasbo> but the port_vlan_dic is initialized when kuryr-libnetwork is rebooted
09:24 *** janki|lunch is now known as janki
09:24 <ltomasbo> so, it should have all the vlan ids being used at the reboot, right?
09:25 <limao_> ltomasbo: _get_segmentation_id will also pass the vlan ids into SegmentationDriver
09:25 <ltomasbo> yes, the current set
09:27 <limao_> ltomasbo: oh, it may not have an impact... (even if available_local_vlans in SegmentationDriver is out of date), you have the latest in port_vlan_dic
09:27 <ltomasbo> but yes, we can change the SegmentationDriver so that it gets initialized with a current set, without returning a vlan id (when kuryr-libnetwork is rebooted)
09:27 <ltomasbo> and then no need to send the current set
09:28 <ltomasbo> yes, the idea is that you always have port_vlan_dic with the latest one
09:28 <limao_> ltomasbo: yes, that is what I mean; I think maybe separating init and allocate vlan id would be better
09:28 <irenab> ltomasbo: I wonder if we need to keep nested case handling at the top Controller level
09:29 <ltomasbo> irenab: what do you propose instead?
09:29 <irenab> we have a drivers model, but still keep patching the code with "if/else driver_type is XXX"
09:29 <limao_> vikasc: ltomasbo: but it depends on you, the current way is also ok. For me, separating would be clearer :-)
09:30 <irenab> ltomasbo: there are a number of alternatives, for example you can have a default Noop driver
09:31 <irenab> ltomasbo: you can subclass Controller for the nested case
09:32 <ltomasbo> actually, I think controller.py is getting too big and it could be good to split it to keep concepts clearer
09:32 <irenab> and also keep in mind that the bare metal libnetwork support can be used with a neutron from before the trunk port API was added
09:35 <irenab> ltomasbo: I agree. Maybe worth subclassing or delegating nested case handling to some sort of 'handler' of the controller
09:36 <vikasc> limao_, "I think maybe separating init and allocate vlan id would be better". As I understand it, init is already separate from allocate_vlan_ids, i still dont get your point :)
09:37 <limao_> vikasc: the init here means initializing available_local_vlans in SegmentationDriver
09:38 <ltomasbo> irenab: I'll think about how to better do it, but please do leave a comment about it on the patch, so that others can also see/comment on it
09:38 <limao_> vikasc: we can only initialize it by calling "allocate_segmentation_id" right now, but if you call it, you will also allocate one vlan.
09:38 <vikasc> limao_, that's not the case
09:39 <irenab> ltomasbo: sure
09:40 <vikasc> limao_, just by "from kuryr.lib import segmentation_type_drivers as seg_driver" the initialisation is done.
09:40 <vikasc> limao_, no need to call "allocate_segmentation_id" to initialize the vlan ids
09:41 <limao_> vikasc: but at that time the vlan ids in segmentation_type_drivers may not be right
09:41 <vikasc> limao_, or did you mean something else?
09:42 <vikasc> limao_, why/when will the vlan ids not be right?
09:42 <limao_> vikasc: the right allocated_ids are only passed into segmentation_type_drivers the first time you call "allocate_segmentation_id", right?
09:43 * vikasc reading
09:44 <limao_> vikasc: "def allocate_segmentation_id(self, allocated_ids=set()):"
09:44 <vikasc> limao_, yes
09:45 <vikasc> limao_, the coe should determine the already allocated ids before calling allocate_segmentation_id
09:45 <limao_> vikasc: currently it is done in "ipam_release_address" in https://review.openstack.org/#/c/402462/
09:47 <limao_> vikasc: yes, the coe should decide it, but what if the user restarts kuryr-libnetwork and then just deletes a container (at this point you have never called allocate_segmentation_id)
09:49 <limao_> vikasc: ltomasbo mentioned he already has the latest data in port_vlan_dic, but available_local_vlans in SegmentationDriver is out of date at that time.
09:50 <vikasc> limao_, let us try to understand with an example
09:50 <vikasc> :)
09:51 <limao_> vikasc: OK, say you have a container on the vm which uses vlan 100.
09:51 <vikasc> limao_, when the coe started, allocated_vlan_id = "1-100" (assume)
09:51 <limao_> vikasc: OK
09:51 <vikasc> limao_, now after this, the container is using 100
09:52 <vikasc> limao_, available_vlan_ids = 1-99
09:52 <limao_> vikasc: yes
09:52 <vikasc> limao_, the container is running and using 100
09:52 <vikasc> limao_, now what happens?
09:52 <limao_> vikasc: if we restart kuryr-libnetwork
09:53 <vikasc> limao_, let me tell it
09:53 <vikasc> limao_, wait
09:53 <limao_> vikasc: OK
09:54 <vikasc> limao_, on coming up, first of all seg_driver initializes available_ids to 1-100
09:54 <limao_> vikasc: yes
09:54 <vikasc> limao_, then the coe should read the trunk ports and init already_allocated={100}
09:55 <vikasc> limao_, now?
09:55 <vikasc> limao_, what event happens next, where does it go wrong
09:56 <limao_> vikasc: in the coe, already_allocated={100}; in seg_driver it is still 1-100 now
09:56 <vikasc> limao_, which use case will be affected by this?
09:57 <vikasc> limao_, now let's say one more container is launched, the coe will pass "100" and so seg_driver allocates 99
09:58 <limao_> vikasc: at this time, if you delete the container
09:58 <vikasc> limao_, ok let's say the one using 99, ok?
09:58 <limao_> vikasc: ok
09:59 <vikasc> limao_, so on deletion, seg_driver will add 99 back to available, making it "1-100" again
10:00 <vikasc> limao_, and the coe has allocated_ids=100, which is being used by the already running container
10:00 <vikasc> limao_, all looks good to me so far :)
10:01 <limao_> vikasc: Yeah, if you never started a new container
10:02 <vikasc> limao_, :O
10:02 <limao_> vikasc: the available_ids will be 1-100, and you will re-add 100 into it. (it is a set, so it should be ok, just a little bit strange to me: freeing a vlan already in available_ids)
10:03 <limao_> vikasc: :( (if you never started a new container, and just delete an old one which is using vlan 100)
10:03 *** yedongcan1 has joined #openstack-kuryr
10:03 <ltomasbo> related to that, I just realized I forgot to call release_segmentation_id in the controller!
10:03 <ltomasbo> going to update it!
10:04 <vikasc> ltomasbo, you were reading the story :D
10:04 <vikasc> limao_, functionality wise it seems fine.
10:04 <ltomasbo> I was in a call, but I think it should work either way
10:04 <ltomasbo> but perhaps it is more intuitive the way limao_ is proposing
10:05 <limao_> vikasc: yes, functionality wise it is fine :)
10:05 <vikasc> limao_, :)
10:05 <ltomasbo> although controller.py will get the allocated ids at reboot time, so the allocation should be working anyway
10:05 *** yedongcan has quit IRC
10:06 <limao_> ltomasbo: vikasc: yes, well, either way is ok for me now, thanks for making it clearer to me :)
10:06 <vikasc> thanks limao_ :)
10:09 <ltomasbo> thanks limao_, I discovered the missing call to release_id just by thinking about your example!
10:09 <ltomasbo> :D
10:10 <limao_> ltomasbo: :D
10:11 <irenab> vikasc: ltomasbo limao_, looks like whiteboard design :-)
10:11 <vikasc> :P
10:12 <limao_> ;-)
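[Editor's note: the design limao_ is arguing for, initializing the vlan pool separately from allocating an id, can be sketched as below. The class is an illustrative simplification, not the actual kuryr code. It also sidesteps the `allocated_ids=set()` signature quoted above: a mutable default argument is shared across calls in Python, which is one more reason to pass the allocated set explicitly or split initialization out.]

```python
class SegmentationDriver:
    """Illustrative vlan-id pool, not the real kuryr implementation."""

    VLAN_RANGE = range(1, 101)  # assume ids 1-100, as in the example above

    def __init__(self):
        self.available_local_vlans = set(self.VLAN_RANGE)

    def initialize(self, allocated_ids):
        # Separate init step: drop ids already in use (e.g. read from
        # existing trunk subports after a kuryr-libnetwork restart)
        # WITHOUT allocating anything.
        self.available_local_vlans -= set(allocated_ids)

    def allocate_segmentation_id(self, allocated_ids=None):
        # No mutable default; callers may still pass the currently
        # allocated ids to resync the pool before allocating.
        if allocated_ids:
            self.available_local_vlans -= set(allocated_ids)
        return self.available_local_vlans.pop()

    def release_segmentation_id(self, vlan_id):
        self.available_local_vlans.add(vlan_id)


# Restart scenario from the example: a container already uses vlan 100.
driver = SegmentationDriver()
driver.initialize({100})               # resync without allocating anything
vid = driver.allocate_segmentation_id()  # never hands out 100
```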
10:16 <openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr-libnetwork: Nested-Containers: trunk subports management  https://review.openstack.org/402462
10:20 *** limao_ has quit IRC
10:34 *** yedongcan1 has left #openstack-kuryr
10:41 *** neiljerram has joined #openstack-kuryr
10:50 *** pcaruana has joined #openstack-kuryr
11:11 *** irenab has quit IRC
11:12 *** irenab has joined #openstack-kuryr
11:16 <openstackgerrit> dengshaolin proposed openstack/fuxi: Retrieve volumes from cinder with matched metadata  https://review.openstack.org/390448
11:30 *** oanson has quit IRC
11:31 *** oanson has joined #openstack-kuryr
11:34 *** irenab has quit IRC
11:37 *** irenab has joined #openstack-kuryr
11:40 *** rock has joined #openstack-kuryr
11:42 *** irenab has quit IRC
11:42 *** irenab has joined #openstack-kuryr
11:47 *** irenab has quit IRC
11:49 <rock> Hi. I am new to Kuryr, so I want to know 1) the kuryr use cases, 2) who is mostly using kuryr, and 3) how we can use kolla, magnum and kuryr all together.
11:49 <rock> Could you please provide me answers to these questions?
11:50 *** janki has quit IRC
11:50 <apuimedo> rock: sure
11:50 <apuimedo> the use case is to use Neutron and Cinder from Docker
11:51 <apuimedo> that is, the networking that the containers get is provided by Neutron ports
11:52 <apuimedo> rock: we have contributors from quite a few companies (in no particular order): Huawei, Cisco, Nec, Intel, Mirantis, Red Hat, AT&T
11:52 <apuimedo> kolla can deploy kuryr-libnetwork so that docker containers will be able to use Neutron networks
11:53 <apuimedo> there is a PoC from portdirect that uses kubernetes to deploy openstack on Neutron networks https://github.com/portdirect/harbor
11:53 <apuimedo> so kolla-kubernetes could do the same with a bit of work
11:54 <apuimedo> and we are now finishing support for Neutron subports in kuryr
11:54 *** irenab has joined #openstack-kuryr
11:54 <apuimedo> so that Magnum clusters will be able to use them
11:55 <apuimedo> that way, you'll get one Neutron subport per container in the nova instances that magnum uses for its deployments
11:57 <rock> apuimedo: Thank you. I am also new to kolla and magnum. I just had a look at https://www.openstack.org/videos/barcelona-2016/containers-and-openstack-mapping-the-landscape and came to know some information about kolla, magnum and kuryr. But I still have some questions. Can I ask them now?
11:58 <apuimedo> rock: I'm the one on the right!
11:58 <apuimedo> rock: sure, go ahead. I'll try to answer them as quickly as I can, but I'm preparing a presentation and lunch time is coming in 5 minutes
12:01 <rock> apuimedo: Hmmm, thank you very much. Can you please give the main use cases of kolla and magnum?
12:01 <apuimedo> :-)
12:02 <apuimedo> kolla deploys OpenStack with each of the OpenStack services (Neutron, Nova, etc) running inside containers. It can do it with Docker and Ansible or with Kubernetes
12:02 *** irenab has quit IRC
12:02 <apuimedo> Magnum is one of the services that Kolla can deploy
12:03 <apuimedo> and what it does is create Docker swarm or Kubernetes clusters running on Nova created VMs
12:03 *** janki has joined #openstack-kuryr
12:03 *** limao has joined #openstack-kuryr
12:05 <rock> apuimedo: Kolla, magnum and kuryr: among these three, which technology is mostly used by industry?
12:06 <apuimedo> well, they are quite new; I suppose Magnum and kolla, since they are older, are more common
12:06 <apuimedo> rackspace uses Magnum and I think kolla too, for example
12:06 <apuimedo> but I don't really have industry usage data
12:06 <apuimedo> :-)
12:06 <rock> What are the scope and focus of kolla, magnum and kuryr respectively?
12:07 <apuimedo> Magnum: Container clusters as a service
12:07 *** limao_ has joined #openstack-kuryr
12:07 <apuimedo> Kolla: Openstack deployed inside containers
12:07 <apuimedo> Kuryr: Containers using Openstack resources (networks and storage)
12:08 *** irenab has joined #openstack-kuryr
12:09 <rock> In which language are kolla, magnum and kuryr implemented?
12:10 <gsagie> python
12:10 *** limao has quit IRC
12:10 <gsagie> like any other OpenStack project (i think)
12:11 <rock> apuimedo: Can I choose like: I have this type of requirement, so I use kolla (or) magnum (or) kuryr?
12:11 <rock> gsagie: thank you.
12:12 <apuimedo> gsagie: nice to see you
12:12 <gsagie> apuimedo: :)
12:12 <ivc_> irenab, apuimedo, ping
12:12 <apuimedo> rock: yes, you can use whichever you want
12:12 <apuimedo> and as many as you want
12:12 <apuimedo> ivc_: pong
12:12 <irenab> ivc_: hi
12:12 <gsagie> rock: each one of these projects serves a different goal
12:12 <irenab> ivc_: I updated the devref yesterday
12:13 <ivc_> apuimedo, irenab, FYI some CNI design decisions: https://dl.dropboxusercontent.com/u/12482084/CNI.xmind (note the comments on some nodes)
12:13 <irenab> take a look and if it's ok, I can convert it into rst
12:13 <ivc_> irenab, ok
12:13 <rock> apuimedo/gsagie: Thank you very much.
12:13 <irenab> ivc_: the internet does not like me today; it seems except for irc nothing is working properly. may take some time to review
12:14 <apuimedo> rock: you're most welcome
12:15 * apuimedo loves his fiber!
12:16 <apuimedo> irenab: you get dsl, right?
12:16 <irenab> toni, do not even start. I spent 30 mins with the tech support who tried to explain to me how I can know if the wifi is working ….
12:17 <apuimedo> lol
12:17 <rock> apuimedo/gsagie: Is the following statement correct? "The Magnum community decided to limit the scope to the management of Container Orchestration Engines and leave the management of containers to a new project which is now called Zun."
12:18 <apuimedo> irenab: relevant xkcd https://xkcd.com/806/
12:18 <gsagie> rock: are you writing a book? :)
12:18 <apuimedo> rock: that's right
12:18 <irenab> apuimedo: cannot browse …
12:18 <apuimedo> irenab: cellphone!
12:18 <apuimedo> xD
12:18 <gsagie> irenab: need a proxy? ;)
12:18 <rock> gsagie: No. I just saw it in https://www.openstack.org/videos/barcelona-2016/magnum-is-not-the-openstack-containers-service-how-about-zun
12:19 <apuimedo> irenab: mmm... Maybe you just have DNS screwed then
12:20 <apuimedo> ivc_: nice doc
12:20 <apuimedo> :-)
12:20 <irenab> will try the safe proven method, just reboot everything ….
12:20 <rock> apuimedo: Hmmm. So Magnum is going to be deprecated? Zun will come in place of magnum, with the same and some advanced features, right?
12:21 <apuimedo> no, they serve different purposes
12:22 <rock> apuimedo: So what is the main use case for zun?
12:22 <apuimedo> better ask hongbin
12:22 <apuimedo> :-)
12:22 <apuimedo> irenab: https://drive.google.com/file/d/0B19pY94Ttis4WnN0dENuTnhNdms/view?usp=sharing
12:22 <apuimedo> I put the xmind into a pdf for you
12:22 <apuimedo> maybe you can open it with the phone
12:24 * apuimedo -> lunch
12:25 <rock> apuimedo: hmmm, thank you.
12:25 <rock> apuimedo: have a nice lunch
12:26 <apuimedo> ivc_: for the ipdb netns stuff, check https://github.com/svinota/pyroute2/issues/290
12:26 <apuimedo> thanks
12:26 <gsagie> rock: Zun's mission is to expose an API to manage containers from OpenStack (regardless of which orchestration engine or container type you might be using); Magnum focuses on the deployment of container orchestration environments inside OpenStack, where the user then uses the orchestration engine he picked to manage the containers
12:27 <apuimedo> ivc_: I agree with the decisions
12:27 <apuimedo> but I'll need more explanations on the "Handler controls Exit"
12:27 <ivc_> apuimedo, there's no problem with NetNs atm (tho it does not work with pyroute2 from stable/newton)
12:28 <ivc_> apuimedo, regarding Exit, the idea is that CNI has to block until VIF.active == True, and that is how 'exit' (or more specifically unblocking CNI) is a responsibility of the Handler
12:28 *** dimak has quit IRC
12:29 *** oanson has quit IRC
12:30 <ivc_> apuimedo, so there are several options that i've covered. one option was to call CNI.success (which would print() and exit(0)) directly in the Handler, but imo it would be confusing
12:31 <rock> gsagie: Ok. Thank you. Which type of container management will Zun do?
12:32 <ivc_> apuimedo, irenab, btw i've tested it with the binary (golang) CNI 'bridge' driver with some tweaks and i can confirm that everything works now :) but the binary CNI 'bridge' can not be used outside the PoC
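[Editor's note: the "CNI blocks until VIF.active == True, and the Handler is responsible for unblocking it" design ivc_ describes can be sketched with a plain threading.Event. The `VIF` class and the handler callback are hypothetical stand-ins, not the actual kuryr-kubernetes code.]

```python
import threading

class VIF:
    """Illustrative stand-in for a Neutron VIF object."""
    def __init__(self):
        self.active = False

def cni_add(activated, timeout=5.0):
    # The CNI driver must not return to the container runtime until the
    # port is wired and ACTIVE; the Handler signals that via the Event.
    if not activated.wait(timeout):
        raise RuntimeError("timed out waiting for VIF to become active")
    return 0  # exit(0) / CNI 'success' in a real driver

vif = VIF()
activated = threading.Event()

def handler_on_port_active():
    # The 'Handler' side: whoever watches the Neutron/K8s events flips
    # the flag and unblocks the pending CNI request.
    vif.active = True
    activated.set()

# Simulate the port becoming active shortly after the CNI ADD starts.
timer = threading.Timer(0.1, handler_on_port_active)
timer.start()
rc = cni_add(activated)
timer.cancel()
```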
12:32 <irenab> apuimedo: thanks, succeeded in opening it
12:33 *** oanson has joined #openstack-kuryr
12:33 <ivc_> irenab, apuimedo, that's a screenshot, right? so the comments are not visible
12:33 <rock> gsagie: what are the different container management methods delivered by zun?
12:34 <irenab> ivc_: do you want to add the diagram to the devref?
12:34 <irenab> we can keep the discussion there
12:34 *** dimak has joined #openstack-kuryr
12:35 <ivc_> irenab, eventually yes, but not necessary for the first iteration of the devref
12:35 <irenab> I mean for the current discussion of your diagram
12:35 *** lmdaly has quit IRC
12:36 <irenab> ivc_: I will try to add you editing permissions
12:37 <irenab> ivc_: done
12:37 <ivc_> irenab, you mean the pdf on google-drive?
12:38 <irenab> yes
12:38 <ivc_> irenab, there are some comments i've left in the .xmind (you can see them as 'yellow notes' to the right of the text on nodes)
12:39 <irenab> I cannot load them currently …
12:39 <ivc_> apuimedo, btw, i recommend you update to XMind 8 :) its UI finally does not look like it's from the previous century
12:41 <ivc_> irenab, yeah you need the original .xmind file and XMind itself. can you install XMind? i'll try to upload the original file to GoogleDrive
12:42 <irenab> ivc_: I am still trying to download the file you shared; for some reason google drive keeps failing for me
12:43 <ivc_> irenab, https://drive.google.com/file/d/0By4a5Rs0KiTYVldRMUtLQ2VvWTg/view?usp=sharing
12:46 <irenab> now I just need to get xmind ..
12:47 *** yamamoto_ has quit IRC
12:52 *** janki has quit IRC
12:53 *** tonanhngo has joined #openstack-kuryr
12:55 *** tonanhngo has quit IRC
13:10 *** lezbar has quit IRC
13:21 <ivc_> irenab, i've reviewed the doc. looks good overall, i've left some comments
13:22 <ivc_> irenab, most importantly, the last diagram is the most valuable part of the whole doc.
13:23 <ivc_> irenab, imo to get the best out of it we should show CNI/Controller on the same diagram. also take a look at the CNI PoC (i've added the link to the doc) to get an idea of the watch/plug flow
13:26 <irenab> ivc_: thanks! taking a look now
13:33 <apuimedo> ivc_: there are comments?
13:33 *** yamamoto has joined #openstack-kuryr
13:33 <ivc_> apuimedo, yes :) click on those 'yellow notes' icons
13:34 <apuimedo> cool
13:34 <apuimedo> :-)
13:34 <apuimedo> I use xmind 7.5
13:34 <apuimedo> no 8.0 in arch linux yet
13:34 <apuimedo> :P
13:34 <ivc_> :(
13:34 <apuimedo> I'll check if I can update the pkgfile
13:47 <ivc_> irenab, i've uploaded a sketch for the ControllerPipeline diagram
13:48 <ivc_> apuimedo, btw, what DE do you use on Arch (Xfce? KDE? Gnome? *box?) :)
13:51 *** tonanhngo has joined #openstack-kuryr
13:52 *** tonanhngo has quit IRC
13:52 *** gsagie has quit IRC
13:53 <irenab> ivc_: I sort of like the approach you took for the Bridge CNI
13:54 <openstackgerrit> Luis Tomas Bolivar proposed openstack/kuryr-libnetwork: Nested-Containers: trunk subports management  https://review.openstack.org/402462
13:55 <irenab> ivc_: for resource handlers it's really ugly that it gets the watcher to stop
13:56 <ivc_> irenab, well i can think of 3 options (check the .xmind comments) and this one is the 'least evil' imo
13:57 <irenab> need to get it into a special EventHandler, like the Retry, but the opposite :-)
13:57 *** garyloug has joined #openstack-kuryr
13:57 <ivc_> eventually we could maybe add 'unsubscribe' to Handler (similar to RxPY)
13:58 <ivc_> irenab, that would be ugly too :) using an exception for 'success' i mean :)
13:58 *** lmdaly has joined #openstack-kuryr
13:59 <irenab> ivc_: having some issue downloading xmind, hope to resolve it soon
14:01 <irenab> ivc_: I hope we find a more elegant way to solve it; I really do not think the resource handler should know about the watcher
14:02 <irenab> it should signal somehow that its job is done, and the one who registered it can just unregister it
14:04 *** oanson has quit IRC
14:04 *** irenab_ has joined #openstack-kuryr
14:06 <ivc_> irenab, well the 'best' solution is 'unregister', but it requires some massive (expected) changes, so for now i want to use Watcher.stop; later we'll add 'unregister' i think
14:06 *** irenab has quit IRC
14:06 <ivc_> that's the plan for now i mean
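[Editor's note: the alternative irenab argues for, a handler signaling "my job is done" so its registrar can unregister it, rather than the handler calling Watcher.stop itself, could look roughly like this. All names here are hypothetical; at this point kuryr-kubernetes only had Watcher.stop.]

```python
class Watcher:
    """Minimal illustrative event dispatcher, not the kuryr Watcher."""

    def __init__(self):
        self._handlers = []

    def register(self, handler):
        self._handlers.append(handler)

    def dispatch(self, event):
        # Instead of a handler reaching into the watcher to stop it,
        # each handler returns True when its job is done and the
        # watcher (the registrar) unregisters it.
        self._handlers = [h for h in self._handlers if not h(event)]

    @property
    def active(self):
        return bool(self._handlers)


def wait_for_active(event):
    # Done (and unregistered) once the VIF reports active.
    return event.get("vif_active", False)


watcher = Watcher()
watcher.register(wait_for_active)
watcher.dispatch({"vif_active": False})  # handler still waiting
still_waiting = watcher.active
watcher.dispatch({"vif_active": True})   # handler done, unregistered
done = not watcher.active
```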
14:08 *** irenab_ has quit IRC
14:12 *** irenab has joined #openstack-kuryr
14:12 *** yamamoto has quit IRC
14:18 *** lezbar has joined #openstack-kuryr
14:23 <openstackgerrit> Merged openstack/kuryr-libnetwork: Show team and repo badges on README  https://review.openstack.org/402537
14:23 *** hongbin has joined #openstack-kuryr
14:29 *** yamamoto has joined #openstack-kuryr
14:31 *** yamamoto has quit IRC
14:52 <openstackgerrit> vikas choudhary proposed openstack/kuryr: Nested-Containers: vlan driver  https://review.openstack.org/361993
14:56 <ivc_> mchiappero, btw when you go through https://docs.google.com/document/d/1hExm0TNp_OMWY_XMVRYpSvg5G8kFrL5hcrT9M_NDCwo/edit?usp=sharing , do not hesitate to leave comments if you find something confusing/missing :) we really need a 'fresh eye' to look at it now before it gets into gerrit
14:57 <mchiappero> I'll try, even though I'm not familiar with the codebase
14:57 <mchiappero> I'll let you know :)
14:58 <alraddarla_> apuimedo: is there an existing bug in launchpad for the mox to mock refactor?
14:58 *** oanson has joined #openstack-kuryr
14:58 <apuimedo> alraddarla_: there is
14:59 <apuimedo> alraddarla_: there are also some patches started
14:59 <apuimedo> https://review.openstack.org/#/c/395453/
14:59 <apuimedo> so take the rest of the work
14:59 <janonymous> alraddarla_, apuimedo: some rebases left on those :P https://blueprints.launchpad.net/kuryr-libnetwork/+spec/drop-mox
15:00 <ivc_> mchiappero, that's the point: to see how helpful our devref is, because you are not familiar with the codebase :)
15:00 <apuimedo> janonymous: there are also some unit tests that are not on the patches with merge conflicts, right?
15:01 <janonymous> apuimedo: no, the only one left is test_kuryr, which i was waiting to submit until the other patches got merged, to avoid merge conflicts
15:02 <janonymous> apuimedo: i was doing it file-wise; the last 2 patches i had in my local repo were test_kuryr.py and base.py (removal of several functions)
15:02 <janonymous> https://review.openstack.org/#/q/status:open+project:openstack/kuryr-libnetwork+branch:master+topic:bp/drop-mox
15:03 *** tonanhngo has joined #openstack-kuryr
15:04 *** tonanhngo has quit IRC
15:04 <apuimedo> janonymous: please, coordinate it with alraddarla_
15:04 <apuimedo> so you can both split the work and be rid of mox soon :P
15:05 <janonymous> apuimedo: rebases would be very helpful :P
15:06 <apuimedo> janonymous: you mean that alraddarla_ would take over your patches and rebase them?
15:07 <janonymous> apuimedo: ahh.. i was thinking of rebasing these patches https://review.openstack.org/#/q/status:open+project:openstack/kuryr-libnetwork+branch:master+topic:bp/drop-mox and finding out if any test is left in them
15:10 *** yamamoto has joined #openstack-kuryr
15:10 <janonymous> apuimedo: as per my record only 2 files are remaining, test_kuryr.py and base.py... alraddarla_ could take base.py if that's fine?
15:11 <alraddarla_> janonymous: Would that be the only file I would need to update? Or do you need help with any of the other rebasing?
15:13 <janonymous> alraddarla_: i am fine with both
15:16 <alraddarla_> janonymous: Okay. Just so I don't accidentally mess something up, let me know exactly which file(s) you would like me to refactor from mox to mock and whether I need to rebase any of the old patch sets.
15:17 <alraddarla_> don't want to accidentally work on the same things you are working on and duplicate work :)
15:18 <janonymous> alraddarla_: :)
15:18 <janonymous> alraddarla_: https://review.openstack.org/#/c/395453/ try a rebase on this
15:22 <alraddarla_> sounds good. thanks janonymous
15:22 <janonymous> alraddarla_: yw!
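[Editor's note: the mox-to-mock refactor being coordinated here generally turns record/replay-style mox tests into mock stubs with explicit assertions. A generic sketch; the class and function under test are invented for illustration and are not from the kuryr codebase.]

```python
import unittest
from unittest import mock


class NeutronClient:
    """Hypothetical client, stands in for whatever the tests mock."""
    def create_port(self, network_id):
        raise NotImplementedError  # the real call would hit Neutron


def bind_port(client, network_id):
    port = client.create_port(network_id)
    return port["id"]


class TestBindPort(unittest.TestCase):
    # Old mox style (record / replay / verify), for comparison:
    #   m = self.mox.CreateMock(NeutronClient)
    #   m.create_port('net-1').AndReturn({'id': 'port-1'})
    #   self.mox.ReplayAll()
    #   ... exercise the code ...
    #   self.mox.VerifyAll()
    def test_bind_port(self):
        # mock style: stub the return value, then assert on the call.
        client = mock.Mock(spec=NeutronClient)
        client.create_port.return_value = {"id": "port-1"}
        self.assertEqual("port-1", bind_port(client, "net-1"))
        client.create_port.assert_called_once_with("net-1")
```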
15:47 *** yamamoto has quit IRC
15:54 *** hongbin has joined #openstack-kuryr
16:01 *** fkautz has quit IRC
16:01 *** david-lyle has quit IRC
16:01 *** HenryG has quit IRC
16:01 *** janonymous has quit IRC
16:02 *** david-lyle has joined #openstack-kuryr
16:02 *** HenryG has joined #openstack-kuryr
16:06 *** fkautz has joined #openstack-kuryr
16:06 *** janonymous has joined #openstack-kuryr
16:16 *** dimak has quit IRC
16:24 *** limao_ has quit IRC
16:36 *** lezbar has quit IRC
16:40 *** lezbar has joined #openstack-kuryr
16:49 *** irenab has quit IRC
16:50 *** lezbar has quit IRC
16:50 *** shashank_hegde has joined #openstack-kuryr
16:54 *** irenab has joined #openstack-kuryr
16:55 *** lezbar has joined #openstack-kuryr
16:59 *** lezbar has quit IRC
16:59 *** lezbar has joined #openstack-kuryr
17:02 <openstackgerrit> Darla Ahlert proposed openstack/kuryr-libnetwork: Unitttests with mock  https://review.openstack.org/395453
17:04 *** lezbar has quit IRC
17:06 <alraddarla_> janonymous, patch set rebased. let me know if i need to fix anything.
17:07 *** lezbar has joined #openstack-kuryr
17:19 *** pcaruana has quit IRC
17:32 *** garyloug has quit IRC
17:51 *** shashank_hegde has quit IRC
18:06 *** tonanhngo has joined #openstack-kuryr
18:09 *** lmdaly has quit IRC
18:30 *** tonanhngo has quit IRC
18:34 *** lezbar has quit IRC
18:44 *** tonanhngo has joined #openstack-kuryr
18:45 *** tonanhngo has quit IRC
18:46 *** shashank_hegde has joined #openstack-kuryr
19:02 *** jgriffith is now known as jgriffith_away
19:05 *** irenab has quit IRC
19:30 *** tonanhngo has joined #openstack-kuryr
20:14 *** lezbar has joined #openstack-kuryr
20:16 *** lezbar has quit IRC
20:18 *** jgriffith_away is now known as jgriffith
20:36 *** oanson has quit IRC
21:51 <openstackgerrit> Hongbin Lu proposed openstack/fuxi: Add .testrepository/ to .gitignore  https://review.openstack.org/403919
21:55 <openstackgerrit> Hongbin Lu proposed openstack/fuxi: Fix incorrect key on formating logging message  https://review.openstack.org/403921
22:44 <openstackgerrit> Hongbin Lu proposed openstack/fuxi: Separate unit tests from fullstack tests  https://review.openstack.org/403931
23:21 *** pmannidi has joined #openstack-kuryr
23:25 <openstackgerrit> Hongbin Lu proposed openstack/fuxi: Separate unit tests from fullstack tests  https://review.openstack.org/403931
23:58 <openstackgerrit> Hongbin Lu proposed openstack/fuxi: [WIP] Add volume tests  https://review.openstack.org/403941

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!