Sunday, 2015-03-22

00:26 *** Marga_ has joined #openstack-containers
00:28 *** dboik has quit IRC
00:44 *** jay-lau-513 has quit IRC
01:25 *** suro-patz has joined #openstack-containers
01:28 *** achanda has joined #openstack-containers
01:32 *** suro-patz has quit IRC
01:32 *** suro-patz has joined #openstack-containers
01:34 *** suro-patz has quit IRC
01:40 *** achanda has quit IRC
01:47 *** achanda has joined #openstack-containers
01:50 *** achanda has quit IRC
02:15 *** dims has quit IRC
02:16 *** dims has joined #openstack-containers
02:16 *** erkules has joined #openstack-containers
02:19 *** erkules_ has quit IRC
02:20 *** dims has quit IRC
02:31 *** Marga_ has quit IRC
02:38 *** achanda has joined #openstack-containers
02:44 *** Marga_ has joined #openstack-containers
02:50 *** Marga_ has quit IRC
02:50 *** Marga_ has joined #openstack-containers
03:05 *** Marga_ has quit IRC
03:27 *** achanda has quit IRC
03:52 *** sdake_ has joined #openstack-containers
03:53 *** sdake has quit IRC
03:53 *** sdake has joined #openstack-containers
03:57 *** sdake_ has quit IRC
03:57 *** raginbaj- has joined #openstack-containers
03:58 *** raginbajin has quit IRC
03:59 *** raginbaj- is now known as raginbajin
04:03 *** adrian_otto has quit IRC
04:07 *** vilobhmm has joined #openstack-containers
04:08 *** hongbin has joined #openstack-containers
04:10 <hongbin> sdake: The new image didn't seem to work.... I wrote the details on the bug https://bugs.launchpad.net/magnum/+bug/1434468 .
04:10 <openstack> Launchpad bug 1434468 in Magnum "Magnum default images used for kubeclt should be upgraded" [Critical,Triaged] - Assigned to Steven Dake (sdake)
04:12 *** hongbin has quit IRC
04:15 *** sdake_ has joined #openstack-containers
04:19 *** sdake has quit IRC
04:21 * sdake_ groans
04:24 <sdake_> hongbin looks like the minions are registering
04:24 <sdake_> I suspect a problem with the template
04:24 <sdake_> perhaps a binary not being started that should be
04:25 <sdake_> do you have ssh access to the machine?
04:34 *** achanda has joined #openstack-containers
04:36 *** hongbin has joined #openstack-containers
04:40 <hongbin> sdake_: yes I have
04:41 <sdake_> systemctl | grep kube | fpaste
04:41 <sdake_> on both nodes pls
04:41 <sdake_> My dev env is busted atm
04:41 <sdake_> which is why I can't test
04:41 <sdake_> and super busy releasing kolla
04:41 <sdake_> I plan to get into my magnum blueprint next week
04:43 <hongbin> sdake_: will do that
04:43 <hongbin> first I see an error on console
04:43 <hongbin> [   59.959326] cloud-init[841]: <resource>node_wait_handle</resource>2015-03-22 03:37:56,708 - cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts) ci-info: +++++++++Authorized keys from /home/minion/.ssh/authorized_keys for user minion++++++++++ [   60.092661] cloud-init ci-info: +---------+-------------------------------------------------+---------+---------------
04:43 <sdake_> systemctl | grep failed
04:44 <hongbin> $ sudo systemctl | grep failed  ● cloud-final.service  loaded failed failed  Execute cloud user/final scripts  ● flanneld.service  loaded failed failed  Flanneld overlay address etcd agent
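
The exchange above boils down to the standard systemd triage commands; a minimal sketch, assuming a systemd-based node like the Fedora Atomic minions discussed here (only flanneld.service is taken from the log, the rest is generic):

    # list every unit that failed to start
    sudo systemctl --failed
    # status plus the most recent journal lines for the suspect unit
    sudo systemctl status flanneld.service -l
    # the kubernetes-related units the earlier grep was looking for
    sudo systemctl list-units --all 'kube*'
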
04:45 *** funzo has quit IRC
04:45 <hongbin> sdake_: will go to sleep soon
04:45 <sdake_> if flanneld is busted magnum will not work i think
04:45 <hongbin> ....
04:45 *** funzo has joined #openstack-containers
04:45 <sdake_> enjoy
04:45 *** funzo has quit IRC
04:46 <sdake_> without an overlay network nothing works i think
04:46 <hongbin> ...
04:47 <hongbin> I will get back to it tomorrow
04:47 <sdake_> ok
04:47 <hongbin> see you
04:47 <sdake_> debug the failed services and see what you can find
04:47 *** hongbin has quit IRC
04:47 <sdake_> the template is probably out dated
04:54 *** sdake_ has quit IRC
04:55 *** sdake has joined #openstack-containers
05:08 *** dims has joined #openstack-containers
05:20 *** sdake_ has joined #openstack-containers
05:24 *** sdake has quit IRC
05:26 *** sdake_ has quit IRC
05:39 *** dims has quit IRC
06:55 *** vilobhmm has quit IRC
07:24 *** dims has joined #openstack-containers
07:32 *** Marga_ has joined #openstack-containers
07:56 *** Marga_ has quit IRC
07:58 *** dims has quit IRC
08:18 *** oro has joined #openstack-containers
09:24 *** dims has joined #openstack-containers
09:41 *** achanda has quit IRC
09:58 *** dims has quit IRC
10:15 *** oro has quit IRC
10:26 *** oro has joined #openstack-containers
10:55 *** dims has joined #openstack-containers
11:28 *** dims has quit IRC
11:51 *** Marga_ has joined #openstack-containers
12:26 *** Marga_ has quit IRC
12:37 *** dims has joined #openstack-containers
12:45 *** jay-lau-513 has joined #openstack-containers
12:59 *** dims has quit IRC
14:36 *** Marga_ has joined #openstack-containers
14:53 *** Marga_ has quit IRC
15:26 *** hongbin has joined #openstack-containers
15:26 <hongbin> good morning
15:30 *** sdake has joined #openstack-containers
15:37 <sdake> morning
15:40 <hongbin> sdake: Figuring the image out
15:40 <sdake> I think the image is correct
15:40 <hongbin> This is the flannel service log http://pastie.org/10045394
15:40 <sdake> just needs to be setup properly
15:41 <hongbin> possibly
15:41 <sdake> rpm -qa | grep etcd
15:41 <sdake> on master
15:41 <sdake> rather
15:41 <sdake> systemctl | grep etcd
15:42 <hongbin> $ rpm -qa | grep etcd etcd-0.4.6-6.fc21.x86_64
15:42 <sdake> use journalctl on flanneld
15:43 <sdake> it will give a full log
15:43 <sdake> systemd status only gives a partial log
15:43 <hongbin> k. doing that
15:43 <sdake> I believe the syntax is journalctl -xl -u flanndeld.service
15:43 <sdake> but not certain
15:44 <sdake> one thing that looks weird is your network is "novalocal"
15:44 <sdake> I have only seen novalocal with nova networking, not with neutron
15:45 <hongbin> I'm running on a VM on a cloud
15:45 <hongbin> The log looks empty...
15:45 <hongbin> $ sudo journalctl -xl -u flanndeld.service -- Logs begin at Fri 2015-03-20 14:54:29 UTC, end at Sun 2015-03-22 15:45:39 UTC. --
15:46 <sdake> spell it right :)
15:46 <sdake> flanneld.service
15:46 <sdake> I had a typo I think
15:46 <hongbin> yeah. got the output
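
For reference, a cleaned-up form of the journalctl invocation being worked out above (the -x and -l flags are as suggested in the log; --no-pager is just a convenience for capturing output):

    # full journal for the flanneld unit, oldest entries first
    sudo journalctl -x -l -u flanneld.service --no-pager
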
15:46 <sdake> fpaste /etc/flannel-docker-bridge.conf
15:46 <hongbin> pasting them
15:50 <sdake> just pipe through the fpaste command, its faster ;)
15:50 <hongbin> There is no such file /etc/flannel-docker-bridge.conf
15:50 <sdake> find it
15:50 <sdake> its on the filesystem somewhere
15:50 <sdake> sudo updatedb
15:50 <hongbin> k
15:50 <sdake> locate flannel-docker-bridge.conf
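
A sketch of the config-file hunt, assuming mlocate may or may not be present on the image (it turns out further down that updatedb is missing on Atomic, so find is the practical fallback):

    # if mlocate is installed
    sudo updatedb && locate flannel | grep conf
    # fallback that works on minimal images
    sudo find /etc -name 'flannel*' -ls
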
15:51 <sdake> I feel like i have been working 7 days a week - because I have!
15:52 <hongbin> here u go
15:52 <hongbin> http://pastie.org/10045429
15:52 <hongbin> pasting the output of the log
15:52 <sdake> there is this cat reviewing on kolla
15:53 <sdake> I swear he must put every commit through a spell checker ;)
15:53 <sdake> I don't know how he sees these things
15:54 *** sdake_ has joined #openstack-containers
15:56 *** sdake__ has joined #openstack-containers
15:56 *** sdake has quit IRC
15:58 <hongbin> don't have the fpaste command... scping them
15:59 <hongbin> http://paste.openstack.org/show/194874/
16:00 *** dims has joined #openstack-containers
16:00 *** sdake_ has quit IRC
16:00 <sdake__> bummer fpaste isn't in atomic
16:00 * sdake__ groans
16:01 <sdake__> your hostname looks greater than 64 characters
16:01 <sdake__> and you are using nova networking
16:01 <sdake__> you need to be using neutron imo :)
16:01 <hongbin> how to do that?
16:02 <sdake__> neutron is setup via devstack
16:02 <sdake__> the FQDN of a host must be less than 64 characters, or linux busts
16:02 <hongbin> I am running the devstack according to the guide...
16:03 <sdake__> maybe the guide is busted
16:03 <hongbin> I see. Let me fix that
16:03 <sdake__> I couldn't get the devstack guide to work last time I tried
16:03 <sdake__> but I only tried for 2-3 hours
16:03 <sdake__> paste your localrc?
16:03 <sdake__> I am pretty certain when novalocal is in the domain name, nova networking is being used
16:04 <hongbin> http://paste.openstack.org/show/194875/
16:05 <sdake__> well I could be wrong on the nova networking thing
16:05 <sdake__> if that is your localrc :)
16:05 <sdake__> on your host ps -ef | grep nova | grep net
16:06 <hongbin> http://paste.openstack.org/show/194876/
16:11 <hongbin> One thing is that my devstack may be a little outdated. It is from a month ago.
16:14 <sdake__> well lets not go messing that up atm :)
16:14 <sdake__> can you boot the original image and make sure it works?
16:14 <sdake__> (or have you recently)
16:14 <sdake__> and see if the hostnames are super long
16:15 <sdake__> if the hostnames are greater than 63 characters, networking doesn't work
16:15 <hongbin> sure. Let me try that
16:15 <sdake__> it's hardcoded in the linux kernel
16:15 <sdake__> it may be the new atomic has a different hostname mechanism
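
A quick way to check the hostname-length concern raised here, assuming ordinary Linux tooling; the kernel caps a node name at HOST_NAME_MAX (64) bytes, which is where the 63-character warning comes from:

    # kernel limit for the node name
    getconf HOST_NAME_MAX
    # length of the fully-qualified name Heat generated for this node
    hostname -f | awk '{ print length }'
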
16:20 *** sdake has joined #openstack-containers
16:23 *** sdake__ has quit IRC
16:31 *** Marga_ has joined #openstack-containers
16:31 <hongbin> sdake__: so far, the old image seems to work
16:31 <sdake> run the hostname command and show results
16:32 <hongbin> $ hostname te-ewkbyzjh6uzy-1-a74torhnlmdn-kube-node-hce7dkyvysyt.novalocal
16:38 <sdake> ok so we know its not the host name length
16:38 <sdake> journalctl the flanneld service
16:39 <hongbin> old image?
16:39 <sdake> right
16:39 <sdake> what we are after is why flanneld starts on old not on new
16:39 <sdake> it is probably a configuration file problem
16:42 <hongbin> here u go http://paste.openstack.org/show/194892/
16:44 <sdake> is your new cluster still up?
16:44 <hongbin> now
16:44 <hongbin> s/now/no/
16:44 <sdake> yes
16:44 <sdake> oh
16:44 <hongbin> .....
16:45 <hongbin> I have to bring it down to get the new cluster
16:45 <hongbin> But I can use another VM to bring both up
16:45 <sdake> put that paste in the bug
16:45 <sdake> that is the working flanneld output
16:45 <sdake> find the configuration file
16:45 <sdake> paste it as well
16:45 <sdake> put it in the same comment
16:45 <hongbin> k. will do that
16:46 <hongbin> Exactly which config file?
16:46 <sdake> locate flannel | grep conf after running sudo updatedb
16:47 <hongbin> $ sudo updatedb sudo: updatedb: command not found
16:47 <sdake> find /etc -name flannel\* -ls
16:49 *** hongbin has quit IRC
16:49 *** hongbin has joined #openstack-containers
16:49 <sdake> find /etc -name flannel\* -ls
16:49 <hongbin> this one? /etc/sysconfig/flanneld
16:50 <sdake> is that all there is?
16:50 <hongbin> this is the list http://paste.openstack.org/show/194901/
16:50 <sdake> paste that one in, and the flannel journalctl log above to the bug
16:51 <sdake> ok, i'm pretty sure I know the problem
16:51 <sdake> flanneld has been updated
16:51 <sdake> it requires configuring its configuration file
16:51 <sdake> oh wait
16:51 <sdake> /etc/systemd/system/docker.service.d/flannel.conf
16:51 <sdake> paste that
16:52 <hongbin> k
16:55 <sdake> configure-flannel.sh creates the coreos.com keys in etcd
16:55 <sdake> I assume that only runs on the master node
16:59 <sdake> http://pastie.org/10045394
16:59 <sdake> it says systemd timed out trying to start flanneld
17:00 <sdake> ok kill old cluster
17:00 <sdake> lets go back to new image
17:01 <sdake> and try to manually restart flanneld
17:01 <sdake> this will verify the master is creating the keys in etcd
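
A sketch of that verification, assuming the etcd 0.4.x client shown earlier and the conventional /coreos.com/network prefix that flanneld reads its configuration from (whether configure-flannel.sh uses exactly that prefix is an assumption here):

    # on the master: confirm the flannel network config key exists in etcd
    etcdctl ls /coreos.com/network
    etcdctl get /coreos.com/network/config
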
17:03 <hongbin> k. will do that.
17:03 <hongbin> Need to get a lunch first
17:03 <sdake> i have a haircut
17:03 <sdake> so I may be out for a bit
17:03 <hongbin> k
17:03 <hongbin> see you later
17:08 *** hongbin has quit IRC
17:22 *** dims has quit IRC
17:31 *** sdake_ has joined #openstack-containers
17:43 *** daneyon has quit IRC
17:57 *** Marga_ has quit IRC
18:21 *** vilobhmm has joined #openstack-containers
18:37 *** Marga_ has joined #openstack-containers
18:43 *** hongbin has joined #openstack-containers
19:06 *** achanda has joined #openstack-containers
19:09 *** Marga_ has quit IRC
19:11 *** Marga_ has joined #openstack-containers
19:12 *** Tango has joined #openstack-containers
19:17 *** Marga_ has quit IRC
19:26 *** Tango has quit IRC
19:39 *** daneyon has joined #openstack-containers
19:40 *** achanda has quit IRC
20:07 *** achanda has joined #openstack-containers
20:16 *** vilobhmm has quit IRC
20:21 *** dims has joined #openstack-containers
20:21 *** achanda has quit IRC
20:24 *** dims has quit IRC
20:29 *** achanda has joined #openstack-containers
20:31 *** sdake__ has joined #openstack-containers
20:35 *** sdake has quit IRC
20:51 <sdake__> hongbin did you make it back from lunch yet
20:52 <hongbin> sdake__: Yes, I am back
20:52 <hongbin> trying to bring up another VM
20:52 <sdake__> did you try manual restart of flannel (if you're still working on the problem)
20:52 <hongbin> Trying that
20:52 <sdake__> sudo systemctl reset-failed flanneld.service
20:53 <sdake__> sudo systemctl start flanneld.service
20:53 <hongbin> k
20:54 <hongbin> The old stack was blocked on delete_in_progress state. Trying to figure out why
20:56 *** sdake has joined #openstack-containers
21:00 *** sdake__ has quit IRC
21:09 <hongbin> sdake: K. Those commands work. The status of flanneld is running now.
21:10 <hongbin> somehow, the minions were still in state NotReady
21:10 <hongbin> maybe need a restart
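
A sketch of the manual recovery being described, run on the affected minion; the docker/kubelet/kube-proxy unit names are assumptions based on the discussion rather than taken from the log, and in the Kubernetes releases of this era the node resource was still called a minion:

    # clear the failed state and bring the overlay network back up
    sudo systemctl reset-failed flanneld.service
    sudo systemctl start flanneld.service
    # restart the services that depend on the overlay
    sudo systemctl restart docker.service kubelet.service kube-proxy.service
    # then, from the master, check whether the minion reports Ready
    kubectl get minions
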
21:15 *** dims has joined #openstack-containers
21:20 *** achanda has quit IRC
21:40 <sdake> i think they probably added a timeout to the systemd file for flannel
21:40 <sdake> which is a bit of a problem since we start the master second
21:40 <sdake> can you see if there is a timeout in the flannel systemd file
21:41 <hongbin> let me check
21:44 <hongbin> It seems there is no timeout specified http://paste.openstack.org/show/195001/
21:45 <sdake> when you start the cluster does the master come up first?
21:46 <hongbin> Didn't pay attention to that
21:46 <sdake> can you check
21:46 <hongbin> I remembered the minion first
21:47 <hongbin> k, let me bring the cluster down and bring it up again
21:47 <sdake> we need to change the order
21:47 <sdake> so that minion launching blocks until master is up
21:48 <sdake> a depends_on in the minions should get the job done
21:48 <hongbin> I guess that will result in a cyclic dependency in heat
21:48 <hongbin> since the ip addresses are passed from minion to master
21:49 <hongbin> so the minions have to go first
21:50 <sdake> try rebooting the master
21:50 <sdake> then rebooting the minion
21:50 <sdake> and see if kubectl works
21:51 <sdake> with a delay to finish up between reboots
21:51 <hongbin> I restarted all processes on minions
21:51 <hongbin> it works
21:51 <sdake> kubectl works?
21:51 <hongbin> works
21:52 <hongbin> I mean the restarted minion went to ready state
21:52 <sdake> kubernetes needs to be able to handle a down master
21:52 <sdake> and retry
21:52 <sdake> the minions that is
21:53 <hongbin> I am not sure if the minion needs to talk to the master.
21:54 <hongbin> On startup, for sure. Maybe not after that
21:54 <hongbin> Master periodically polls minions, according to the docs
21:55 <sdake> log into minion, run systemctl | grep kube
21:55 <hongbin> The cluster is down now....
21:55 <hongbin> Testing the order
21:55 <sdake> ok
21:56 <sdake> I want to see if there is a retry option in the kube services on the minion
21:56 <sdake> if not there should be ;)
21:56 <hongbin> I think the problem is not retry
21:56 <hongbin> The problem seems to be
21:56 <hongbin> The flanneld process did not succeed
21:57 <hongbin> the rest of the kube processes depend on flanneld
21:57 <hongbin> so none of them started
21:57 <hongbin> Since I saw the status of kubelet as not started due to a dependency
21:59 <sdake> we can fix that
21:59 <hongbin> k
22:00 <sdake> http://www.freedesktop.org/software/systemd/man/systemd.service.html
22:00 <sdake> TimeoutStartSec=0
22:01 <sdake> can you figure out how to hack that into the systemd file for flanneld
22:02 <hongbin> I can try
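
One way to hack that in without editing the packaged unit file is a systemd drop-in; a minimal sketch, assuming drop-in support on the image (the override filename is arbitrary):

    # check whether the unit already carries a start timeout
    systemctl cat flanneld.service | grep -i timeout
    # add an override that disables the start timeout, then reload and restart
    sudo mkdir -p /etc/systemd/system/flanneld.service.d
    printf '[Service]\nTimeoutStartSec=0\n' | sudo tee /etc/systemd/system/flanneld.service.d/timeout.conf
    sudo systemctl daemon-reload
    sudo systemctl restart flanneld.service
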
22:03 <sdake> let me try to make a new image while you try the heat template modification
22:03 <hongbin> k
22:03 <hongbin> confirmed. Minion first, then master
22:04 <hongbin> I have a pull request that can change the order https://github.com/larsks/heat-kubernetes/pull/14
22:04 <hongbin> If that is merged, master will go first.
22:05 <hongbin> That could be another option
22:05 <sdake> do the minions have a wait condition on the master?
22:05 <hongbin> ???
22:06 <sdake> does it work with your patch applied?
22:06 <hongbin> Oh, yes that is
22:06 <hongbin> I can try
22:07 <sdake> please do, if it does, i'll merge that change
22:07 <hongbin> k
22:09 <sdake> does the curl request in your change dynamically register the minion?
22:10 <sdake> https://github.com/larsks/heat-kubernetes/pull/14/files#diff-a7f24c17d7f1801ed844d6044b8194a4R36
22:11 *** achanda has joined #openstack-containers
22:12 <hongbin> Yes, that did the work.
22:12 <hongbin> I added a dependency on the wait condition to ensure the master goes first
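
A way to sanity-check the resulting creation order on a deployed bay, assuming the Kilo-era heat CLI and a hypothetical stack name (the event stream shows when the master and minion resources were actually created):

    heat resource-list my-kube-bay
    heat event-list my-kube-bay
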
22:13 <sdake> patch looks good to me
22:13 <sdake> i want to make sure it is verified with kube 0.11
22:14 <hongbin> sure
22:14 <hongbin> I am more confident if larsks has tested it
22:15 <hongbin> anyway, it works with the old image from my perspective
22:15 <sdake> thats good news, lets see if it works with the new one :)
22:16 <hongbin> yes, bringing the cluster up
22:17 <sdake> i really like your patch
22:17 <sdake> nice work :)
22:18 <hongbin> thx!
22:19 *** dims has quit IRC
22:19 *** sdake__ has joined #openstack-containers
22:20 <hongbin> K. All the minions are Ready, testing pod creation
22:21 *** prad has quit IRC
22:21 <sdake__> how many minions do you have?
22:21 <hongbin> two
22:21 <sdake__> wfm
22:23 *** sdake has quit IRC
22:23 *** dims has joined #openstack-containers
22:23 *** Marga_ has joined #openstack-containers
22:24 *** dims has quit IRC
22:27 *** vilobhmm has joined #openstack-containers
22:27 *** vilobhmm has quit IRC
22:28 <sdake__> hongbin is this accurate https://bugs.launchpad.net/magnum/+bug/1434468/comments/9
22:28 <openstack> Launchpad bug 1434468 in Magnum "Magnum default images used for kubeclt should be upgraded" [Critical,Triaged] - Assigned to Steven Dake (sdake)
22:29 <sdake__> earlier on irc you said the minion kube services were not running because of the flanneld failure to start
22:30 <hongbin> I think it is correct.
22:30 <hongbin> Since I cannot find kubelet on the minion
22:30 <sdake__> weird
22:30 <sdake__> it should say failed or something
22:30 <sdake__> not just not start and give no output ;)
22:31 <hongbin> yes, it should fail
22:32 *** vilobhmm has joined #openstack-containers
22:32 <sdake__> do the pods start?
22:33 <hongbin> starting: It is very slow on VM inside VM :)
22:34 <sdake__> oh virt on virt
22:34 <sdake__> yikes :)
22:34 <sdake__> I run magnum on bare metal, but my system is in a constant state of bustedness because of devstack as a result
22:35 <hongbin> sorry to hear that
22:35 <hongbin> :)
22:36 <sdake__> hook up jumper cables to your machines - give em more juice :)
22:36 <hongbin> Yes, somehow, I cannot run the devstack recently. Not sure why
22:36 <sdake__> my workstation has 128g ram
22:36 <sdake__> after I bought it I plugged 8 16gb ram chips in
22:36 <sdake__> and the computer said "memory busted"
22:36 <sdake__> I was like "OH NO"
22:36 <sdake__> 1700$ down the drain on busted memory
22:37 <sdake__> but one wasn't seated properly
22:37 <hongbin> you have a lot of memory :)
22:37 <sdake__> my 3 lab machines at my house have 32gb as well
22:37 <sdake__> I use those mostly for kolla
22:37 <sdake__> and my workstation mostly for magnum
22:37 <sdake__> and if someone ever invents ironic for magnum, I'll use my lab machines for magnum too ;)
22:43 <hongbin> sdake__: The pod creation seems to be pending forever
22:44 <sdake__> is the pod assigned an ip?
22:44 <hongbin> yes
22:44 <hongbin> $ kubectl get pods  POD  IP  CONTAINER(S)  IMAGE(S)  HOST  LABELS  STATUS  redis-master  master  kubernetes/redis:v1  10.0.0.5/  name=redis,redis-sentinel=true,role=master  Pending  sentinel  kubernetes/redis:v1
22:44 <sdake__> ssh into 10.0.0.5
22:45 <hongbin> in
22:45 <sdake__> sudo docker images
22:45 <hongbin> empty list
22:45 <sdake__> ping 8.8.8.8
22:46 <hongbin> ..........
22:46 <hongbin> That is the problem
22:46 <sdake__> ;)
22:46 <hongbin> unpingable
22:46 <sdake__> masquerade ftw
22:46 <sdake__> turn on a masquerade iptables rule
22:46 <sdake__> on the host
22:47 <hongbin> I did that
22:47 <hongbin> with this command sudo iptables --table nat -A POSTROUTING -o eth0 -j MASQUERADE
22:47 <hongbin> no luck
22:48 <hongbin> I remembered the old image can ping
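
For reference, a sketch of the NAT setup under discussion, run on the devstack host; eth0 as the outbound interface is an assumption, so check the default route first:

    # which interface carries the default route
    ip route get 8.8.8.8
    # enable forwarding and masquerade traffic leaving through that interface
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
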
22:48 *** sdake has joined #openstack-containers
22:48 <sdake> sudo route | fpaste
22:48 <sdake> inside the vm 10.0.0.5
22:49 <sdake> hongbin i'm not sure how iptables masquerade works with vm in vm
22:49 <sdake> I have never tried it
22:49 <sdake> I struggle to get neutron to work on bare metal ;)
22:49 <hongbin> http://paste.openstack.org/show/195027/
22:50 <hongbin> I confirmed that iptables masquerade works
22:50 <hongbin> in nested virtualization
22:50 <hongbin> since I made it work before
22:50 <sdake> route table looks good
22:51 <sdake> inside the first layer vm, run ip link show | fpaste
22:51 <sdake> the one where your devstack is running
22:51 *** sdake__ has quit IRC
22:51 <hongbin> http://paste.openstack.org/show/195028/
22:52 <sdake> sudo ip addr show | fpaste
22:53 <hongbin> http://paste.openstack.org/show/195029/
22:54 <sdake> I would think you would have a bridge for 10.0.0.1 in your devstack vm
22:54 <sdake> and you do not
22:55 <sdake> the 10.0.0.5 vm is using a default gateway of 10.0.0.1
22:55 <sdake> so the traffic is going to where?
22:56 <hongbin> I think it goes to somewhere inside my cloud provider
22:56 <hongbin> which in turn routes to the internet
22:56 <hongbin> That is the cloud our lab built
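
A short sketch of tracing where that traffic actually goes from inside the nested 10.0.0.5 VM (tracepath ships with iputils and is more likely than traceroute to exist on a minimal image):

    ip route show
    ip route get 8.8.8.8      # which gateway and interface get chosen
    ping -c 3 10.0.0.1        # is the default gateway reachable at all?
    tracepath -n 8.8.8.8      # where does the path stop?
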
22:57 <sdake> delete your bay
22:57 <sdake> boot the fedora-atomic original image
22:57 <sdake> and see if the routing works
22:57 <sdake> e.g., can you ping 8.8.8.8 from the kube cluster
22:58 <hongbin> k
22:58 <sdake> lets see if the routing is busted in half the cases or all cases
22:58 <sdake> if its busted in all cases, I think we should proceed with merging the changes and generating a gerrit review
22:59 <sdake> and find another cat to test while you sort out the network
22:59 <hongbin> k
23:00 <sdake> the traffic should be flowing out of your floating ips
23:00 <sdake> not the 10.0.0.1 network
23:00 <sdake> but somehow neutron takes care of that
23:00 <sdake> the floating ips should be on your external network
23:01 <hongbin> My VM has internet access without a floating IP
23:01 <sdake> what is the ip of your vm?
23:01 <hongbin> I am using a private IP through a VPN
23:01 <sdake> my brain just imploded
23:02 <sdake> i'll stop asking questions I dont want to know the answer to now :)
23:02 <hongbin> Yes, the network setting is not straightforward
23:02 <hongbin> :)
23:03 <sdake> it is possible someone changed a setting in the cloud lab that broke your ip connectivity in vm on vm
23:03 <sdake> so lets see if thats the case from a known good working version of a fedora image
23:04 <hongbin> k
23:05 <sdake> the idea is magnum connects floating ips from your external network to the vm
23:05 <sdake> then external stuff goes out the float
23:05 <sdake> but I dont know what happens in the nested virt case
23:06 <sdake> i am pretty sure your nested vm should have a 10.0.0.1 interface though
23:06 <sdake> you may need to add one
23:07 <hongbin> I see
23:07 <hongbin> Yes, both layer1 and layer2 use the 10.0.0.0 space
23:07 <hongbin> which may cause a problem
23:08 <sdake> well lets try to debug your neutron second, and debug the image first :)
23:08 <hongbin> I am bringing up the VM.
23:09 <sdake> is your baremetal machine lacking memory?
23:09 <sdake> you might have more fun if you run devstack on it :)
23:09 <hongbin> it has the virtualization bit disabled...
23:10 <sdake> you can change that in bios no?
23:10 <hongbin> I guess no, but that is out of my control
23:11 <sdake> so you're running nested qemu with nested qemu
23:11 <sdake> it must take forever to get things done :)
23:12 <sdake> sounds like 3 layers of virt to me
23:12 * sdake shudders
23:13 <sdake> my grass is growing by the minute
23:13 <hongbin> no. two layers only :)
23:15 <hongbin> This time I brought up the VM, but cannot SSH to it. Very strange
23:16 <hongbin> Let me try again
23:16 <sdake> when in doubt reboot
23:16 <sdake> (your host machine)
23:17 <hongbin> You mean I need to reboot the hosted VM?
23:17 <sdake> I would reboot the entire thing
23:17 <sdake> at the lowest layer
23:17 <hongbin> k
23:19 <sdake> so you have a hosted vm, and in that you run devstack, and that launches magnum, which launches new vms in the hosted vm?
23:21 <hongbin> yes, correct
23:22 <sdake> you connect to the hosted vm how, via laptop?
23:22 *** yuanying has joined #openstack-containers
23:22 <hongbin> yes, via laptop
23:22 <sdake> yuanying you around?
23:22 <sdake> hongbin that is where I'd run devstack ;-)
23:23 <yuanying> hi
23:23 <yuanying> good morning
23:23 <sdake> hey - mind running a quick test of magnum
23:23 <hongbin> :)
23:23 <sdake> morning fine sir
23:23 <hongbin> hey yuanying
23:24 <sdake> use this image https://fedorapeople.org/groups/heat/kolla/fedora-21-atomic-2.qcow2
23:24 <sdake> hongbin can paste his magnum diff
23:25 <sdake> hongbin has a busted network I suspect
23:25 <sdake> I'd like a second opinion ;)
23:25 <hongbin> This is the pull request https://github.com/larsks/heat-kubernetes/pull/14
23:26 <yuanying> Is Magnum broken with the new fedora atomic image?
23:26 <sdake> yes
23:26 <hongbin> sdake: confirmed. the old image cannot ping 8.8.8.8 either
23:26 <sdake> hongbin that is good news :)
23:27 <hongbin> yup
23:27 <sdake> if yuanying could confirm it works without the crazy nested virt network setup you got rolling, that would be grand :)
23:28 <sdake> hongbin/yuanying I just merged that pull request
23:28 <sdake> hongbin can you make a master patch for magnum and git review
23:29 <hongbin> yes, will do that
23:29 <sdake> after you're done, I'll add a second patch on top that depends on yours that adds the appropriate documentation
23:30 <hongbin> k
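
For anyone reproducing the test, a sketch of loading that image with the glance v1 client of the time (the image name and public flag are arbitrary choices):

    curl -L -O https://fedorapeople.org/groups/heat/kolla/fedora-21-atomic-2.qcow2
    glance image-create --name fedora-21-atomic-2 \
        --disk-format qcow2 --container-format bare \
        --is-public True --file fedora-21-atomic-2.qcow2
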
23:32 <sdake> boy I need some anestaphine
23:32 <hongbin> sdake: I need to leave for a while to get dinner. Will get back to the git review
23:33 <sdake> hongbin sounds good enjoy food
23:33 <sdake> can you paste your diff and I'll do the git review?
23:46 *** oro has quit IRC
23:49 <openstackgerrit> Steven Dake proposed stackforge/magnum: Merge heat-kubernetes pull request 14  https://review.openstack.org/166661
23:49 <openstackgerrit> Steven Dake proposed stackforge/magnum: Modify documentation to point to kubernetes-0.11 atomic image  https://review.openstack.org/166662
23:53 <sdake> yuanying https://review.openstack.org/#/c/166662/
23:54 *** julim has quit IRC
23:54 *** sdake__ has joined #openstack-containers
23:58 *** sdake has quit IRC
