Suddenly I have a lot of free time, so I finished off some books that had been piling up for a year or two. A few days ago I finally started Why Nations Fail, and realized the books I have been reading lately amount to a trilogy of failure: first When Genius Failed, then When New Technologies Cause Great Firms to Fail, and finally the most general of all, Why Nations Fail. The middle one was fine, but the first and third were agonizing to read; the gloom was about the same as back in January, when I spent every day reading about massacres.
Back when I was too young to know better, I wrote a piece in praise of failure. Now that I have read more, I increasingly feel that the reasons for failure are endless, while the patterns of success each differ in their own way.
If it all comes down to institutions, why speak of failure at all? Someone will always set them right eventually.
Last week I was groggy the whole time, collapsing into bed the moment I got back to my friend's place. This week I finally hit my stride: up at seven every morning, rolling out of bed and rushing excitedly for the shuttle, because adorable hummingbirds were waiting for me at Facebook HQ! Then the bus home after eight at night, and straight to sleep. But only a few days into this good run, I was ordered home on vacation.
Vacation days go like this: get up, browse the web, pho for lunch, then walk to Golden Gate Park, open the Kindle, and sit by the lake reading and teasing the birds. Toward dusk I stroll slowly back, picking up an iced coffee on the way. Still restless, the day before yesterday I went alone to Pier 26, hoping to look at pretty girls, and instead got a splitting headache from the sea wind. Luckily there is good uni near the Caltrain station.
But, dear God, I flew halfway around the world just to eat pho and read by a lake while the ducks sunned themselves!
You miss home most when you leave it. After a year away, I hardly felt it. Go to Harris Teeter for a watermelon, a steak, and that impossibly soft cheesecake. The fridge was always stocked with Monster or Red Bull, as if to prove I was still a hard-driving IT guy. Safeway has good cherries all year round; on the lawn downstairs there were always teenagers playing frisbee with their dogs while kids kicked soccer balls and flew kites. When hungry, grab a bento from the Japanese restaurant; for something fancier, Spanish food was never far away. Heating in winter, air conditioning in summer. And Stephen Colbert riffing on TV every night.
Only back home do I remember that southern China has no central heating, which is why Liu, my classmate from the north, mistook the electric heater for a cooling fan. I have eaten my way through the local hot pots and soup pots and found them nothing special, especially since, under the nightly influence of alcohol, there was no savoring anything anyway. Mom keeps talking about buying apartments; Dad no longer mentions stocks at all; my uncle goes on and on about promotions and fortunes; and my youngest nephew keeps trying to run off with my Robosapien. Grandma's health is poor; she cannot stand, cannot walk, and still wants to live on.
Starting next year I will celebrate my second 25th birthday, then my third, my fourth, on to my 100th and 200th. I will not grow old. But my parents will. Dad starts talking nonsense before finishing half a jin of 50-percent baijiu. The family once polished off three bottles of Moutai at one meal; now a single bottle of Wuliangye is a struggle. This chilly city has no charm at all. San Francisco has its bay and sea breeze, Mountain View its interesting snacks, Stanford its tall palm trees. Even Charlottesville has no shortage of graceful girls.
But this is my home. I can curl up on the sofa all day, happily writing my blog. No worrying about what to eat; Mom always conjures up something delicious. Breakfast is a bit thin, but how could sweet tangyuan not count as good food? They are sweet, with sweet, sweet filling! I could splash ink across the desk and write "sugar" in two big characters. In the evenings Dad drags me out for hot pots and soup pots, with drinking, of course, unavoidable. By the time I am tipsy, New Journey to the West has started again. Then, in a blur, into bed at twelve on the dot.
I have started reading in my free time, and the evening walks clear my head. I no longer post every other day like before, but the drafts are piling up. No worries about seeing a doctor here: the doctors are all friendly, cheerfully pulling wisdom teeth, one today and another tomorrow. The grubby teeth are finally gone.
And then, just like that, I left. For someone who never has the courage to break a habit, it turns out you miss home most the moment you leave it.
Four years ago I wrote a just-for-fun post predicting 2012. Long-range predictions like that are always riddled with holes, and looking back from 2012, this one is no exception. But that does not spoil the fun of predicting. The joy lies in finding, among a heap of mistakes, the few things you actually got right. The difficulty with predicting the next four years, though, is that unlike the 2012 predictions, they can no longer rest on the assumption of a stable economy and settled politics. A forecast for the next four years must account for the effect on technology of a year or two of economic stagnation and possibly a year or two of political turmoil, and, conversely, for the role technology will play in rebuilding the economic base and reshuffling the political order. With those factors weighed, the following judgments underpin the predictions below.
In the next four years there will be no breakthroughs in quantum computing, human genetic enhancement, spaceflight beyond the solar system, or artificial intelligence. The economy will stagnate or backslide for a year or two, and international politics will enter a turbulent stretch over the next year or two. The shadow of war may drift in from the Pacific, but no major war will break out, so war's effect on technology can be ignored. So what, exactly, will happen over the next four years?
Home internet speeds will improve slowly, with average connections approaching 50Mbps. In percentage terms that is slow compared with the past four years (roughly 5Mbps to 25Mbps), but it is progress nonetheless. More importantly, the main battlefield will have shifted from home broadband to wireless access anywhere, anytime. Technologies like Super Wi-Fi will become the key to faster internet access. Browsing, downloading, and video calls in a fast-moving car will no longer be fantasy; more to the point, a car with an always-on high-speed connection will be a necessity. The bandwidth and latency problems of indoor wireless will finally be solved, and everyday devices (TVs, fridges, projectors, game consoles, speakers) will no longer need signal cables. Some new high-end homes will even do away with power outlets. Tesla's dream will at last come true, albeit through entirely different technology.
3D TV will keep struggling. After much agonizing, most TV makers will abandon the idea of bolting 3D onto LCD sets and instead pour their energy into loading TVs with customized Android systems; otherwise Apple TV will eat their market bite by bite. If you are buying a projector for the home, though, it would be silly not to get a 3D one. It still will not be glasses-free 3D, but happily, any movie available for rent online in 3D will cost only ten dollars a month extra for the same unlimited viewing as 2D, and per-disc rental will cost about the same as 2D. The mere existence of the TV set may still annoy some people, but thanks to faster connections, turning it on will no longer mean weekly scheduled dramas: it will be either 24-hour live streams or on-demand everything. The days of camping in front of the set for a particular episode will be gone for good. At least we will still get to build our own playlists, adding shows that have not even been released yet. Why else would they start promoting them six months early, if not to get onto our custom playlists?
Apart from us poor programmers, most people will have stopped using laptops. Pads of every kind will have become many people's first computer, and why not? The mainstream office suites all have Pad versions, you can attach a keyboard, it is no different from a laptop, it is more portable, and it connects wirelessly to TVs and projectors. All that ancient line-of-business software is overdue for retirement too: with mobile platforms everywhere, mobile versions of business software will not only make money, just as Windows software once did, but will be friendlier than what is in use today (higher margins, and built for end users). Nokia will accept its fate and, after several rounds of layoffs, settle into being a decent device maker. WP, Android, and iOS will still be at each other's throats; thank goodness we have HTML5. Browsers on every platform are evolving even faster than expected, so although plenty of native apps will still do their wild and wonderful things, a great many apps will be nothing but HTML5 in a wrapper.
All our files will be in the cloud. Outside of a few designers and film buffs, hardly anyone will run a media storage box at home. And once everything is in the cloud, we will realize that all our files are really just pictures, videos, music, and text. In which case, why pile them up in one place at all? What we need is search across domains, not storage in one spot.
Sadly, all-electric cars will still not be mainstream, and autonomous driving will only just be getting started. Brave early adopters will have bought relatively cheap self-driving systems (around $40k) whose performance, even in complex traffic, is genuinely impressive; unfortunately that is still too expensive for most people. Plugging your Pad into the car, on the other hand, will be old news. Thanks to Super Wi-Fi, driver-assist apps will multiply: rerouting around live traffic, auto-updating maps with road-work alerts, accidents ahead, even friends' car locations and estimated arrival times shared over Facebook Messenger. A pity that most traditional carmakers, citing safety, will not allow deep Pad integration; otherwise automatic parking would long since have been a fifty-dollar download.
Remember Concorde? At long, long last a new generation of Concorde will enter commercial service: the US to the Middle East in eight hours, to Beijing in five. Fast as that is, most airfares will nonetheless have risen. The shadow of the economic crisis will not lift; the gap between nations, far from being narrowed by the internet, will have widened over the four years. The story that technology boosts productivity and thereby creates more jobs will be exposed as a bubble everyone wanted to believe; unemployment will be far past the warning line (15%), forcing governments into harsher taxation, with batches of retraining bills being debated in congresses, parliaments, and people's assemblies. But thanks to that same productivity, everyone will still have food to eat; the grievance will be inequality, not hunger, and movements to remake society by extreme means will not go mainstream.
The one steadfast friend will be ever-rising computing power. With all the new architectures being proposed, programs will be harder to write, but the performance gains will still be dizzying. Ordinary people, sealed off behind their Pads, will be out of the performance race, yet those gains will still reach us as ever more clever software. Remembering the faces of all your Facebook friends will no longer be an infringement but an everyday need. Anyone will be able to retouch a photo into something lovely, however blurry and badly exposed the original; what matters is shooting and sharing anywhere, not posing with a DSLR. Home full-body motion capture will blur the line between real and virtual: switch on the projector and the Kinect, and in a blink you are in another time and place. Heavens, how is this real-time rendering different from the real thing? Alas, it will not be the holographic videoconferencing we hoped for, just games. Videoconferencing will mean opening a ten-dollar app on your Pad for a multi-party call anywhere, anytime, cast wirelessly to the big screen. Sharp enough, surely?
Because the inertia of traditional appliance makers will still not be overcome, our fridges will still not reorder food by themselves. At least we can shop online for groceries, with fresh organic vegetables, meat, and eggs at the door the next day. Life will be more convenient; a pity it will only be so for a middle class that keeps shrinking.
Having covered gadgets and getting around, let's talk food and housing. Despite more and more GM foods that actually help fight hunger, media scaremongering means many people will still pay a premium for non-GM food; changing that habit is evidently not a four-to-five-year affair. Housing prices, even globally, will still not have collapsed: nobody has money, but watching your house appreciate always feels good.
So perhaps that is it: under the threat of economic crisis and war, a 2016 that still enjoys its small pleasures. Do you like it?
The Not-so-slow JavaScript face detector was written two years ago. It began as a one-day hack to see whether a state-of-the-art face detection technique could run at tolerable speed in JavaScript. That one-day hack lived on for years, with many extensions and applications spreading across the web: a jQuery plug-in, a video face detector, and a mustache demo. One interesting finding over the years is how dramatically JavaScript performance improved on both Google Chrome and Mozilla Firefox. When I wrote the detector, an 800x600 image usually took more than 6 seconds on Firefox 3; with Firefox 10, it takes about 1 second. Over the same period, Google Chrome improved from about 2 seconds to 1 second. This script alone witnessed the arms race between browsers, and that is a good thing. But in all those years, although the source code has been out there, how it works was never explained. I left few comments in the source, and the algorithm is not as well known as the Haar classifier used in OpenCV.
The basic instrument in my implementation is called the control-point feature (renamed the brightness binary feature, to reflect that the ccv implementation works only on brightness values). For a given WxH image region, a feature consists of two sets of control points, a[1], a[2], …, a[n] and b[1], b[2], …, b[m]. To classify the region, the feature examines the pixel values at the control points of groups a and b in the relevant images (at original size, half size, and quarter size). The feature answers "yes" only if every pixel value in group a is greater than (or less than) every pixel value in group b. The details can be found in the original paper, YEF: Real-time Object Detection, and a follow-up, High-Performance Rotation Invariant Multiview Face Detection. Long story short, the training program bbfcreate uses AdaBoost to build several strong linear classifiers from control-point features.
The control-point feature is simple enough that once the image pyramid is generated (a series of images downsized from the original WxH to W/2xH/2, W/4xH/4, …), no further image processing is required. If the cost of generating the pyramid is negligible, each control-point feature touches fewer memory locations (n + m <= 5) than a Haar-like feature (the kind implemented in OpenCV, which requires 6~9 memory accesses). This turns out to be a real improvement: the C implementation in ccv reaches accuracy comparable to OpenCV's default face detector (82.97% with 12 false alarms vs. 86.69% with 15 false alarms) while running 3 times faster (as a side note, this is still far behind proprietary implementations, which reach ~90% with ~3 false alarms on the same data set; read more details). It is even better news for the JavaScript implementation, because the downsizing can be offloaded to native code through the HTML5 canvas drawing API. That is the secret sauce of my not-so-slow face detector (implemented at line 200).
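To make the pyramid step concrete, here is a minimal sketch (my own code, not the detector's source) of repeated 2x downsampling on a plain array of brightness values. In the browser, this is the work that canvas drawImage does natively; the array version just shows what is being computed.

```javascript
// Downsample one level by averaging each 2x2 block of brightness values.
// `gray` is a row-major array of length width * height.
function sampleDown(gray, width, height) {
  const w = width >> 1, h = height >> 1;
  const out = new Array(w * h);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      const i = 2 * y * width + 2 * x;
      out[y * w + x] = (gray[i] + gray[i + 1] +
                        gray[i + width] + gray[i + width + 1]) / 4;
    }
  }
  return { data: out, width: w, height: h };
}

// Build the full pyramid: original, half size, quarter size, ...
function buildPyramid(gray, width, height, levels) {
  const pyramid = [{ data: gray, width: width, height: height }];
  for (let i = 1; i < levels; i++) {
    const prev = pyramid[i - 1];
    pyramid.push(sampleDown(prev.data, prev.width, prev.height));
  }
  return pyramid;
}
```

In the actual detector, one would instead draw the image into progressively smaller canvases with drawImage and read the brightness back with getImageData, which keeps the resampling loop in native code.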
Once the image pyramid is generated, detection simply follows the paper: the algorithm sweeps the whole image at each resolution, checking with the control-point features whether a face is present (line 290). I have no further speed tricks beyond this point. At the end of the process, it merges the detected areas and returns them with confidence scores.
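The two steps above, testing one feature and sweeping the pyramid, can be sketched as follows. All names here are mine, not from the actual source; each control point is assumed to be {level, x, y}, with x and y given in that level's coordinates relative to the window origin, and `classify` stands in for the cascade of boosted strong classifiers.

```javascript
// A feature answers "yes" only if every group-a pixel is brighter than
// every group-b pixel; (wx, wy) is the window origin at base resolution.
function testFeature(feature, pyramid, wx, wy) {
  let minA = Infinity;
  for (const p of feature.a) {
    const img = pyramid[p.level];
    const v = img.data[((wy >> p.level) + p.y) * img.width + (wx >> p.level) + p.x];
    if (v < minA) minA = v;
  }
  for (const p of feature.b) {
    const img = pyramid[p.level];
    const v = img.data[((wy >> p.level) + p.y) * img.width + (wx >> p.level) + p.x];
    if (v >= minA) return false; // some b pixel is not darker than all of a
  }
  return true;
}

// Sweep a fixed-size window over every resolution. Treating each pyramid
// level in turn as the base image finds progressively larger faces.
function sweep(pyramid, winSize, step, classify) {
  const found = [];
  for (let level = 0; level + 2 < pyramid.length; level++) {
    const base = pyramid.slice(level); // base, half and quarter images
    const img = base[0];
    for (let y = 0; y + winSize <= img.height; y += step) {
      for (let x = 0; x + winSize <= img.width; x += step) {
        if (classify(base, x, y)) {
          // map the hit back to original image coordinates
          found.push({ x: x << level, y: y << level, size: winSize << level });
        }
      }
    }
  }
  return found;
}
```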
OK, let’s reconfirm how fast it is:
This 2808x1805 image takes 6 seconds on Firefox with Web Workers off, and 10 seconds with Web Workers on. It takes 4 seconds on Google Chrome (Web Workers don't run as smoothly there).
Please let me know in the comments what else in this implementation you'd like explained.
If Han Han were a woman, I would abandon my family in a heartbeat and go chase after her.
In the nineties I loved reading collections of model school essays. Growing up in a small town, reading other people's essays was like living other people's lives, which were livelier than mine. Bookstores still kept their stock behind the counter then, but my aunt worked at one, so I could sneak in at night to hunt for books. I remember finding a flip-book in a corner, which was delightful: riffle the pages and the little pig in them ran around. I read the second-graders' essay collections first, then the fourth-graders', and soon saw through the formula: eight-legged essays, nothing more. If I was going to read formula, I might as well read the real classics! So I opened the Records of the Grand Historian, which of course I couldn't understand; I just memorized the Annals of the Yellow Emperor by brute force so I could show off to my classmates. Around the time I finished primary school, the same counter started selling electronic dictionaries. I hardly read in those days; it was all games on the electronic dictionaries, competing for high scores with the pretty older girl behind the counter.
It was around then that I learned middle schoolers could publish novels, like 《花季雨季》. Middle-school life looked so novel to a primary schooler like me, though in the end it was just that, because later there was the TV series 《十七岁不哭》, where they could actually invite girl classmates to a concert after school! Around the same time I came across 《新概念作文选》. In retrospect, its main content was mostly middle schoolers falling in love in various impossible ways; the petit-bourgeois, fantasy, and time-travel styles probably began taking shape right then. Only 《书店》 and 《看病》 struck me as genuinely funny, and they turned out to be by the same person, someone called Han Han. I always remembered that 《杯中窥人》 only took second prize, probably because I myself thought the piece was merely so-so; senior Han really wasn't cut out for exam essays.
I didn't read 《三重门》 until much later. By middle school I had developed a habit of reading nutrition-free bestsellers, the ones about mice, shoelaces, cheese, selling fish, salesmen, playboys, penthouses, and so on, and read fewer novels. The copy of 《三重门》 I read was probably pirated; witty enough, and the book, in essence, daydreams about a girl in a long white dress. I read 《像少年啦飞驰》 in high school, along with novels like 《第一次亲密接触》 and 《榭寄生》. Oh, and the ones about time travel, interstellar wars, tomb raiding, fighting zombies, and the guy who bugged his rental apartment with cameras and burned off his own fingerprints at the end. The only nourishing reading worth mentioning from high school was probably Asimov: I, Robot, the Foundation trilogy, and, well, FHM. 《像少年啦飞驰》 is about a ghostwriter, I think.
By 《长安乱》 I was a die-hard fan. I grew up inside the system but made a few daring escapes of my own: first skipping the high school entrance exam to run off to Shanghai, then skipping the college entrance exam to run off to the Wudaokou Technical School. I always felt Han Han would be a kindred spirit, although I never found anyone to introduce me to my idol, and although I had, in fact, gone through the best middle school and the best university all the way. To read 《长安乱》 I even bought several issues of 《萌芽》, waiting until my eyes nearly fell out. And then senior Han trailed off again: a strong start, a rushed ending. On that score, 《兄弟》 is the better read. Presumably racing cars left him no time to think hard about what to write, just as I now have no time to think about how to write this blog. I still held out hope that he would settle down in fiction and make his mark, but after 《光荣日》 I understood completely that the man now publishes books to pay the bills. I grumbled about it on my blog, and it even earned an approving note from the girl I had a crush on in high school. But he's Han Han, and I love him; whatever junk he publishes, I'll buy.
How I envy Guo Jingming. At least he has 《上海绝恋》.
Every so often I discover my drafts folder full of unfinished pieces that, on reflection, I will never finish, so I might as well publish them as they are, for the fun of it.
A Base Kind of Justice: On the Legal Legitimacy of Personal Revenge (2012-01-22)
In China we are taught that "an endless cycle of revenge profits no one," that "enmities are better dissolved than sealed," that one should "dissolve old grudges with a smile." Yet wuxia novels just as often declare that "the murder of one's father cannot be left unavenged under the same sky." Revenge, specifically the kind discussed here, killing the person responsible for the death of someone close to you, has, despite the endless maxims ("an eye for an eye, a tooth for a tooth," and so on), not in fact existed since time immemorial.
Quite the opposite: in ancient times revenge was rarely practiced, and the more common form was so-called blood money. Take the islanders of New Guinea. When two hostile groups clashed and someone on one side was killed, the bereaved side did not demand the killer's death but sought economic compensation. In the 1960s, for instance, with the Australian administration's involvement, groups reached reconciliation through compensation, typically a few pigs or some sweet potatoes. Likewise, the local Gacaca courts convened after the Rwandan genocide usually sentenced participants to compensation or community service. None of this means ancient societies were more forgiving than modern ones; on the contrary, leaving aside nomadic societies, most ancient agricultural societies were quite brutal. In New Guinea, for example, a woman would chop off a finger in mourning when her husband died, so the number of missing fingers showed how many husbands she had buried. Nor were these societies without capital punishment; it is just that its purpose was usually not revenge but the removal of a danger, which is why those executed were mainly the insane and the sorcerers.
In modern times, most countries have abolished the death penalty, leaving victims who seek revenge without means. A common argument of abolitionists is that since legal procedure is always fallible, the death penalty makes wrongful verdicts irreversible. In other words, the coercive institution is denied the power of capital punishment because the coercive institution is not perfectly just. And yet every time a serial killer is caught with overwhelming evidence, a fresh wave of public opinion calls for the death penalty's return.
Justice, especially as defined in modern times, is rarely a single thing. We usually think justice is effectively served on an individual criminal when the criminal serves a sentence, nowadays usually imprisonment. But that is only a part. Justice is carried out step by step, starting with the trial: the acknowledgment of the crime, a measure of financial penalty, the mourning of the victims, the remembrance of atrocious crimes, the prevention of similar crimes, and retribution against the criminal. Rehabilitation, returning the criminal to society, is certainly an organic function of prisons, but punishment and retribution are part of it too, and that part rarely gets enough weight. There is a reason for this: the justice of a coercive institution carrying out retribution is always suspect, because such an institution commands power far beyond any natural person's, and retribution exercised through it invites the worry of abuse.
Personal revenge, by contrast, has been treated with tolerance. In the early 1920s, members of Operation Nemesis killed three former leaders of the Ottoman Empire, in retribution for the earlier Armenian genocide. And yet the court's verdict acquitted the member who stood trial, on the grounds that the leaders bore personal responsibility for the genocide, and that the revenge, base as it was, was not without justice. American trials, too, reflect a recognition of the justice of personal revenge.
Where does the justice of personal revenge come from? Why call it justice, and yet base? How can justice be carried out without losing principle? The justice of personal revenge lies in this: the victim holds every right to demand that the criminal make amends.
What is FIFO, The Design Principle (2012-01-02)
FIFO is: http://fifo.me/
I’ve just pushed a new version: you can now log in with your Facebook credentials. Unlike the previous version, it saves your list server-side. FIFO is intended to be your dead-simple TODO list that works. Thus, first and foremost, it must be simple. In FIFO’s design, the operations you can perform on the TODO list are limited. There are only four: 1) add a task, 2) modify an ongoing task, 3) mark a task as completed, and 4) move the current task to the end of the queue.
The magic dew of FIFO is that it ticks. A task is ongoing not only because it is highlighted with the big yellow “R” button, but also because time elapses. Every task in FIFO is assigned an estimated completion time. Once the task is ongoing (highlighted), its timer starts. If you spend more than one INTERVAL (default: 1 hour) on it, you are automatically moved to the next task. The idea derives from the deadline scheduler in operating systems, whose original design philosophy is to make sure that every task can make at least some progress. FIFO borrows the same idea to make sure you make some progress on every task in the list.
This certainly complicated FIFO’s UI, but it also brought some simplifications. For one, you cannot uncheck a completed task. If you’ve completed a task and then want to finish up some last-minute thing, the alternative is to click on the completed task, which copies the item into the input box, from which you can re-enter it.
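The four operations and the tick can be sketched in a few lines. This is my own naming and a deliberately bare model, not the fifo.me implementation; time is passed in explicitly so the rotation logic stays testable.

```javascript
const INTERVAL = 60 * 60 * 1000; // default: 1 hour

function createList() {
  return { tasks: [], startedAt: null };
}

function addTask(list, text) {                  // 1) add a task
  list.tasks.push({ text: text, done: false });
}

function modifyTask(list, text) {               // 2) modify the ongoing task
  if (list.tasks.length) list.tasks[0].text = text;
}

function completeTask(list, now) {              // 3) mark the ongoing task completed
  const t = list.tasks.shift();
  if (t) t.done = true;
  list.startedAt = now;                         // next task's timer starts
  return t;
}

function rotateTask(list, now) {                // 4) move the current task to the end
  if (list.tasks.length > 1) list.tasks.push(list.tasks.shift());
  list.startedAt = now;
}

// The tick: once the ongoing task has used up its INTERVAL, it is rotated
// to the back, so the next task gets its chance to make some progress.
function tick(list, now) {
  if (list.startedAt === null) { list.startedAt = now; return; }
  if (now - list.startedAt >= INTERVAL) rotateTask(list, now);
}
```

The deadline-scheduler analogy is all in `tick`: no task can monopolize the head of the queue for longer than one INTERVAL.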
The Symbol with Power - Storytelling of a Kadaitcha Man (2011-12-08)
Introduction
When Timmy Payungka Tjapangati (ca. 1940-2000) was a child, there was a severe drought in Western Australia. His family traveled a long distance from his birthplace west of Lake Mackay. South of Warburton he met his future wife and father-in-law. In 1958, he and his extended family, including his father-in-law Uta Uta Tjangala, moved to Haasts Bluff, and later to the Papunya settlement. Uta Uta Tjangala was a man deeply knowledgeable in rituals and stories; expressive dancer that he was, he was, among many things, the foremost ritual authority.
Timmy Payungka Tjapangati inherited much of his knowledge of lands and rituals from his early travels and from his father-in-law. Geoff Bardon suspected that Timmy was in fact the Kadaitcha man of his group. In indigenous groups, the Kadaitcha man held a secret position, conducting persecutions and curses. Kadaitcha men are guardians of traditions and rituals, and are therefore always knowledgeable about their group's rituals and land stories. In 1970, the school teacher Geoff Bardon had the idea of letting indigenous people decorate the school doors in their original artistic style. Payungka was among the first participants, and his painting career began from there.
Early Paintings
Payungka's early paintings contained much ritual practice and many sacred symbols. At that time, Western Desert art had not yet begun its departure from its origins in ritual performance toward contemporary art. Painting was a way for him to express his root culture while still carrying practical meaning. Sandhill country west of Wilkinkarra, Lake Mackay (1972) is, by all means, a landscape painting: it shows the geography of a sacred site near Lake Mackay. The light colors represent spinifex, while the dark areas depict the sandhills. The arrangement of the dots overlaying the painting is not arbitrary; a man as knowledgeable as Payungka knew how to vividly reproduce the effect of ancestral force on paper, and the semi-invisible horizontal lines running through the dot fields show the vibration of ancestral power at that site. The Tingarri Story is different. If Sandhill represents a departure from traditional Western Desert painting, The Tingarri Story is a faithful replication: a well-known dreaming story among Western Desert aboriginal groups.
My Country (Homeland) (1972)
Sandhill country west of Wilkinkarra, Lake Mackay (1972)
The Tingarri Story (1975)
Later Paintings
During the 1970s and 1980s, Payungka produced a large number of paintings revolving around rituals, dreaming stories, and lands. As he painted more, his technique improved. Kangaroo and Shield is a much larger painting, and its story is convoluted too. As far as we know, it is a sacred story about the Kangaroo (at the top of the painting) and the shield man (on the tracking path), and it also depicts several sacred sites around Lake Mackay. In the 1990s, however, came a turning point: Payungka dispensed with all the sacred meanings in his paintings. His later works are instead artistically plausible: in Untitled, it is clear that the symbols, traditional as they seem, have no one-to-one mapping to any sacred site.
In the mid-1990s, some of his sacred designs were used without permission by a commercial carpet manufacturer, which prompted him to paint without sacred designs while keeping the same artistic themes.
Kangaroo and Shield People Dreaming at Lake Mackay (1980)
Untitled (1998)
Go Back to the Root
The parallels between Payungka's paintings and Uta Uta Tjangala's are striking. It stops being surprising, however, once we learn that Uta Uta was Payungka's father-in-law. In The Old Man's Drawing, Uta Uta told the story of an ancestral being referred to as the Old Man.
The Old Man’s Drawing
Storytelling of a Kadaitcha Man
One need look no further than The Old Man's Drawing to find the storytelling of a Kadaitcha man. The essential part of any story a Kadaitcha man is interested in is the punishment, which is exactly what makes the story of The Old Man's Drawing interesting. [insert the story].
Timmy Payungka Tjapangati's dreaming stories are not all about punishment, if any of them are.
Oh, My Health (2011-10-19)
My health really hasn't been great lately: two days in a row of feeling exhausted without having done much. It must be the extra weight.
On Core Competency (2011-10-14)
Naive as I am, or biased as an engineer, I still think that the core competency in reforming any existing physical activity lies somewhere between engineering effort and design. The core competency of an online bank is the engineering of optimizing
‘“Good enough” is a high bar’ (2011-07-14)
Most of the time, people won’t claim that they want to make a “just good enough” product. They pursue something great, disruptive, and, in all its merit, a game changer. It is hard, it is rare, it shapes the future, and at the bottom line, it makes a shitload of money. Except for one thing: many of them failed.
(Why) do things one doesn’t like (2011-06-29)
It is a famous saying: if you don’t like it, don’t do it. Such an ignorant attitude is wrong because the human brain fundamentally cannot the
The Value of Networking (2011-05-22)
You must know one or two of these people. They are the childhood best friends of some well-known names. They know people in high positions. They are something of a legend of the early 21st century. They are the networking people.
THE BROKER
Brokers (some of you may no longer be familiar with the name) used to be a common scene on the stock market before the 1980s. Their work was to match buy prices to sell prices and make the deal happen. Then came the electronic broker systems, and nowadays most transactions are made with e-broker systems rather than human brokers.
THE SOCIAL NETWORK SITE
Some old-fashioned people claim that Facebook has made us loners. But believe it or not, it is a great tool (let’s put aside its proprietary nature) for efficiently maintaining your friendships. Virtual social networks will never fade; they will only evolve, simply because they are useful.
NEW KIND OF INCUBATOR
When you consider Y Combinator, the phenomenal startup incubator in Silicon Valley, it becomes obvious what the future looks like. It has a standardized application procedure, it uses a standardized term sheet, and most of all, it is scalable. Heck, it doubles the total number of funded startups every 6 months.
THE PUNCHLINE
Y Combinator is many things, but for one, it is an experiment in eliminating the cost of networking.
The Spirit of Entrepreneurship (2011-04-21)
After the last post, my mind could not stop turning over the common traits of the entrepreneurs I like. Let me clarify this: there are no common traits among entrepreneurs; thus, my analysis is flawed from the beginning.
You and us, we are different - why machine intelligence is inevitable (2011-02-23)
The “imbalance of power” hypothesis is interesting not only because of its biological implications, but also because it has the
Application-Driven Development in CCV (2011-02-08)
In the kickoff post for ccv, I listed one of its properties as being “modern,” which means that rather than providing a truckload of obsolete algorithms, ccv intends to provide the best-of-its-kind algorithm for each of a wide range of applications. Last September, I went further and claimed that the first 4 applications for ccv would be: 1) object matching; 2) object detection; 3) text detection; 4) 3D reconstruction. These statements set the tone for ccv development, now known as application-driven.
There is plenty of evidence of this method in the ccv code base. ccv_sample_down was implemented when I was building BBF object detection, which requires the image pyramid. ccv_sample_up, however, was not implemented until the SIFT implementation needed to up-sample images for better results. To this day, a very common image-processing feature, known as rescaling, is not fully implemented: the ccv_resample function still lacks a scale-up option, because none of the applications I have implemented needs it.
With the application-driven development method, some rarely noticed characteristics of a computer vision library surfaced. For example,
The Disruptive Technology (2011-01-19)
Most people take technology as an inevitable achievement of intelligent beings. Thus, many people like to regard technology as a force of continuous improvement in our daily life. But for any technologically sufficient race, technology should not be considered an add-on to the social structure. Rather, technology as a disruptive power
Darwin and Wallace (2010-11-08)
Darwin reached his initial insight into natural selection quite early in his years. But he anticipated criticism, masses of it. The idea of natural selection and intra-group struggle was so novel that he believed no one else would hold it. Besides, two questions, the origin of life and the origin of man, were left unanswered. Darwin was in no hurry. He experimented in his yard, collected evidence, and talked to breeders. He aimed at perfection, a theory that answered everything. And Charles Darwin had resources. He was an English gentleman, and even before his return on the Beagle, he was already a well-known naturalist. He shared the idea with a few friends but never rushed to publication; evolution needed time, and so did he.
Wallace, on the other hand, was very excited about his own discovery of natural selection. Born into a middle-class family and having spent most of his life overseas, Wallace was an eager mind struggling to be recognized by the scientific community. He was also a reader of Darwin’s Journal of Researches, but much to Darwin’s surprise, he arrived at the idea of natural selection on his own. The pressure was now on Darwin’s side.
In some sense, the scientific community is very cruel: the second discoverer gets little glory. Charles Darwin knew that, but as an English gentleman, he did help Wallace where he could. Still, the pressure on Darwin was obvious: he needed to publish now, with all the evidence he had. He had read the old evolutionist publications, the Vestiges and Lamarck’s theory, and he knew their problem: the lack of evidence. Collecting evidence had been his life’s work, and he did not have that problem. But he needed to organize it carefully, to nail it down. So Darwin avoided the harder problems, the origin of life and the origin of man, and set out to nail down the answer he was most confident in: the origin of species.
So he started writing in 1858, with a humble opening on breeding. If breeding was possible, so were variations; if human selection could create new species, so could natural selection. It went quickly: Darwin had had these ideas floating around for years. After all, it was his lifetime’s work. Darwin got all kinds of help from the scientific community, partly because he was a renowned naturalist himself, partly because the theory itself was very interesting. He also received a gracious letter from Wallace, which was a real relief for him. After two months of proofreading and corrections, the first edition of the Origin of Species was published: a 509-page book, not too big to be unreadable. In fact it was a popular read, and the science inside was plain to see. Charles Darwin needed only to sit back and harvest all the criticism he had anticipated for years.
From today’s point of view, the 19th-century scientific community was a very interesting one, and a few interesting things stand out in the case of the Origin of Species. The community was, surprisingly, quite open, I dare say even more open than the current one. That Wallace, hardworking as he was, could send his manuscript to a renowned naturalist on his own is fascinating. A scientific book could become a popular read, which is unimaginable in today’s world; even for someone as well known as Stephen Hawking, The Grand Design is still tagged as popular science. The problem of a very open scientific community was a lack of professionalism. With all my respect to Charles Darwin and Alfred Wallace, the Origin of Species deserved the criticism of lacking proof: the book lacks footnotes, and its experiments were either not repeatable or short on setup details. Darwin made philosophical arguments more than scientific ones. The criteria of scientific publication clearly changed over the following century.
The race between Darwin and Wallace was certainly an interesting one. But from the reading, it was not as deadly or cruel as I had imagined. It seems that class and social status played a big role here. Even though Wallace worked out every detail independently, it is quite unlikely he could have collected enough evidence in time to carry the theory through. As long as Darwin was willing to give him credit, Wallace seemed quite happy as a defender of Darwin’s theory. That was, to some extent, less dramatic than a death race to a scientific discovery. Gentleman though Darwin was, it is unlikely that without him, the 19th century would have failed to discover the law of evolution. The coincidence, however, should not downplay Darwin’s contribution. As far as Wallace had got, it was an interesting theory, just as interesting as Lamarck’s or the one in the Vestiges. Darwin set out differently. He did not want to become well known for proposing an interesting theory; he was quite famous already. He wanted to establish the theory, a theory that needed no Creator. That ambition is what cast him in the role of the discoverer of natural selection.
On Betting the Big will Fail (2010-09-24)
I have the little hobby of predicting the future based on patterns. The recent failure of Cuil against Google, and the upcoming challenges of CollegeOnly and Diaspora to Facebook, give me a chance to predict the pattern of small startups going up against big players in a market.
A patent, in its traditional meaning, is a government-protected monopoly. However, because it is a granted property right covering a wide range of transactions, questionable practices such as patent suppression have been invented ever since. Actively applying for patents while passively demanding royalties has now become a standalone business model: companies like Intellectual Ventures and Digitude Innovations actively seek legal means to profit from the large patent pools they have gathered through acquisitions and bankruptcy liquidations. These business practices fail the very principle of patent law, which is to reward inventors in order to better promote the advancement of technology.
On the other hand, in today’s world, a competitive market shows something interesting: Cuil, CollegeOnly, and Diaspora all chose to compete directly with those who already hold a big share of the market. To me, these startups were founded on the idea that the big players are too big to pursue some small division of the market that could give the newcomers a niche.
Introducing LimPQ Service, Concepts and Ideas (2010-09-13)
limpq.com is my newest venture to help developers escape all the madness of implementing a web-based photo management application. The web interface is only my vision of what a web photo management app should look like. The whole idea behind limPQ is to provide a simple and robust photo management system that can be easily customized and embedded. With limPQ, a developer can create a photo management app in minutes through a pure JavaScript API or a RESTful API.
LimPQ photo management is a portfolio-based system. A portfolio is the exclusive owner of any limPQ object, including photos, collections, etc. Each portfolio has full privileges over the objects it owns. In the current implementation, without an identified portfolio authorization, all objects are publicly read-only; privacy is guaranteed only by obscure UUIDs. A registered user can create several portfolios, the number depending on their plan.
Another important concept in limPQ is the floor. Floor is a generic property of every limPQ object: each object carries exactly one positive integer identifying its floor. The floor mechanism provides an easy way to implement privilege levels, and in the future, we intend to embed access control within floors.
A collection, as the name suggests, is a container of references to photos. A collection can hold as many photos as you want, and one photo can belong to several collections. If a collection merely reflected a static relationship between photos, implementing one yourself would be trivial. But a collection here is powered by NDQI. A collection contains three parts: a query string, a list of included photos, and a list of excluded photos. With the help of NDQI, a collection can reflect changes in the portfolio: a newly uploaded photo can be automatically assigned to collections (not in real time in the current implementation). In that sense, it is dynamic.
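The three-part collection above can be sketched as a small resolution function. This is a hypothetical illustration, not limPQ's actual code: the NDQI query string is stood in for by a plain predicate, since the NDQI query language itself is not described here, and the rule that explicit includes and excludes override the query is my reading of the design.

```javascript
// Resolve a dynamic collection's members against a portfolio's photos.
// `collection` has: query (a predicate standing in for the NDQI string),
// include (array of photo uuids), exclude (array of photo uuids).
function resolveCollection(collection, portfolioPhotos) {
  const included = new Set(collection.include);
  const excluded = new Set(collection.exclude);
  return portfolioPhotos.filter(function (photo) {
    if (excluded.has(photo.uuid)) return false; // exclusion always wins
    if (included.has(photo.uuid)) return true;  // explicit membership
    return collection.query(photo);             // dynamic membership via query
  });
}
```

Under this model, a newly uploaded photo that matches the query appears in the collection at the next resolution, which matches the not-real-time behavior described above.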
The Third Thing (2010-08-24)
I am a tech person, which means I spend a lot of time on technology: neat ideas, implementations, and visions. While diving into the details of implementation, one thing scared me: that by obsessing over small details, I was losing sight of the bigger picture. Once a year, I usually spend a week or more sitting back and thinking, thinking really hard, about how, and what, the future will be. Even as a naive optimist, I still believe the future is unpredictable. Paradigm shifts come and go, and they break everything. And even from a utilitarian point of view, the thinking is more about contriving fantastic words to sell ideas. It is, in a selfish way, a fulfillment of the needs of visionaries throughout the history of civilization.
When history comes to a turning point, very few have a clear picture of how big it is. Yet stripped of context, all modern innovations look like magic out of the void.
The Fallacy of Military Occupation (2010-04-28)
The march to Baghdad in April 2003 symbolized a triumph of the United States military. However, the premature occupation policies in Iraq caused more chaos and uncertainty. Conventional wisdom suggests that the longer a military occupation lasts, the more chaos it creates. Contrarily, “the Surge,” Bush’s “new way forward,” which essentially meant sending more troops to Iraq, has harvested many positive effects over the past three years. To many politicians, it now seems that sending more military force can solve the problem. However, the successes in Iraq proved only that enforcing more police power and regulation can improve the local security situation; they cannot justify the necessity of sending more troops in similar situations in the future.
A military force is not the best police power for enforcing law and order; it can serve as one for only a very short time. The Allies successfully used military forces to maintain order in postwar Germany and Japan, but in those cases, the Allied forces handed the security tasks back to local police after a short period of occupation. In Iraq, by contrast, the too-quickly-ended war left the strategists in Washington with a postwar plan that was not fully thought out yet had to be implemented. During de-Baathification, most local police forces were dismissed, and it took much longer to assemble an effective security force. In the meantime, the U.S. military got more involved in local security matters. The U.S. soldiers on the ground were ill-equipped to recognize civilian interests and protect them from terrorists. Soldiers are trained to quickly recognize a threat and act on it, and they tended to be on edge in every situation. The recent video from Wikileaks showed how frightened soldiers, judging the situation very poorly, killed Reuters reporters. It is reasonable for soldiers to kill military-like targets in war, but such actions against civilians became unacceptable in the rebuilding process. Soldiers are not policemen, and enforcing law and order with a military force is expensive in terms of both time and money.
Local resident involvement was a more effective factor in improving Iraq's security situation. During "the Surge", the U.S. government settled an agreement with Sunni militias in the west to secure the border with Syria. The Iraqi government forces also won an important battle against the Mahdi Army during that time. These events marked the establishment of the Iraqi local police force. Many Iraqis felt that the U.S. occupation, beyond its military presence, brought very few benefits to Iraqis in general. Civilian casualties caused by crossfire further put the U.S. occupation in a bad position. Even though it had many positive effects on the overall security situation, sending more troops to Iraq to enforce regulation was harder for ordinary Iraqis to accept than local police. More than 65% of Iraqis objected to the U.S. occupation, which arguably further weakened the effect of "the Surge".
If the success of "the Surge" was built upon the reinforcement of police power, then for similar future operations we only need to send more security forces instead of raw troops. If the U.S. wants a more aggressive foreign policy, it should form a new bureau, much like the Department of Homeland Security for domestic security issues, to deal with foreign security issues. Instead of sending battle troops for "the Surge", we could deploy people from the new bureau who are trained to handle security issues. In this way, we might eliminate many of the negative effects of sending more soldiers to a foreign land and avoid hatred of the U.S. occupation.
Some may suggest that fresh battle troops are better at dealing with terrorism, and thus that the positive effect of "the Surge" owed more to the new troops effectively defeating terrorist threats. Perhaps a local police force is not the best option against terrorist threats, but in my opinion, U.S. cooperation with the Sunnis was a larger contributor to the drop in foreign terrorist activity. Many foreign fighters, namely those working for AQI, were believed to enter the country through the border with Syria. By cooperating with Sunni militias to secure that border, the U.S. had the chance to stop incoming foreign fighters and defeat the terrorists already in Iraq. The fact is that during "the Surge", more troops were dispatched for jobs like securing streets and public gathering places, exactly what a normal police force would do.
An Unnecessary War (2010-04-14)
In 1991, after the liberation of Kuwait, the U.N. coalition chose not to invade Iraq and left Saddam's regime untouched. Twelve years later, when it was crystal clear that international sanctions would not sway Saddam's regime, policy makers in Washington started to worry about what would happen if the deterrent failed. The consequent threat of WMD (Weapons of Mass Destruction) as a failure of deterrence urged the United States to take action. However, the answer from President Bush's administration was a war on a flimsy foundation [1]. Instead of a full military invasion, the United States could have overthrown Saddam's regime by leveraging international political and military pressure and providing actual military support to revolutionaries within Iraq.
To overthrow Saddam's regime, the U.S. could have played a more active role in Iraq by enforcing democratic reform. After the Persian Gulf War, Saddam would do anything to avoid a direct military confrontation with U.S. forces. During the Persian Gulf War, Saddam did not launch biological/chemical weapons against coalition forces; instead, he withdrew his military forces from Kuwait rather quickly. By 1991, Saddam had already realized there was no point in confronting the only superpower in the world. Just before the ultimatum, Saddam Hussein permitted the nuclear weapons inspectors' work in Iraq and reached a "limited" agreement [2]. It was a positive signal that Saddam Hussein would comply with U.S. requests if they came in a face-saving way. However, the demand in the ultimatum [3] that Saddam and his family leave Iraq within 48 hours was too aggressive a move. There was no way Saddam would willingly give up all his power and embarrass himself by leaving Iraq. If only the U.S. had worked more strategically, first weakening the regime by wisely applying political and military pressure in a diplomatic way, it would have been much easier to remove Saddam's regime from the inside.
To overthrow Saddam's regime, the U.S. should have supported local revolutionaries instead of dispatching its own troops. Though the U.N. forces did not support the uprisings of the Shi'a and Kurds in 1991, the two groups remained opposed to Saddam and his regime. However, because of short supplies of food, water and medicine, they were unlikely to organize an effective revolution. With U.S. support and continuous weakening of the regime, a bottom-up revolution would likely have happened, and a new leader would have emerged from it. In fact, immediately after the military invasion, a Shi'a group of rebels was quickly organized under al-Sadr ("the Mahdi Army") [4]. It shows that once Saddam's regime weakened, a grassroots revolution would have happened. A revolution within Iraq would have saved the U.S. a lot of trouble in the reconstruction of Iraq.
A full military invasion of Iraq would terrify neighboring nations in the Middle East, and the exercise of preventive war set a bad example for the rest of the world. The action showed that the U.S., as the only superpower, can use military means to remove any regime it dislikes. Iran, also part of the "axis of evil" [5], would want to prepare for a potential U.S. invasion. North Korea would act more erratically in military terms and more actively seek nuclear weapons. A major international military confrontation without the approval of the U.N. Security Council further weakens the role of the U.N.S.C. in important international issues. By bypassing the U.N.S.C., the U.S. may make decisions more quickly in a unipolar world, but it is not a wise move toward a multipolar world in the new century. It would be very difficult to argue back if, in the future, a powerful dictator cites the 2003 U.S. action (proactive self-defense) as justification for his own invasion.
The popular perspective on the Iraq problem would argue that military invasion was our last resort, since all non-military attempts to disarm Iraq had failed. However, this is not true. The attention the Iraq problem received in 2002 was largely a product of the "axis of evil" speech; apart from Iraq specialists and long-term advocates, few people in the U.S. paid attention to the poor Middle Eastern country before that. If we had invested more resources strategically, we would have realized that there was more than one solution to the Iraq problem. Military intervention is the hard way, costing U.S. soldiers' lives. A soft way, weakening and then overthrowing the regime, would have been less expensive in terms of human lives.
[1] John J. Mearsheimer and Stephen M. Walt, "An Unnecessary War," Foreign Policy, No. 134, pp. 50-59.
[2] BBC News, "Timeline: Iraq Weapons Inspections." http://news.bbc.co.uk/2/hi/middle_east/2167933.stm
[3] George W. Bush, "The Ultimatum to Saddam Hussein." http://edition.cnn.com/2003/WORLD/meast/03/17/sprj.irq.bush.transcript/
[4] Adam Trusner, "2005 in Iraq."
[5] George W. Bush, "The 2002 State of the Union Address." http://georgewbush-whitehouse.archives.gov/news/releases/2002/01/print/20020129-11.html
Religious Pitfalls in Baath Movement (2010-03-29)
The Baath movement of the 20th century tried to solve the Arab world's problems by introducing modern Western ideas and practices in order to create one united Arab nation. This cross-nation political organization of the 1940s was founded on three principles: Unity, Liberty and Socialism. These three were shared pursuits among most people in the world in the early 20th century. Under these principles, the Baath Party should never be considered a religion-oriented political organization. Its intentional neglect of the religious aspect of Arab life helped the cross-border movement spread to a wider population, but it also seeded potential divisions and conflicts inside and outside the organization itself.
The secular aspect of the Baath Party attracted people from all walks of life and essentially buried the seed of later disruption. Pan-Arab nationalism, the central focus since the party's foundation, is a secular ideology that arose in opposition to the Ottomans' Turkish focus [1]. The direct inheritance of pan-Arab nationalist ideology, together with its secular character, let the Baath movement reach a larger population in the Middle East. In its early days, the Baath movement built more schools and hospitals than the Muslim Brotherhood, a religion-oriented organization, had in 20 years [2]. The Baath Party became quite popular among both Shi'a and Sunni groups.
By setting aside the traditional Islamic religious structure, the Baath Party went ahead and created its own sprawling and inefficient management layers. For a fast-growing organization, the process of creating a new structure, instead of leveraging the established religious one, could be painful. The leadership of the original founders was questioned; the principles (social justice, etc.) were disputed; the leading role toward pan-Arab unity was still unclear [1]. The quick failure of the United Arab Republic (UAR), which the Baath Party founders had pushed hard for, brought all the disagreements to the front. The later division of power between the Regional Commands and the National Command was a direct result of this inability to construct an efficient management structure for a large organization. Notably, the founders Aflaq and Bitar never regained their power in Syria. The Regional Commands, mostly run by military men, became a parallel structure, while the National Command, which should ideally have been the central control center, was isolated and degenerated into a mere symbol of the Baath Party.
The basic secular ideology of the Baath Party was challenged in the late 20th century. The principle of "unity" proved to be unrealistic [3]. The failure of the UAR, and Iraq's unwillingness to join it, showed how problematic it was to merge existing governments and distribute power evenly among the rulers. The wrongness of the socialism component became evident with the dissolution of the Soviet Union. When two of the three principles ("unity", "socialism") were broken, the Baath Party felt the need to find other principles to justify its existence. Saddam Hussein, the leader of the Baath Party in Iraq, started to seek a way to position the Baath Party, and himself, as the leader of the Islamic world. In the hope of uniting the Islamic world, he also passed a law to encourage intermarriage between Shi'a and Sunni groups. The Iranian revolution of the late 1970s was started by, and its leadership transferred to, a religious group; it reinforced the impression that a secular ideology was not persuasive enough in the Arab world. The struggle to shift the Baath Party's ideology toward a more religion-oriented one became transparent when Saddam Hussein painted himself as a direct descendant of Muhammad [1]. By that time, the lack of a religious aspect in the Baath Party's propaganda was widely acknowledged.
Another theory might suggest that the internal struggle between different national ruling powers was the main contributor to the failure of the Baath movement. However, it fails to explain why the later European Union (EU) successfully united even more nations. The cultural similarity between Arab nations is much higher than that between EU nations, and in the Ottoman period they were physically united. The struggle among ruling powers is only one of many solvable barriers on the road to a united nation, if only they could work under a unified religious framework.
The Baath movement made a series of aggressive moves in the belief of creating one united Arab nation, but it failed. On the other hand, Islam, a non-aggressive religion, penetrated and thrived in the same area for hundreds of years. On the way to a united Arab nation, the doctrine of the Koran can never, ever be forgotten.
[1] John F. Devlin, "The Ba'th Party: Rise and Metamorphosis," The American Historical Review, Vol. 96, No. 5.
[2] Adam Trusner, "The Baath Movement," Iraq War Course, Spring 2010.
[3] Barry Rubin, "Pan-Arab Nationalism: The Ideological Dream as Compelling Force," Journal of Contemporary History, Vol. 26, No. 3/4.
[4] Adam Trusner, "The Iran Revolution," Iraq War Course, Spring 2010.
Review Vector Boosting with Bag of Words (2010-01-28)
While reviewing vector boosting, I found the idea tempting: it is a way to vectorize a region extremely fast, with learning ability built in. Sounds familiar? Modern dense descriptors do exactly that, but without the learning ability. More than that, if we can learn a vector from a given region, why not just learn the bag-of-words model directly? When you consider the use case of vector boosting (different face poses), the output vectors are generally sparse, and so is the bag-of-words model. What a great fit.
Several constraints have kept the training of a vector boosting model for interest point detection/description untouched so far. 1) Training a large vector is prohibitively slow: a small usable codebook has tens of thousands of classes, while today's typical vector boosting techniques only work on a handful of classes. 2) Semi-/un-supervised boosting is needed: ideally, we don't want to rely on an external detector to prepare the training data; otherwise the trained system may just be sub-optimal relative to the external detector and thus useless. 3) Vector boosting requires orthogonal vector projection; for 10,000 classes, that means a 10,000-d vector.
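The bag-of-words model mentioned above can be made concrete with a toy sketch. This is purely illustrative code of my own (the `assign_codeword` helper and the tiny 2-D codebook are invented for the example); a real codebook would have the tens of thousands of entries that cause the scaling problem in point 1.

```python
import math

def assign_codeword(descriptor, codebook):
    """Index of the nearest codeword by Euclidean distance."""
    best, best_dist = 0, float("inf")
    for i, word in enumerate(codebook):
        d = math.dist(descriptor, word)
        if d < best_dist:
            best, best_dist = i, d
    return best

def bag_of_words(descriptors, codebook):
    """Histogram of codeword assignments over a region's descriptors."""
    hist = [0] * len(codebook)
    for desc in descriptors:
        hist[assign_codeword(desc, codebook)] += 1
    return hist

# Toy 2-D codebook with three "visual words".
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
descriptors = [(0.1, 0.1), (0.9, 0.1), (0.2, 0.9), (0.0, 1.1)]
print(bag_of_words(descriptors, codebook))  # sparse histogram: [1, 1, 2]
```

With most regions only activating a few codewords, the histogram is sparse, which is the structural similarity to vector boosting's sparse output noted above.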
A new approach to the Internet (2010-01-14)
The hard part of censoring the Internet is that every connected device is treated equally: it can connect to other devices AND it can accept connections from other devices. This connectivity causes a lot of trouble for anyone trying to block information. Let's imagine a world where every legal protocol is either censored or blocked; it is still possible to sneak encrypted information out through a legal channel (forged requests, etc.). The dilemma is that you cannot block every IP address outside a region. If you do so,
Why China is not the Future (of Business) (2010-01-13)
Google is pulling out of its China operation. Some posts say this will damage Google's long-term growth. The implication there is that China is the future of business, and that by quitting the Chinese market you are losing the future.
The Next Ten Years (2010-01-12)
Medium-term forecasting is a thankless job: few predictions come true, and they are easily derailed by unforeseeable events (the development of the space industry, for example, was severely affected by the collapse of the Soviet Union). Moreover, unlike short-term forecasts, individuals cannot make tactical adjustments based on a medium-term forecast, which greatly reduces its practical value. Take the four-year forecast I made the year before last; in hindsight, at least these points were wrong: 1. It is now clearly impossible to raise CPU clock speeds from 3GHz to 5-6GHz within two years; 2. Storage did not become household-centric but instead moved to centralized storage, and the balance has already tipped that way; 3. Nokia is dying much faster than imagined, while Android and the iPhone are developing much faster; 4. Nobody buys home storage devices, and nobody makes them, because storage has shifted irrefutably to the center; 5. Roomba never hit an inflection point; on the contrary, that market has grown ever more steadily.
Still, medium-term forecasts paint a hopeful vision of humanity's future, so such a forecast, however thankless, is worth venturing. By convention, before making technology predictions one should first make some conservative predictions about social structure. Over a ten-year span, however, such conservative predictions seem out of place. The predictions below will therefore cover both major social changes and major technological changes.
In the next ten years, private space companies will become the dominant force in space programs. With private enterprise involved, we will see the beginning of the next golden age of spaceflight. The scale of operations in space will grow severalfold, even tens of times; that is, we will be able to lift large equipment into space (launched in batches and assembled in orbit). Thanks to advances in life-detection technology, single-celled life will be discovered in outer space, but the discovery of multicellular, complex, intelligent life will remain far off.
On the energy front, battery capacity will improve only slowly, and supercapacitors will find use only in backup power. But fuel cells, long held back by expensive transport and manufacturing, will have their spring. Thanks to the spread of nuclear power and various new energy sources, electricity will become cheap enough that electrolyzing hydrogen is economically viable. We will therefore have multimedia handhelds that run for over a month and electric cars that refuel once a week. Although launch costs will have fallen to around one million dollars per ton, aviation will see little progress; even large airliners will still burn oil. Personal aircraft will remain a fantasy.
These ten years will not be peaceful, however. The major powers will be drawn into a war over resources and influence fought in a third country. It will be a restrained war of precision strikes and low casualties, but it will topple some regimes. Though billed as low-casualty, thanks to the deployment of high technology it will be the most graphically bloody and most thoroughly reported war in history.
On Startup Teams (2009-11-28)
A friend recently told me he is going to start an Internet company. Quite a few friends around me want to start something, or already have, and everyone gets stuck on the same questions: how big should the team be, how should it be composed, and how should the equity be split?
On how to split equity, all I can say is: split it sensibly. It is ultimately a negotiation among the team members, and outsiders have no say. On team composition, though, I think there is something worth discussing.
First, it must be acknowledged that any kind of founding team can succeed. But just as rational people never buy lottery tickets, I would only pick
Memory Efficiency is Important for Tree Structure (2009-10-29)
When implementing a general tree structure, people sometimes just forget how important memory efficiency is. At least, I did. The first mistake that made me aware of this issue was in implementing a radix tree (prefix trie). A radix tree is a tree structure with more than 2 children per node (26 or 16 children are common patterns). That's where common knowledge is challenged. When implementing a binary tree, we always allocate two pointers (left node/right node). If we follow that convention and allocate 26/16 pointers per node, it becomes a huge waste. When a tree structure needs to hold millions of objects in memory (which is common these days), a constant waste of 4*16 bytes per node is large.
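A minimal sketch of the trade-off, in Python for brevity: the `ArrayNode`/`SparseNode` names are mine, and exact byte counts vary across interpreter versions, so the example only compares the two child layouts rather than quoting absolute numbers.

```python
import sys

class ArrayNode:
    """Naive radix-tree node: one fixed slot per possible child letter."""
    __slots__ = ("children", "is_word")
    def __init__(self):
        self.children = [None] * 26  # mostly None in practice
        self.is_word = False

class SparseNode:
    """Memory-conscious node: allocate entries only for children that exist."""
    __slots__ = ("children", "is_word")
    def __init__(self):
        self.children = {}           # letter -> node, only when present
        self.is_word = False

# A node with a single child wastes 25 slots in the array layout.
a, s = ArrayNode(), SparseNode()
a.children[ord("c") - ord("a")] = ArrayNode()
s.children["c"] = SparseNode()
print(sys.getsizeof(a.children), sys.getsizeof(s.children))
```

Multiplied by millions of nodes, the difference between the two child containers dominates the tree's footprint, which is exactly the waste described above.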
Most of the time we assume a tree structure is more memory efficient than a hash table.
Why I am not Optimistic about Innovation Works (2009-10-09)
I feel I should find some time to write a few words about Dr. Kai-Fu Lee's new venture.
The title says it plainly: I am not optimistic about Dr. Lee's new project.
Dr. Who or How I Learned to Stop Using Random Data and Love the Real World (2009-09-17)
The first pain with random data starts with the implementation of the spill tree data structure.
Patterns of Business Success (2009-09-07)
Ever since middle school, I have been fascinated by the puzzle of business success: people from all walks of life can become huge successes in business. Can I find any pattern in how these people succeed? Books like Built to Last and Good to Great try to figure out pretty much the same thing.
From time to time, I realize that luck plays a big role in business success. However, that doesn't necessarily mean randomness is the only pattern in business success.
Target has been Vectorized and Quantized (2009-08-24)
A common method in computational experiments is to vectorize something and then quantize it in order to get a good, discrete representation of its essence. One convenient thing about multimedia is that every digital form of media is naturally a vector, and sort of quantized already. That is to say, to extract a better representation of media, we only have to apply different linear/non-linear transformations.
Vectorized data has many advantages: it is easy to compute with, manipulate and visualize. Because of these advantages, vectorizing documents for comparison, search, and so on has become standard practice. Quantization has also served us well: after quantization, it is almost straightforward to apply semi-naive Bayes, histograms, etc.
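As a toy illustration of the vectorize-then-quantize pipeline (the function names, vocabulary and document are invented for the example; note that Python's `round` uses banker's rounding, so 1.5 rounds to 2):

```python
def vectorize(text, vocabulary):
    """Vectorize a document as raw term counts over a fixed vocabulary."""
    words = text.lower().split()
    return [words.count(term) for term in vocabulary]

def quantize(vector, step):
    """Uniform quantization: snap each component to the nearest multiple of step."""
    return [round(v / step) * step for v in vector]

vocab = ["cat", "dog", "fish"]
doc = "cat cat dog cat dog fish fish fish fish"
v = vectorize(doc, vocab)  # [3, 2, 4]
print(quantize(v, 2))      # coarser, discrete representation: [4, 2, 4]
```

The quantized vector loses precision but gains a small, discrete alphabet of values, which is what makes histogram-style methods straightforward afterwards.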
There are several missing pieces worth mentioning. We generally believe human eyes vectorize what they perceive, but what level of vectorization we attain is a bit vague. Our current digital imaging technology forms a vector representation of an image based on per-pixel intensity. Though there are similarities at the low level (eye function), humans can break an image into a higher-level, more compact representation far more efficiently. If we assume it is still a vector representation, it should be a weighted, dimension-reduced vector. Observation of human skimming suggests that humans may have the ability to automatically vectorize a document by skimming.
Some Misconceptions in Haar-like Feature Detection (2009-08-02)
Adaboost training is simple. HAAR-like features are simple. And the method for fast detection was discovered at the beginning of the 21st century. What's the big deal here? Well, for a long time, I held some misconceptions about these widely used fast-detection techniques.
The adaboost classifier is a linear classifier (a weighted vote over weak learners). However, a linear classifier is not necessarily a bad classifier, especially in high-dimensional spaces. HAAR-like features lie in a very rich space. It is redundant,
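For concreteness, the fast-detection trick alluded to above is the integral image (summed-area table), which lets any rectangular HAAR-like feature be evaluated in constant time. A minimal sketch, with function names of my own choosing:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows < y, cols < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def box_sum(ii, top, left, height, width):
    """Sum over any rectangle in O(1) using four table lookups."""
    return (ii[top + height][left + width] - ii[top][left + width]
            - ii[top + height][left] + ii[top][left])

def haar_two_rect_vertical(ii, top, left, height, width):
    """Two-rectangle HAAR-like feature: top half minus bottom half."""
    half = height // 2
    return (box_sum(ii, top, left, half, width)
            - box_sum(ii, top + half, left, half, width))

img = [[1, 1, 1, 1],
       [1, 1, 1, 1],
       [5, 5, 5, 5],
       [5, 5, 5, 5]]
ii = integral_image(img)
print(haar_two_rect_vertical(ii, 0, 0, 4, 4))  # 8 - 40 = -32
```

Because each feature costs only a handful of lookups regardless of its size, a cascade can evaluate thousands of such features per window and still run fast.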
The New Capitalists (2009-06-16)
On the question of how capitalists make their profit, liberal economists and Marx have a major disagreement. Marx held that value is an intrinsic property of a good and does not change with external conditions.
The Role of Reason in Evolution (2009-05-31)
Rationalists often claim that since evolution has produced intelligence, we can dispense with evolution by natural selection and instead, through rational logical reasoning, design and find the fittest solution. The arrogance of this idea lies in ignoring that rational reasoning does not exist independently of evolution; it is itself a product of evolution, so it cannot be permanently reliable, and naturally cannot be the optimal way to "find the fittest". Moreover, defining the constraints of optimality is itself difficult, which makes deriving the fittest population through rational deduction even harder.
But this argument does not rule out the possibility that reason can determine the fittest.
Everyone wants to be Steve Jobs (2009-04-02)
Since Steve Jobs introduced the idea that design is the center of a product, suddenly everyone has a great excuse.
Yesterday, I started to work on a new kind of TODO list, or, to call it something fancier, "a personal deadline scheduler".
A TODO list is easy to write. The idea is simple: keep a list of items and assign each a status (completed, in-progress, abandoned, etc.). A fancier one may have participants, milestones and deadlines; those are called task trackers. TODO lists tend to be small, covering things that can be done in a few hours at most, while task trackers tend to cover multi-day items. But neither of them "ticks". Thus, there is no way to prevent an item from sitting on your TODO list for a year with zero progress.
Speaking of progress, it is subjective and in general hard to measure. For example, you can sit there for a whole day and still have zero lines of code written. Many task trackers let you estimate your daily progress, and you often overshoot or undershoot. That is not a great way to meet deadlines, because it never saves you from being stalled. FIFO is different. FIFO aims at things that take multiple hours but less than two or three days: a prototype of a feature, a test suite, a minor feature, or a few bug fixes. It doesn't measure your progress, but in general, it helps you make progress on ALL the items.
The secret sauce is "ticking". Once you start a FIFO item, the clock starts to tick. For an item in FIFO, you need to specify two things: the item name and the estimated time required. In the current implementation, you can specify both in one sentence, like "CUDA on-device slab pool, 3 hrs". Once the item is entered, it starts recording the time spent. You can pause or resume the timer at any time, but either way, a timer records the time spent.
The beautiful part comes when you have several items at hand. Whenever you don't want to work on the current item, you can click the "R" button, and that item will be put at the end of the list. The other way to move on is when the interval elapses. The default interval is 1 hour; once you have spent more than 1 hour on a task, it is automatically moved to the bottom. In other words, it works exactly like the deadline scheduler in your operating system, which keeps you making some progress on each item instead of letting one of them sit there for a whole year.
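The rotation mechanics described above can be sketched in a few lines. This is a hypothetical reconstruction, not the code behind http://fifo.me/; the `Fifo` class, its method names and the minute-based ticking are all my own assumptions:

```python
from collections import deque

class Fifo:
    """Minimal sketch of the FIFO rotation idea described above."""
    def __init__(self, interval=60):
        self.interval = interval  # minutes before auto-rotation (1 hour default)
        self.items = deque()      # front item is the one being worked on
        self.estimate = {}        # item -> estimated minutes ("3 hrs" -> 180)
        self.spent = {}           # item -> minutes ticked so far

    def add(self, name, estimate):
        self.items.append(name)
        self.estimate[name] = estimate
        self.spent[name] = 0

    def tick(self, minutes):
        """Record work on the current item; auto-rotate when the interval elapses."""
        current = self.items[0]
        self.spent[current] += minutes
        if self.spent[current] % self.interval == 0:
            self.items.rotate(-1)  # like an OS scheduler: move on to the next item
        return self.items[0]

    def skip(self):
        """The "R" button: push the current item to the back of the queue."""
        self.items.rotate(-1)
        return self.items[0]

f = Fifo()
f.add("CUDA on-device slab pool", 180)
f.add("write test suite", 120)
print(f.tick(60))  # an hour passes -> "write test suite" becomes current
```

The deque keeps the cycle cheap: rotating to the next item is O(1), so no item can monopolize the front of the list for long.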
At this time, I only have a demo at http://fifo.me/, but feel free to try it out and comment. Things like server-side persistence, Facebook frictionless sharing will come. Trust me, I will make some progress, because I am using FIFO now.
A patent, in its traditional meaning, is a government-protected monopoly. However, because it is a granted property right that covers a wide range of transactions, questionable practices such as patent suppression have been invented ever since. Actively applying for patents and passively asking for royalty fees has now become a standalone business model: companies like Intellectual Ventures and Digitude Innovations actively seek legal means to profit from the large patent pools they gather through acquisitions and bankruptcy liquidations. These business practices fail to serve the very principle of patent law, which is to reward inventors in order to promote technological advancement. On the other hand, in today's world, competitive markets show great success in allocating resources. If we can create a competitive market for patents through some minor modifications of current patent practice, we could expect many bad business practices caused by our inefficient patent allocation to go away.
A patent is considered property, and thus comes with full property rights. In practice, that usually means companies are the ones who exercise these rights. This makes sense, because individuals usually lack the necessary means to exercise their rights on patents, especially when they need to collect royalty fees, investigate patent infringements, etc. However, two differences between a corporation and a natural person make patent abuse easier. First, a company doesn't hold moral values; its purpose is to make profit, so it will operate at the edge of what the law permits whenever that is to its advantage. Second, a company can be bought or restructured, which can turn its original purpose upside down. In many patent suppression cases, the patents involved were bought through bankruptcy liquidations or acquisitions.
The Essence of a Competitive Market
A competitive market for patents requires two main components. First, there must be sufficiently many patents that can be used to solve a class of similar problems. Second, there must be no way for someone to ban a certain product through patent infringement claims. The first requirement ensures that the price of a patent use (in the form of a royalty fee) will be optimal. The second requirement ensures that there will be a price for a patent use (a royalty fee) at all. To meet these two requirements, a patent institution should widen its acceptable range of patents, and only a limited range of property rights should be granted with a patent: namely, the right to transfer and the right to exclude should be stripped out. It may feel strange that creating a competitive market requires a stripped-down set of property rights. The reason is that the competitive market discussed here is not a market for patent exchange; rather, it is a competitive market for the right to implement a certain patent (patent use).
Stripping down property rights is usually problematic, because many restrictions can be circumvented through specific contract arrangements. For example, if the right to sell a property is restricted, one can still sell the right to use the property through careful contract arrangements, which normally has the same effect as selling it. This is not the case for patents, because of their unique characteristics. The restriction on the right to transfer can actually be enforced under the premise that the assignee is a natural person (and thus cannot be bought or restructured). As for the right to exclude, it is still possible to impose a royalty fee in an unreasonable range that effectively excludes someone from using the patent. However, such a royalty fee would still be subject to a fairness principle (no arbitrary price discrimination). Since the patent is always granted to a natural person (who needs to license it to companies in order to profit), it is hard to charge someone an unreasonable price that effectively excludes him/her from using the patent while still profiting from it.
A competitive market means that one can "shop" for the patents (obtain permission to use them for a royalty fee) required for the implementation of a specific product, with enough alternatives to pick and choose from. In software design, for example, it means choosing (if one has to) a more affordable software patent that solves a similar problem. The market mechanism ensures that the royalty fee for an overly broad patent (the kind usually used for patent suppression) is driven down significantly, simply because there is too much competition from other, more specific patents. In a fully competitive market, the price is driven down to the original cost (no profit margin), which in the case of a patent is zero. Thus, for a patent with too many competitors (a nearly fully competitive market), the royalty fee will be zero. Therefore, in a competitive market, low-quality patents will be eliminated (become irrelevant).
Three Proposed Changes to the Patent Institution
As discussed earlier, for a patent institution, granting a set of stripped-down patent rights to a natural person is sufficient to encourage the establishment of a competitive market for patents. In the following section, I will discuss why these three elements can lay the foundation of such a market.
A natural person usually doesn't possess the legal expertise necessary to enforce royalty fees in cases of patent infringement. A delegate is needed for him/her to fully exercise the rights on a given patent. The delegation mechanism may well be a for-profit organization capable of representing the interests of that natural person. However, this mechanism is different from the earlier for-profit organizations designed for patent suppression, because the delegates don't actually own the patent, and their end goal is to collect royalty fees rather than infringement settlement fees. In any circumstance, the natural person can always overturn a judgment made by the delegation organization when its practice conflicts with moral values the person holds. In essence, the delegation organization serves as a medium that minimizes, for both parties, the cost of licensing and of obtaining a proper license. The analogy is the current patent pool structure, in which certain companies form an organization to manage the set of patents on a specific technology so that they can avoid the legal costs and labor of cross-licensing. It has become evident that a patent pool is often the only viable way to commercialize a complex technology, because nowadays the development of a complex technology usually requires cooperation across various organizations (industry research labs, university research groups, national labs, etc.).
The non-transferability requirement ensures that the natural person always retains the profit from his/her invention. It also prevents a delegation organization from turning to patent suppression by acquiring the patent itself. The non-excluding requirement serves the same purpose of restraining organizations or persons from patent suppression. Would the non-excluding requirement alone be sufficient to prevent patent suppression? It seems that once a patent is licensed, with the non-excluding requirement and the fairness principle, it would be impossible to handcuff a company that uses the patent. However, without the "natural person" and non-transferability requirements, it would still be possible to use a patent without licensing it (e.g. the owner produces instances based on the patent) and effectively exclude everyone else by asking a prohibitively high royalty fee. In this way, the three requirements I propose are the essential minimum to prohibit patent suppression.
Prohibiting patent suppression is essential for creating a competitive market; otherwise, one could arbitrarily ban a certain license on a patented instance and damage the consistency of the market. On the other hand, enabling new delegation organizations makes the competitive market preferable by offering the incentives necessary to form it. The market formed by these delegation organizations is competitive on two grounds. First, the delegation organizations will compete for inventors and may provide terms more favorable to inventors depending on the value of the patented inventions, since these organizations work in a market with no particular competitive niche other than legal and negotiation expertise (near-perfect competition). Second, the patent pools the delegation organizations collect will compete with each other when they have similar applications (much like software patent pools such as WCDMA vs. CDMA2000). If more inventions are patented, more of them will have similar applications; this can be encouraged by lowering the application fee and streamlining the application process. By pitting patents against each other, competition will drive down royalty fees and help evaluate the value of an invention through market forces.
Implications of Proposed Changes
The proposed changes to the current patent institution are minor, but their impact can be profound. I will discuss several implications in the following sections.
The “natural person” requirement exploits the fact that companies can be bought or restructured but a natural person cannot. Another difference between a natural person and a company, not discussed in the earlier sections, is morality: a natural person can be held to a higher moral standard than a company. In pharmaceutical patent cases, which by nature demand a higher moral standard than most other patents, this can be a useful feature. In particular, if a pharmaceutical patent proves vital to poor countries, people can always petition its inventor with the expectation of obtaining certain exemption terms. This exploits the fact that a person can be persuaded more easily than a company, because a company is purely profit-driven. It does not solve the problem of particular interest for pharmaceutical patents (how to promote health), but it does hint at a new way of thinking for patent cases where higher morality is required.
Since a patent is bound to a particular “natural person”, would a company be incentivized to file patents and always fill the “natural person” field with a particular person (e.g. the CEO or chairman of that company)? For example, if the assignee can no longer be Apple Inc., would it be incentivized to name Steve Jobs as the assignee of all the patents it files? The incentive is real: in this way, the company would effectively retain the patents and keep finer control over everything it filed. This may well be the only remaining incentive for large companies to fund research, because the direct return from research is diminished under these new modifications. It is therefore the due-diligence responsibility of the patent institution to make sure that patents are attributed to the right person.
Since large companies can neither exclusively produce a patented product nor directly collect royalty fees, the only way for them to profit from research they fund is to take a cut of the inventor's royalty revenue, which is likely to be smaller than what they could originally obtain through monopoly. The diminished interest of large companies in funding research may hurt research in general for a short time, but it also provides a unique opportunity for the rise of independent research groups. An independent research group will also take a cut of the inventor's royalty revenue, but because inventors in an independent research group are likely to license their patents to more people at more favorable prices, the revenue for an independent research group will be at least as large as what large companies would get by funding the research themselves. University research groups and national labs should not be affected by these changes.
The traditional rationale for patents is that granting a patent to its inventor creates a temporary artificial monopoly, which in turn generates enough profit to incentivize the inventor, over the long term, to keep producing quality inventions. With these new changes, especially the non-excluding rule, an artificial monopoly can no longer be created. However, the spirit of patenting remains. It is hard to quantitatively assess whether inventors' revenue would decrease or increase under the new scheme; either way, that is something best left for the market to determine.
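As a toy illustration of why the revenue outcome is ambiguous, consider the following sketch. All numbers are hypothetical, and both function names are my own; it merely contrasts a high-margin monopoly on a small volume with thin royalties collected from many competing licensees.

```python
def monopoly_revenue(price, unit_cost, units):
    """Inventor (via an exclusive producer) captures the full margin on every unit."""
    return (price - unit_cost) * units

def licensing_revenue(royalty_per_unit, units_per_licensee, licensees):
    """Inventor collects a small per-unit royalty from many competing producers."""
    return royalty_per_unit * units_per_licensee * licensees

# Hypothetical numbers: a low per-unit royalty spread across many licensees
# can match or exceed a high-margin monopoly on a smaller sales volume.
print(monopoly_revenue(100.0, 60.0, 10_000))   # 400000.0
print(licensing_revenue(5.0, 20_000, 5))       # 500000.0
```

Whether broad licensing actually expands total volume enough to offset the lost margin is exactly the empirical question the paper leaves to the market.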
To make the market competitive on the ground of patent licensing, we should grant more patents than before. This requires some compromises in the way we evaluate patents. Since we have waived the excluding requirement, any invention that meaningfully modifies the public commons should be patentable. This will not drain the public commons, because the royalty fee for a patent that is too “broad” (in other words, too close to the public commons) will be close to zero. The competitive-market approach ensures that less meaningful inventions will eventually be ruled out. The new evaluation ensures that we patent as much as possible, greatly enriching the patents available on the competitive market and thus making the market more effective at determining the right royalty fee for each patent.
The proposed new scheme for the patent institution is not meant to solve every problem. For example, it will not significantly change the dilemma we currently face with pharmaceutical patents. It only hints that when individuals with higher morality hold such patents, they can be more easily persuaded to give up certain profits for the greater good; there is still no enforcement rule on the patent institution's side to promote health. But solving health problems in poor countries should not be a burden on the patent institution, which cannot properly, and should not be expected to, incentivize researchers to solve health problems. The view that health problems in poor countries are the fault of our current patent system is flawed. Patent suppression aside, the current patent scheme rewards inventors in proportion to the sales profit made on their inventions. If these incentives are misplaced, that is a fundamental problem with our current capitalist social structure, in which we reward people only with money.
The proposed scheme may not work well with gene patents either, because it assumes that there are alternative inventions solving similar problems (WCDMA vs. CDMA2000, LCD TVs vs. plasma TVs, etc.). Due to the unique encoding of a gene, this is less likely to be the case for gene patents. However, lacking direct application, current gene patents will not affect the new scheme much, since the proposed scheme focuses on application rather than protection. As for an invention that solves a problem with no alternatives yet available, the scheme permits a higher royalty fee. But once it has been licensed, others will be able to obtain licenses indiscriminately (by the non-excluding rule). As for the inventor, he/she cannot profit from the invention unless it gets licensed, and thus lacks any incentive to keep the invention from the public. In fact, the invention cannot even be protected from reproduction once the patent application (with its specification) is filed: as soon as someone implements the invention from the publicly available specification, the two parties are forced into the negotiation stage. Under this new scheme, a truly secret creation can only be protected as a trade secret, which is more appropriate than passive patent protection.
Conclusion
In this paper, I proposed some minor modifications to current patent institution practice. The three modifications (assigning patents to a “natural person”, non-transferability, and non-excluding patent rights) all aim to make patented inventions available to the general public. Small as these modifications may seem, the discussion in this paper suggests they would have a profound impact on the structure around our patent system as a whole. They will help prevent patent suppression, eliminate passive patent protection, enable a more efficient royalty negotiation process, and reward original inventors more promptly.
After the Three Mile Island (TMI) accident and the more recent Fukushima disaster, nuclear energy's viability as a clean energy alternative has been questioned. Germany planned to phase out its nuclear power plants starting in 2008, while others, especially developing countries, still see nuclear energy as a modern and cheap alternative compared to wind or solar power. Brazil still centers its renewable energy plan on nuclear power, and China, despite recent controversies around the Fukushima disaster, has quite a few nuclear construction projects under way. Although the Chernobyl, TMI and Fukushima disasters highlight weaknesses in the safety measures of some nuclear power plants, safety can be improved. The first traffic light prototype killed a policeman, well-designed twin towers can be struck down, and an earthquake can shake a nuclear power plant apart. These accidents, though worrying, are hardly a showstopper for nuclear power in general. However, the problem of nuclear waste, underplayed for many years, may well be the kill switch for nuclear power installation. This survey intends to summarize recent developments in nuclear reactors and, hopefully, to provide insight into the possible future of nuclear energy development with regard to some popular concerns.
Background
The nuclear waste issue was downplayed for a long time. When the first commercial nuclear power plants were installed, manufacturers assured operators that they would collect and reprocess the nuclear waste so that it could be used again. Although the technology had not yet been developed, they expected that within about 10 years these wastes could be properly handled (through accelerated radioactive decay or reprocessing). Soon after the first installations, to encourage the use of nuclear power, Congress passed a law ensuring that nuclear waste would be properly handled by the federal government. But after more than 30 years, there is still no viable way to reprocess these highly radioactive wastes, and the federal government remains bound by that law to help nuclear power plants handle the waste they have collected on-site over those 30 years.
The Bush administration, after years of consideration, proposed using Yucca Mountain as a permanent burial site for nuclear waste, one that in theory would last for 10,000 years. Although disposal is always an option, many of the details involved make the proposal much less desirable. The criticisms mainly focus on transportation safety and on the plan to seal off the site forever after disposal, with no monitoring in place. Because of these concerns, the Obama administration suspended the project indefinitely.
If newer nuclear plants could produce far less nuclear waste, or nearly none at all, the disposal solution might become viable again. After all, the nuclear waste we would be sealing off is finite in amount, and we could stop worrying once it was buried.
In-Operation New Types
In this section, I will investigate several nuclear reactor types that are based on new technology and are currently operational.
The pressurised heavy water reactor (PHWR) is one of the most widely used commercial nuclear reactor types to date. A PHWR is categorized by its cooling system design and the coolant within it. It uses the same mechanism as a pressurised water reactor (PWR): a pressurised loop transports heat out of the core chamber. Since the coolant in the loop is in direct contact with the nuclear fuel in the chamber, it becomes contaminated. The coolant then heats a steam generator, which ultimately generates electric power. Throughout this process, the coolant (heavy water) is kept inside a closed loop to avoid any radioactive contamination of the steam generator. The pressurised loop also raises the boiling point of the coolant enough that the system can operate at a higher temperature and therefore more energy-efficiently. The “trick” of using heavy water is that heavy water absorbs far fewer neutrons than light water (normal water), so natural uranium can be fed into the reactor without enrichment.
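The link between operating temperature and efficiency can be made concrete with the ideal Carnot limit. This is only a theoretical upper bound, and the temperatures below are illustrative assumptions, not figures for any specific plant: at atmospheric pressure the loop could not exceed 100 °C, while a pressurised loop can keep water liquid at roughly 300 °C.

```python
def carnot_efficiency(t_hot_c, t_cold_c):
    """Ideal Carnot efficiency given hot/cold reservoir temperatures in Celsius."""
    t_hot = t_hot_c + 273.15   # convert to Kelvin
    t_cold = t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

# An unpressurised water loop tops out at 100 C; pressurising the loop lets
# the coolant stay liquid at ~300 C, roughly doubling the efficiency ceiling.
low = carnot_efficiency(100, 25)
high = carnot_efficiency(300, 25)
print(f"100 C loop: {low:.0%}  vs  300 C loop: {high:.0%}")
```

Real plant efficiencies sit well below the Carnot bound, but the ordering is the same: a hotter loop leaves more room for useful work per unit of reactor heat.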
Early designs such as the PHWR focused only on the economic aspect (i.e. how to build a cheap, commercially viable reactor) without any consideration of environmental impact. As a result, even though a PHWR can deliver energy at higher efficiency using cheap natural uranium, it also produces more plutonium and tritium than its light water counterpart.
Since the PHWR is still the main type of commercial nuclear reactor to date [1], the primary goals of nuclear production evidently remain safety (pressurised coolant-based reactors are considered the safest in operation) and efficiency (both energy efficiency and economic efficiency, e.g. the price of fuel). It is clear that by-product processing was not a central concern in the past because “it can be solved with technological advancement”.
In-Construction New Types
In this section, I will investigate several nuclear reactor types that incorporate new technological developments and are currently under construction.
The fast breeder reactor (FBR) is a kind of reactor that, in theory, can “refuel” itself. Many FBR prototypes were built for research purposes; some were built and operated but later shut down. [2] In recent years the FBR has regained traction, and some are currently under construction: the U.S., China, Japan, and India each have several FBR construction or prototyping projects.
An FBR differs from an ordinary nuclear reactor mainly in that it uses fast neutrons for its reaction. Fast neutrons produce far more neutrons per fission, which in turn can increase the Pu/U ratio needed to sustain the chain reaction. Thus, a much higher breeding ratio (the ratio of fissile atoms created to fissile atoms consumed) can be obtained with an FBR. The hope for the FBR is that after the initial fuel load, it can later be fed natural or even depleted uranium. The hope of exploiting all the energy of fission with FBRs faded when the price of mined uranium dropped and uranium enrichment became commercially viable. The FBR has become interesting again in recent years because it turns out that the FBR process produces much less plutonium and far fewer minor actinides (both main components of nuclear waste). Given the heated debate around nuclear waste, researchers hope that, based on the FBR, they can construct a reactor that produces much less long-half-life nuclear waste.
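A toy model shows why a breeding ratio above 1 is the defining property: every unit of fissile material burned leaves behind more than one unit, so the inventory grows instead of depleting. The function and all the numbers below are invented for illustration and do not describe any cited design.

```python
def fissile_inventory(initial, breeding_ratio, burned_per_cycle, cycles):
    """Track fissile inventory when each unit burned breeds
    `breeding_ratio` units of new fissile material."""
    inventory = initial
    for _ in range(cycles):
        burned = min(burned_per_cycle, inventory)
        inventory += burned * (breeding_ratio - 1.0)  # net gain (or loss) per cycle
    return inventory

# A conventional reactor (ratio < 1) steadily depletes its fuel; a breeder
# (ratio > 1) ends each cycle with more fissile material than it started with.
print(fissile_inventory(100.0, 0.6, 10.0, 5))  # depletes toward zero
print(fissile_inventory(100.0, 1.2, 10.0, 5))  # grows each cycle
```

In a breeder the “extra” fissile atoms come from fertile material such as U-238, which is why natural or depleted uranium can serve as later feed.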
Proposed New Types
Some nuclear power reactor types show a promising future and were considered in the Generation IV International Forum [3], but for the time being they remain on paper or exist only as research projects.
The Integral Fast Reactor (IFR) is in principle much like the FBR discussed above. Owing to the FBR's ability to “burn” long-lasting nuclear waste, it has been considered for integration with an on-site electrowinning fuel-reprocessing unit, resulting in the IFR. The electrowinning fuel-reprocessing unit could potentially recycle all the transuranics and uranium through electroplating, leaving only short-half-life materials in the waste. To address safety concerns, the IFR also deploys passive safety measures in the reaction chamber. The IFR design promises high efficiency (99.5% in theory) with minimal “safe” (half-life less than 20 years) by-products.
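A back-of-the-envelope sketch illustrates why a high recovery fraction matters so much. Only the 99.5% figure comes from the text; the per-pass burnup fraction, the pass count, and the model itself are my own simplifying assumptions (each pass fissions part of the transuranic inventory, reprocessing recovers 99.5% of the remainder, and the rest leaks to the waste stream).

```python
def long_lived_waste_fraction(burn_per_pass, recovery, passes):
    """Fraction of the initial transuranics that ends up in long-lived waste
    when each pass fissions `burn_per_pass` of the inventory and reprocessing
    recovers `recovery` of what remains for the next pass."""
    inventory, waste = 1.0, 0.0
    for _ in range(passes):
        remaining = inventory * (1.0 - burn_per_pass)  # survives fission
        waste += remaining * (1.0 - recovery)          # reprocessing losses
        inventory = remaining * recovery               # recycled back into the core
    return waste

# With 99.5% recovery per pass, only about 2% of the original transuranics
# ever leak into the long-lived waste stream; the rest is eventually fissioned.
print(long_lived_waste_fraction(0.2, 0.995, 50))
```

Under these assumptions the long-lived waste stream shrinks by roughly two orders of magnitude compared to discarding spent fuel after a single pass, which is the intuition behind the IFR's “burn the waste” pitch.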
The Pebble Bed Reactor (PBR), unlike most deployed nuclear power reactors, uses a gas cooling system that in principle enables it to operate at very high temperature. The high temperature makes the reactor more efficient at energy production (a higher thermal-to-mechanical energy transfer ratio). The gas cooling system also avoids many of the complexities introduced by a traditional water cooling system (the double-loop design, etc.). The PBR design promises a small, compact and much more efficient reactor that could even be deployed in homes or on vehicles.
Conclusion
This survey of nuclear reactor types shows that a reactor with our desired properties (low nuclear waste production and high efficiency) is achievable. Unfortunately, the early decades of research (the 1960s through the 1980s) were driven by economic efficiency and left environmental considerations completely out of the equation. Nuclear power is our most productive new energy source to date; if only we could incentivize research in the right direction, the problems of nuclear waste and safety could be solved in due time.
[1] International Atomic Energy Agency, Nuclear Power Reactors in the World (Reference Data), 2006
[2] Superphénix in France, SNR-300 in Germany, and Unit 1 of the Enrico Fermi Nuclear Generating Station, U.S.
[3] GIF, http://www.gen-4.org/