AI Superpowers — Kai-Fu Lee
Kai-Fu Lee did his PhD in AI at CMU in the 80s, worked at the forefront of AI research, developed pioneering speech recognition technology at Apple, founded Microsoft Research Asia (the top research institute in Asia), headed Google China, and now runs Sinovation Ventures, a Chinese VC. In AI Superpowers, he tackles three big questions: (1) how China will be the biggest beneficiary of AI and stand with the US as one of the world's only two AI superpowers, (2) how AI will force us to rediscover what it means to be human, and (3) how AI has the potential to create widespread social disorder. Few people are better qualified to answer these questions, and Lee does not let the reader down. As one might expect from a computer scientist, the book is methodical in its structure and precise — and, most importantly, prescient — in its analysis.
AI and China
Lee calls the 2016 Go match between Lee Sedol and DeepMind's AlphaGo, which more than 280 million Chinese viewers watched, China's "Sputnik Moment." Understanding what he means impresses upon the reader how universal China's passion for AI is. It isn't just the government attempting to eke out more economic growth, or entrepreneurs trying to create the next mega unicorn, or the public eager to experience the latest tech; it's all three at once. But passion isn't why China is going to benefit most from AI; the reasons are a little more nuanced than that.
Deep learning, the strain of AI currently en vogue, requires lots of data, strong algorithms, and a concrete goal. Hinton's landmark 2006 paper was a real breakthrough in AI; Lee goes so far as to say that since that paper he hasn't "seen anything that represents a similar sea change in machine intelligence." Because breakthroughs are this sporadic, the biggest threat to China's AI strategy is that the next one is developed "within a hermetically sealed corporate environment" that isn't Chinese, e.g. Google, which currently employs more than half of the top 100 AI researchers.
Barring such an event, the fact that strong algorithms require good — but not necessarily the best/elite — engineers puts the ball firmly in China's court. Additionally, AI research tends to be open, and experiments in the field are easily replicable (compared to, say, experiments in the physical sciences that might require a particle collider). Improvements arrive almost weekly, so researchers are incentivized to publish as soon as possible for fear of their results being superseded. All of which is to say that China can benefit from discoveries made in the US, but soon the situation may be reversed: as of 2016, Tsinghua was leading Stanford in AI citations, a trend that will only strengthen as China's burgeoning crop of AI scientists hits the market.
More important is data, and China has that in spades. "Given much more data, an algorithm designed by a handful of mid-level AI engineers usually outperforms one designed by a world-class deep-learning researcher. Having a monopoly on the best and brightest just isn't what it used to be." China's data advantage comes not only from an encouraging regulatory environment that cares little for privacy, but also from its unique technological journey. In brief, China leapfrogged desktop computers for smartphones and never developed huge incumbent industries with isolated data islands, such as America's credit card or insurance industries. Consequently, China's emerging online-to-offline (O2O) industry (which includes WeChat activity and bike sharing) and its predilection for merging disparate databases, often including private information, will generate datasets with unprecedented depth and breadth. Advantage, China.
Lee frames these phenomena as transitions between AI ages: from discovery to implementation, and from expertise to data. But in the chapter I found most interesting, he discusses another structural advantage China holds: its entrepreneurs. Lee starts with America's liberal arts education and environment of abundance, which, he argues, lead to "lofty thinking, to envisioning elegant solutions to abstract problems" and to mission-driven startups, as opposed to the market-driven kind built in China by entrepreneurs raised with a scarcity mentality who face no moral quandaries over imitation. And lean methodology? It can be a real burden for mission-driven (American) startups.
China's age of digital imitation is over, Lee tells us. Dismissing Weibo, Didi, and many others as American copycats "relying on government protection in order to succeed blinds analysts to world-class innovation." The entrepreneur chinesus evolved in an era of ruthless competition. Take Wang Xing. He copied Friendster in 2003, Facebook in 2005 (Xiaonei, later acquired by Renren), Twitter in 2007 (shut down over sensitive content), and Groupon in 2010. Copying Groupon's business model was easy, and hundreds of other discount aggregators emerged in China, but Wang Xing's venture, Meituan-Dianping (which you may have heard of), emerged victorious thanks to a willingness to "go heavy." Meituan didn't just build a platform; it also recruited and managed a fleet of deliverymen and vehicles, and controlled payments, among other things. Silicon Valley startups, in contrast, prefer to go "light," building the IT but leaving the messy on-the-ground logistics for others to handle. Compare Yelp and Dianping, Airbnb and Tujia (which manages rental properties itself), Uber and Didi (which purchased and operates gas stations).
Chinese companies did the legwork and now stand to collect the dividends of their economies of scale. They may have burned piles of cash on marketing and promotions along the way (e.g. Tencent's taxi subsidies), but those expenditures only accelerated O2O and mobile payment adoption, furthering China's data advantage.
On government: China's centralized, top-down approach breeds competition, as regions and cities strive to outperform each other to earn performance incentives. Case in point: after a State Council directive to advance "mass entrepreneurship and innovation" (a phrase coined by Li Keqiang at Davos in 2014), a flood of subsidies created 6,600 new startup incubators, quadrupling the pre-existing total. Further, China's lack of government accountability lends itself to government-led technological bets more readily than in the US, where the political fallout from failure (e.g. Solyndra) precludes such bets.
An AI Superpower, as Lee defines it, requires abundant data, tenacious entrepreneurs, well-trained (but not necessarily world-class) scientists, and a supportive policy environment. China beats or equals the US on all of these fronts.
Lee goes on to describe the "four waves of AI": internet, business, perception, and autonomous AI. Internet AI is primarily about algorithms as recommendation engines (Google Search, Toutiao), where China's greater supply of data will give it the future advantage. Business AI is about using industry data to create meaningful business outcomes, where the US' culture of consulting and of structuring data to well-defined industry standards gives it a significant edge. Perception AI blends the offline and online worlds through facial recognition, speech recognition, and the like. And autonomous AI is self-explanatory. China's approach to autonomous AI, unlike the US', doesn't take the state of today's roads for granted. In Zhejiang, a superhighway is being developed with integrated sensors to increase speeds and charging stations to continuously charge vehicles (link). Cities too: Xiong'an will be the first city designed from the ground up with autonomous vehicles in mind (link). So Lee predicts China will catch up to the US quickly.
Lee also offers some numeric predictions along these lines.
Rediscovering what it means to be human
Throughout his life, Lee optimized his activities to maximize his impact. Then he was diagnosed with stage 4 lymphoma. In an interesting aside, he discusses how cancer stages are categories created to ease learning for medical students, and how, after reading a paper that ran a regression on survival data and evaluating his own specific symptoms, he discovered that his survival rate was more likely 90% than the headline 50%. Anyway, he retreats to a Buddhist monastery, where he learns that humans aren't meant to quantify and optimize everything: "it eats away at what's really inside of us and what exists between us. It suffocates the one thing that gives us true life: love."
Lee's touchy-feely stuff is well written and surprisingly not awkward to read. It's also extremely salient: as AI reshapes the world as we know it, if we continue to evaluate life by work and treat ourselves as "variables in a grand optimization algorithm," then we're doomed to fail (related link). As Lee points out, AI-induced unemployment will be psychologically more devastating than any human-replacing technology that came before it, e.g. the loom. Those replaced by AI will be forced to watch as software and robots first chip away at niche areas within their field (e.g. automated local sports reporting, link), and then outperform them at the skills they've spent their whole lives developing and mastering.
The solution is love, Lee writes. Which is intuitive, because love is the one thing robots will never be able to understand. But actually implementing Lee's advice to be more loving in our lives? That's something the rest of us will find much harder to internalize.
AI isn't like the general purpose technologies (GPTs; there have been three: the steam engine, electricity, and ICT) that came before it. Extrapolating from three data points is methodologically fallacious to begin with, which makes basing AI's effects on the unemployment fallout from previous GPTs useless. More importantly, AI will be implemented faster than its predecessors for three reasons: it's digital, so it can spread (theoretically) at the speed of light; a thriving global VC ecosystem exists to fund it; and China (i.e. 20% of humanity) will be part of the story for the first time.
Past estimates of AI-induced job losses relied on expert opinion about which occupations, or which tasks within an occupation, were most at risk of displacement. What these miss, however, is what Lee calls ground-up disruption (as opposed to one-to-one replacement, e.g. a robotic warehouse shelver): products that satisfy the fundamental human need driving the industry itself, e.g. Smart Finance, a lender with no human loan officers. Incorporating this, Lee concludes that 40-50% of jobs in the US will be technically automatable within 10-20 years. Of course, social friction and other sources of inertia will slow the actual rate of job loss. Lee also offers a useful quadrant graph for evaluating which jobs are most at risk.
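The quadrant logic can be sketched in a few lines of code. The quadrant names below are the ones Lee uses in the book; the two scored axes (degree of social interaction, and how much the work reduces to routine optimization) and the 0.5 cut-offs are my illustrative paraphrase, not Lee's exact formulation, and the example jobs and scores are hypothetical.

```python
def risk_quadrant(social: float, optimization: float) -> str:
    """Classify a job into one of the four risk quadrants Lee sketches.

    social:       0-1, how much the job depends on human interaction and empathy
    optimization: 0-1, how much the core task reduces to routine optimization
                  (as opposed to creativity or strategy)

    Quadrant names follow the book; the 0.5 cut-offs are illustrative.
    """
    if optimization >= 0.5:
        # Routine, optimizable work: AI can take over the core task.
        return "Danger Zone" if social < 0.5 else "Human Veneer"
    else:
        # Creative or strategic work resists automation longer.
        return "Slow Creep" if social < 0.5 else "Safe Zone"


# Hypothetical jobs with (social, optimization) scores for illustration.
jobs = {
    "telemarketer":     (0.3, 0.9),  # scripted and easily optimized
    "bartender":        (0.9, 0.7),  # routine core, human warmth on top
    "graphic designer": (0.3, 0.2),  # creative, low-social
    "psychiatrist":     (0.9, 0.1),  # high-social, non-routine
}
for job, (s, o) in jobs.items():
    print(f"{job:16s} -> {risk_quadrant(s, o)}")
```

Jobs in the "Human Veneer" quadrant are the subtle case: the optimizable core gets automated, but a human stays on as the social interface.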
In addition to collar-color-oblivious job losses (Lee notes AI hits white-collar work as hard as blue), the positive feedback loops inherent to AI businesses will create AI monopolies that US antitrust law will find hard to dismantle, since plaintiffs must prove consumer harm while AI monopolists will likely deliver better services at lower prices. So the poor will get poorer and the rich will get richer.
Meanwhile, the US and China stand to capture up to 70% of the economic gains from AI, leading to further divergence between the world's AI haves and have-nots.
Silicon Valley's proposed solutions fall into three main categories: retraining workers, reducing the work week, and redistributing income. Lee (perhaps more realistically than cynically) suspects that SV-led proposals such as a UBI are borne of self-interest: insurance against potential mob violence aimed at the AI billionaires who instigated the disruption.
Ultimately, though, Lee writes that it may not be so bad. New jobs that emphasize human empathy and use AI to extend our abilities will arise, such as what he calls the compassionate caregiver: a medical professional combining the skills of a nurse, technician, social worker, and psychologist, involved not only in operating diagnostic tools but also in communicating with and emotionally supporting patients. It sounds like a fairly niche role, though, and what's most worrying is this: if that is the silver lining on the 20-30 year timeframe, what happens on the 100+ year timeframe?
Lee suggests other measures as well, such as a social investment stipend that rewards socially beneficial activities in categories like care, service, and education. But defining what's socially beneficial is itself an enigmatic task, prone to malicious distortion.