Competing Against Luck; Clayton M. Christensen, Taddy Hall, Karen Dillon, and David S. Duncan; HarperCollins, 2016, 262 pp.
For most companies, “innovation is still painfully hit or miss,” write Clay Christensen and his co-authors in their introduction to the Harvard Business School professor’s ninth book since his groundbreaking 1997 The Innovator’s Dilemma.
Elaborating, they assert, “Companies are spending exponentially more to achieve only modest incremental innovations while completely missing the mark on the breakthrough innovations critical to long-term, sustainable growth.”
What’s gone wrong, they continue, is that “the masses and masses of data that companies accumulate are not organized in a way that enables them to reliably predict which ideas will succeed.” What they insist companies need instead is a means of learning why customers make the choices they do.
That leads to the book’s core: the Theory of Jobs To Be Done, summed up in the question “What job did you hire that product to do?” Understand your customer’s jobs, they explain, and “you’ll be competing against luck when others are still counting on it.”
Drawing on the authors’ knowledge of many business and institutional successes over the past 20 years, Competing Against Luck explains how to apply the theory to transforming a business model: discovering what jobs your customers are seeking to fill, why they “hire and fire” products, and how to get your own product hired for the job.
Chapter 9, in the third section on structuring your organization, shows how Jobs Theory works at organizations as diverse as the Mayo Clinic, the Consumer Financial Protection Bureau, OnStar, Intuit, and Southern New Hampshire University.
The book concludes that it’s time to topple “the tired paradigm” that innovation is about playing the odds. “Leave luck to the other guys.”
A Field Guide To Lies: Critical Thinking in the Information Age; Daniel J. Levitin; Dutton, 2016, 292 pp.
“There are many ways that we can be led astray by fast-talking, loose-writing purveyors of information,” writes neuroscientist and musician Daniel Levitin in his new book. Levitin is dean of social sciences at the Minerva Schools and a faculty member at UC Berkeley’s Haas School of Business.
Levitin’s Field Guide aims to help readers draw the right conclusions from the multitude of misleading “facts” in the surrounding avalanche of information. He does this by dividing the first two of the book’s three parts into the numerical and verbal ways in which those “fast-talking purveyors” can lead us astray.
“Evaluating Numbers” includes mishandled statistics and graphs; “Evaluating Words” includes faulty arguments, pseudo-facts, and dubious experts. In both parts Levitin provides steps for evaluating what we hear and read.
Part 3, “Evaluating the World,” reviews the science that can help to separate the true from the false, including Bayesian reasoning. Four case studies, one from the world of magic, demonstrate the application of “careful, systematic reasoning.”
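The kind of Bayesian reasoning the book discusses can be sketched with the classic base-rate calculation (this worked example is a generic illustration, not one taken from Levitin’s text): a highly accurate test for a rare condition still produces mostly false positives, a result that surprises most readers until they run the numbers.

```python
# Bayes' rule: update a prior belief using the reliability of new evidence.
# Generic base-rate illustration (not drawn from Levitin's book).

def posterior(prior, true_positive_rate, false_positive_rate):
    """P(condition | positive test) computed via Bayes' rule."""
    # Total probability of testing positive, with or without the condition:
    evidence = true_positive_rate * prior + false_positive_rate * (1 - prior)
    return true_positive_rate * prior / evidence

# A 1-in-1000 base rate with a 99%-sensitive test and a 1% false-positive rate:
p = posterior(prior=0.001, true_positive_rate=0.99, false_positive_rate=0.01)
print(f"P(condition | positive test) = {p:.3f}")  # roughly 0.090
```

Despite the test’s 99 percent accuracy, a positive result implies only about a 9 percent chance of having the condition, because true positives are swamped by false positives from the vastly larger healthy population.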
Levitin concludes by stating that “Following the steps in this Field Guide to evaluate the myriad claims we encounter is how we can stay two steps ahead of the millions of lies that are out there on the Web, and ahead of the liars and just plain incompetents who perpetrate them.”
Some readers might feel that reading Field Guide is “something akin to eating your vegetables,” Samuel Arbesman, Lux Capital scientist in residence, wrote in his review for The Wall Street Journal (Sept. 17-18, 2016). “But I’d recommend this vegetable eating—it will help you consume healthy information more regularly than the misinformation that is all around us.”
“Big Data and R&D Management”; Greg Holden; Research-Technology Management, Sept.-Oct. 2016, pp. 22-26.
Findings of an Industrial Research Institute Big Data research working group on digitalization of R&D are reported by Greg Holden, a staff member of the IRI, which publishes Research-Technology Management.
The report, Big Data and the Future of R&D Management: A Primer on Big Data for Innovation by Jeffrey Alexander (SRI International), Mike Blackburn (Cargill) and David Legan (previously Kraft Foods), is intended as a primer for R&D practitioners.
The Primer begins by explaining Big Data as “a confluence of a whole set of trends in computing, information processing, computational methods, and analytical tools.” It continues with sections on “Exploiting the Value of Big Data: Key Dimensions,” “Implications for R&D Management” (including how Big Data might disrupt R&D), and “Conclusions and Future Work.”
“Why Visionary CEOs Never Have Visionary Successors”; Steve Blank; Harvard Business Review, Oct. 20, 2016, hbr.org; and “Why Tim Cook is Steve Ballmer and Why He Still Has His Job at Apple”; www.steveblank.com, Oct. 24, 2016.
There’s an “eerie parallel” between Bill Gates, Steve Jobs, and their respective successors at Microsoft and Apple, writes Steve Blank, the Stanford University adjunct professor who pioneered the lean startup methodology.
On hbr.org and more extensively on his blog, Blank asserts that when visionary founders depart, they are replaced by one of the skilled operating executives they had originally put in place.
One of the first things these new CEOs do is “to get rid of the chaos and turbulence in the organization” in favor of the stability, process and repeatable execution they value so highly.
“That’s great for predictability, but it often starts a creative death spiral,” says Blank. And that’s why he believes “the world is about to disrupt Apple in the same way that Microsoft under Ballmer faced disruption.”
One of the lessons he draws from this: “As soon as the market, business model, technology shifts, these execution CEOs are ill-equipped to deal with the change—the result is a company obsoleted by more agile innovators and left to live off momentum in its twilight years.”
“When Big Firms Are Most Likely to Innovate”; J.P. Eggers and Aseem Kaul; Harvard Business Review, Oct. 19, 2016, hbr.org; and “Motivation and Ability in Incumbents’ Pursuit of Radical Technologies: The Effect of Performance Above and Below Aspiration in Multi-Technology Firms”; J.P. Eggers and Aseem Kaul, July 21, 2016; available at SSRN: https://ssrn.com/abstract=2812715 or http://dx.doi.org/10.2139/ssrn.2812715.
Professors Eggers and Kaul, of NYU’s Stern School of Business and the University of Minnesota’s Carlson School of Management, respectively, examined the different effects on incumbent firms of their motivation and their ability to develop radical inventions (intentionally not innovations).
Studying a long-term dataset of organizational patenting behavior from 1980-1997, they concluded that “firms are most likely to introduce ‘radical’ patents when their prior technological performance has been just below their aspirations, but such patents are most successful when introduced by firms performing substantially above their aspirations.”
What this means, they write in their HBR explanation of the study, is that “stronger firms need to fight against the biases that sap their motivation to be innovative,” often by building a culture that encourages experimentation and even failure.
Large, established firms, they conclude, would have more success pursuing radical technologies “if they went after such technologies from positions of existing strength, instead of only trying for the next big thing in times (or in areas) where they are starting to fall behind.”
Preparing for the Future of Artificial Intelligence; Executive Office of the President, National Science and Technology Council Committee on Technology, Oct. 2016, 48pp.
This report surveys the current state of artificial intelligence, its existing and potential applications, and the questions that progress in AI raises for society and public policy. The report also recommends 23 further actions by Federal agencies and other actors. Among them:
- Private and public institutions are encouraged to examine whether and how they can responsibly leverage AI and machine learning in ways that will benefit society.
- Industry should work with government to keep it updated on the general progress of AI in industry, including the likelihood of milestones being reached soon.
- The Federal government should prioritize basic and long-term AI research. The Nation as a whole would benefit from a steady increase in Federal and private-sector AI R&D, with a particular emphasis on basic research and long-term, high-risk research initiatives.
- Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.
- The U.S. Government should complete the development of a single, government-wide policy, consistent with international humanitarian law, on autonomous and semi-autonomous weapons.
The report concludes that “AI can be a major driver of economic growth and social progress, if industry, civil society, government, and the public work together to support development of the technology, with thoughtful attention to its potential and to managing its risks.”
The National Artificial Intelligence Research and Development Strategic Plan; National Science and Technology Council, Networking and Information Technology Research and Development Subcommittee, Oct. 2016, 40pp.
This companion to Preparing for the Future lays out a strategic plan for Federally-funded research and development in AI.
According to subcommittee co-chairs Bryan Biegel and James Kurose, the Plan “defines a high-level framework that can be used to identify scientific and technological needs in AI, and to track the progress and maximize the impact of R&D investments to fill those needs.”
“It also establishes priorities for Federally-funded R&D in AI, looking beyond near-term AI capabilities toward long-term transformational impacts of AI on society and the world.”
The Plan identifies seven priorities for Federally-funded AI research:
- Make long-term investments in AI research that will enable the United States to remain an AI world leader.
- Develop effective methods for human-AI collaboration.
- Understand and address AI’s ethical, legal and societal implications.
- Ensure the safety and security of AI systems.
- Develop shared public datasets and environments for AI training and testing.
- Measure and evaluate AI technologies through standards and benchmarks.
- Understand national AI R&D workforce needs better.
“Artificial Intelligence and Life in 2030” (AI100); Stanford University, Sept. 2016; https://ai100.stanford.edu.
This “One Hundred Year Study on Artificial Intelligence,” launched in 2014, is a long-term investigation of the field “and its influences on people, their communities, and society.” The Preface to this report (first in a forthcoming series) explains that the study panel “envisions the potential advances that lie ahead, and describes the technical and societal challenges and opportunities these advances raise, including in such arenas as ethics, economics, and the design of systems compatible with human cognition.”
Peter Stone, of the University of Texas at Austin, chaired the study panel. While the panel “found no cause for concern that AI is an imminent threat to humankind,” it acknowledges that “many of these developments will spur disruptions in how human labor is augmented or replaced by AI, creating new challenges for the economy and society more broadly.”
The report’s three sections cover AI’s nature and research trends; eight domains where AI is already having or is projected to have the greatest impact on a typical North American city: transportation, healthcare, education, low-resource communities, public safety and security, employment and workplace, home/service robots, and entertainment; and prospects and recommendations for AI public policy.