Useless or Creepy Tech Is Bad for Innovation
(Bloomberg View) — The story about a $400 internet-connected juicer that turned out to be superfluous, because its producer's proprietary juice packs could simply be squeezed by hand, was destined to go viral and did, with more than 800,000 hits on the Bloomberg website and strong reactions in other publications. It's evidence of a growing resistance to Silicon Valley-style innovation. Countering this resistance is not just a marketing problem for the industry; it's a strategic and perhaps even existential one.
It's easy to laugh at a juice squeezer produced by a relatively small startup, whose real competence is in making fancy fruit-and-vegetable packets. It's not really problem-solving tech; it's a money-raising gimmick. The laugh is on the investors who fell for the concept of a high-tech device whose price is an entry fee for a subscription service (which delivers the juice packs).
Disparaging something proposed by Elon Musk or Mark Zuckerberg isn't as risk-free for a tech pundit, if only because they have armies of fans that rip into critics. And yet these innovation gurus, too, are increasingly proposing gadgets offering solutions to problems that are sometimes imaginary, often unimportant and in some cases are features, rather than bugs, of human existence.
The media are generally enthusiastic about fully electric cars, though, in terms of consumer qualities, they are at this point inferior to gasoline-powered ones, with shorter driving ranges and long charging times. There's a lot of excitement about self-driving vehicles, though they cannot be safely used on roads without clear lane markings, in poor visibility conditions and under a myriad of other common circumstances. The assumption is that all these obstacles will quickly be overcome; throw enough money at engineers, and they'll get there somehow.
Recent promises from Musk and Zuckerberg on brain-computer interfaces are the latest example. Facebook promises to turn thoughts into typed text at a speed of 100 words a minute by scanning the brain without surgical intervention, though researchers have so far achieved only eight words a minute, and only with the help of an implant. Musk's new company, Neuralink, plans to use electrodes implanted in the brain to exchange information between human and computer. Both promises carry unrealistic time frames, which is fine because optimism about cutting-edge science rarely gets punished, even when it's an indirect method of boosting a tech company's stock price.
The problem runs deeper, though. Musk, a slicker marketer than Zuckerberg, talks about initially releasing a technology that would help people with brain damage — from strokes, for example. He's aware that twice as many Americans are worried as enthusiastic about brain enhancement through implants. So he's pointing to a real problem that can be solved using this decidedly creepy technology.
Facebook is talking about "sharing thoughts," hitting precisely on the most worrying aspects of the nascent technology: Who wants to share uncensored thoughts, especially with a company that collects information about its users without explaining to them exactly what is harvested? Who wants to give a machine built by a corporate entity access to one's brain?
Trust or desperation is required to give up control, and while the tech under development today requires that we allow devices and software to control more and more of our existence, there is a natural limit to how much people are willing to trust the sellers of these products. Breaking through it may do society a disservice if the makers aren't, in fact, particularly trustworthy. Revelations about Uber's surreptitious use of its software's capabilities are a good illustration of that predicament. At the same time, few people are desperate enough to actually need the technological enhancements. Most people can drive, type or adjust a thermostat by hand.
Giving up old habits and tolerating endless beta versions and learning curves simply for the sake of adopting the new thing is a questionable proposition for most consumers. Shaming them into adoption will eventually stop working, because there's a healthy element to not admitting one's routines are wrong. The ability to form these routines is often what gets us through life. Again, real need is often the only good excuse for adopting innovation.
A robotic prosthetic hand that can help tie a shoelace is a godsend to a one-armed person. A robot that can actually put the lace in a running shoe — you know, string the hard plastic aglet through the little holes so that the lace can then be tied — would remove a major headache for athletic shoe makers such as Adidas, which currently hire workers to do it. Such a machine would literally improve the chances of "reshoring" — the return of industrial production to Western nations where few people are willing to string aglets through holes for minimum wage.
Tasks that desperately need automation and tech solutions are narrow. Automated driving works fine in mining operations and ports. Brain electrodes can enable a completely paralyzed person to communicate. But the Silicon Valley mode of operation is to hype up these technologies to the general public, which doesn't really need most of them. Google Glass was one example: While the tech can be useful in business environments and derivatives have other interesting applications, it turned out to be socially unacceptable.
The more outlandish the promises, the less investors think of perfectly workable but mundane concepts and the more excited they get about mass applications. Had it not been for this effect, the promises would have been quieter and the innovation would have concentrated on the narrow uses for which it may be desperately needed, or at least reasonably applicable. And less investment would be wasted on useless gadgets like a connected juice squeezer or a mind-reading Facebook feature.
Even if there's little hard data about it, the pushback is beginning. Read about the juice squeezer and feel the almost tangible irritation. Thinking smaller and applying resources and energy to narrow, specific problems could be a good way to build trust before it disappears entirely.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.