Friday, September 29, 2017

Thoughts on a Ouija board


As previously (and frequently) mentioned, I've been chipping away at a couple of essays about 21st century attitudes toward technology. The incredible spike in innovation of the late 19th and early 20th centuries plays a big role. Unfortunately, the more I dig in, the more I find new aspects to the subject.

I came across yet another when watching this Bob Chipman movie review of Ouija. My general rule for movie reviews and criticism (Chipman falls more in the latter but is also pretty good at the former) is to only check out writing on movies that I either have seen, or care so little about that they can't really be spoiled. This one fell in the second category.



Chipman is exceptionally good with historical and cultural context. He started this review with a brief historical overview of the popular board game, suggesting that the filmmakers could have made a more interesting and original film had they mined the actual history of the Ouija board rather than opting for something standard and derivative. What caught my ear was the fact that the Ouija board was first marketed in 1891 as an attempt to cash in on the spiritualism craze of the era.

Here's Linda Rodriguez McRobbie writing for the Smithsonian:

As spiritualism had grown in American culture, so too did frustration with how long it took to get any meaningful message out of the spirits, says Brandon Hodge, Spiritualism historian. Calling out the alphabet and waiting for a knock at the right letter, for example, was deeply boring. After all, rapid communication with breathing humans at far distances was a possibility—the telegraph had been around for decades—why shouldn’t spirits be as easy to reach? People were desperate for methods of communication that would be quicker—and while several entrepreneurs realized that, it was the Kennard Novelty Company that really nailed it.

The facts weren't exactly new to me, but somehow I had never thought about the peak of the spiritualism movement coinciding with the explosive scientific and technological advances of the era. I'd always tended to think of that form of spiritualism as quaint and old-fashioned, particularly when compared with the sci-fi infused New Age mysticism of today. Now I'm wondering if I got that exactly backwards.

Particularly in America, the period from around 1880 to 1910 was one of unprecedented technological change, reordering every aspect of society to a degree that hadn't been seen before and hasn't been seen since. It was also, not surprisingly, an era of wild speculation and fantasy. Most of HG Wells' best known scientific romances came from the 1890s. The idea that Mars harbored not only intelligent life but great civilizations had started gaining popularity a decade earlier.

Perhaps living in a time of impossible things makes people credulous; it might even be a form of adaptation. People not only accepted the incredible, they craved more. This gave rise to an army of metaphysical conmen exposed by the Seybert Commission in the 1880s. While it is always dangerous to generalize from outliers, it is certainly interesting that the greatest age of progress was also remarkable for producing dreamers and suckers.




Thursday, September 28, 2017

I suppose the short version is: "Never read any science coverage in New York magazine"

Let's see, since we started occasionally checking in with the science desk of New York magazine, we've seen questionable (often lapsing into irresponsible) treatment of autism, homeopathy, beds of healing crystals, magical heuristics and Gwyneth Paltrow's goop. [See here, here, and here] Now we can add manipulation of "energy" fields.

[As a general rule, I don't care much for the use of scare quotes, but what else can I do here?]

By now, pointing out bad science in a NYM article no longer justifies a post. Water is wet. To make this worth our time, we have to look at the specific ways that this piece exemplifies bad science journalism. In this case, Arianna Rebolini provides excellent examples of a number of genre defining traits.


1. The inexplicable encounter



Before reading something like this, one should probably put on an appropriately eerie piece of music, perhaps a soundtrack from Bernard Herrmann or (my personal favorite) Jerry Goldsmith. The inexplicable encounter is very much the stuff of movies. The seemingly rational journalist visits a guru or lies down on a bed of magical crystals and soon finds him or herself facing the mysterious, usually resulting in a quest for explanations:


Six months ago, when I walked into my first reiki appointment at a Santa Fe oxygen spa, I had only ever heard the word in passing, and I didn’t really know what to expect. The spa’s description spoke vaguely about healing and relaxation; I thought I’d paid for a massage. What I got was one of the strangest experiences of my life: an hour in which my practitioner waved his hands over me and blew smoke across my body, to which my body responded with warm tingling in my arms and hands, mysterious pressure on my chest, and uncontrollable tremors in my legs. It was unexpected, a little scary, and definitely not relaxing.


According to my practitioner, these sensations indicated an energy blockage being released; they were merely signs that the session was working. He told me my body was returning to its natural state, i.e. “nearly orgasmic” energy. But it didn’t feel orgasmic. It felt crazy. I tend to be a cautious believer, but even I couldn’t accept this as the cause and effect. I needed to know: what had happened to my body?

Side note: you have to admit there's a certain perfection to the "Santa Fe oxygen spa" detail.


2. "How can we say something is impossible when we understand so little?"

Advocates of pseudoscience frequently point out ongoing areas of uncertainty in medicine and other scientific research. The implicit and sometimes explicit message is that so-called scientists don't know everything about their own field so how can they question ours?


The National Center for Complementary and Integrative Health (NCCIH) lumps reiki, along with similar but distinct practices like therapeutic touch and Chinese qigong, within the category of “biofield therapy,” but any definition of “biofield” is maddeningly vague. The clearest description we have is from a 1992 conference, when the NIH defined it as “a massless field, not necessarily electromagnetic, that surrounds and permeates living bodies and affects the body”; however, the NIH has since backed away from that description. In an emailed statement, the NCCIH told me, “to our knowledge there is no current definition of biofield that has been determined by the NIH.” When I asked Miles for more clarity, she explained the NIH’s proposed term describes “the purported subtle vibrational field that surrounds and penetrates the human body,” adding that it’s still a working definition. To my understanding, reiki practice holds that it is the manipulation of this field which causes either physiological distress or allows self-healing. No scientific proof of the biofield exists.


According to [Pamela] Miles, this mystery around the biofield and, in turn, the underlying mechanism of reiki itself doesn’t negate the power of the practice. Indeed, medical professionals have a long history of using medicine before understanding its mechanism of action. Bayer released aspirin in 1897 and it became the most widely used drug in the world; we didn’t know how it worked until 1971. Scientists are still theorizing how anesthesia works. Ditto antidepressants. What’s more, Miles emphasizes, reiki, like meditation, is at its core a spiritual practice being used in medical intervention, and the mechanism of spiritual practice isn’t necessarily provable by scientific technique. But that shouldn’t have any bearing on reiki’s effectiveness, Miles argues.

Putting aside the part about not necessarily being provable by scientific technique and holding off on the meditation comparison until the next point, let's zero in on the part about using medicine before understanding its mechanism.


On the trivial level this is, of course, always true. There will invariably be unanswered questions about any complex process, particularly in the field of medicine, but when the system works properly, we have a general idea of how things work and, more importantly, strong evidence of a causal relationship. (And, no, placebo effects don't count. More on that later.)



3. There is an important difference between a hypothesis that is generally compatible with well-established principles and one that requires you to, for example, invent entirely new laws of physics.


We did talk about anesthesia here, but meditation may make for a more interesting example. This would generally fall in the alternative medicine category. We could go back and forth about its effectiveness and practicality for addressing various problems and conditions, but the broader assertion that thoughts, attitudes, and emotions could have direct (blood pressure) and indirect (quality of sleep) medical implications is not at all controversial. Therefore, the question is one of specifics. Does this special case work and if so does it suggest a productive line of treatment? This certainly puts meditation below anesthesia, which we know works and is highly effective in certain situations, but it also puts meditation above anything that relies on the existence of a "biofield."


4. When discussing fringy subjects, your main sources should not be true believers and interested parties.

Remember the Pamela Miles we mentioned earlier?

So far, the research cautions that reiki should be used in conjunction with, and never instead of, conventional treatments for conditions like pain, anxiety, or depression. But if reiki is to be used with conventional medicine, then there first needs to be clarity around what, precisely, reiki even is. Practitioners going rogue, blending multiple spiritual practices into something new and naming it reiki, muddies the already sparse data. One person who is working toward a singular definition of the practice for patients, practitioners, and medical professionals is Pamela Miles.


Miles, author of Reiki: A Comprehensive Guide, is the unofficial leader of the movement to legitimize reiki in the mainstream. Having practiced and researched reiki for over 31 years, Miles has published in multiple peer-reviewed medical journals, collaborated on NIH-funded medical research, taught reiki at medical schools, and spoken about reiki in the media. Though she doesn’t necessarily advocate for uniform regulation of the practice, she does emphasize the importance of continued research on its effectiveness, and for the education of those receiving it. When I described my experience to Miles over the phone, she wasn’t entirely convinced she’d call what happened “reiki” at all.



5. "Where there's the perception of the smell of smoke, there must be fire."

Testimonials and satisfied customers do provide a low level of evidence, enough to justify limited research, but not enough to stand on its own, particularly when both research and first principles strongly argue that there's nothing there.
But this is a service, often a pricey one (insurance rarely covers it, and treatment can cost up to $400 per session), and customers wouldn’t be paying and returning if they weren’t benefiting from it. That evidence is mostly anecdotal, seen in gushing testimonials by patients — like Anne in Rochester, New York, who said reiki gave her the “gift of freedom,” attributing a “release of depression, anxiety and feelings of abandonment” to the sessions. And there’s no denying that medical professionals are recommending reiki to patients or training as masters themselves. About 60 U.S. hospitals offer reiki sessions, including New York-Presbyterian, Memorial Sloan Kettering, and UCLA Medical Center; as of 2014, over 800 hospitals offer reiki training.



6. "Even if it's just the placebo effect, doesn't that count as helping people?"


The placebo defense is the last resort of the snake oil defender. Rather than backing away from the possibility that the pseudoscience treatment works, to the extent that it works at all, strictly through the power of suggestion, the advocate simply treats placebo effects as another kind of benefit. If the patient feels better, what else matters?
One possible explanation of reiki’s observed benefits might have to do with recent findings on placebos — that is, that the “placebo effect” might offer more opportunities for symptom management than previously thought. This research is led by Ted Kaptchuk, director of Harvard Medical School’s Program in Placebo Studies & Therapeutic Encounter. Kaptchuk’s past research has demonstrated what is now commonly known about placebos (that when delivered with a suggestion of possible side effects, trial participants will often self-report those side effects) but a recent discovery is more surprising. It turns out sugar pills can ease symptoms in patients even if those patients know they’re receiving sugar pills, and even if those pills are given without any suggestion about the results the patients might see. It suggests a holistic look at the experience of healing, a therapeutic element to the doctor-patient interaction in full.

There are a number of problems with the placebo defense, enough to demand a commitment of time I can't provide at the moment and a level of expertise that is well above my pay grade. Here, however, are a few of the more obvious points.

a. There are clearly serious ethical problems with any treatment based on misleading patients.

b. There are also serious ethical and practical concerns with promoting procedures that have no effectiveness outside of the placebo effect. It's true that the author repeatedly makes the point that this "energy manipulation" is not meant as a substitute for conventional treatments, but no amount of caveats will get around the possibility that some patients will forgo the necessary but unpleasant for the trendy and overhyped.

c. As a bit of a side note, advertisers have known about the marketing power of the trappings of medicine for easily a century. As had regulators (back when we had regulators) who imposed rules about models and spokespersons wearing lab coats. In response to this, there was an infamous commercial featuring a soap opera star in street clothes who started his pitch by saying "I'm not a doctor but I play one on TV."

d. Even if we limit ourselves to practical considerations, placebo-based approaches are problematic at best. With real medicine, we have an idea of how likely something is to work or stop working, what the signs of a change might be, and what to do in each of the different contingencies.

To sum up, this is yet another example of New York magazine not only flirting with pseudoscience, but doing so in a tired, standard narrative that would still qualify as bad writing even if it weren't such bad journalism. As we've said before, New Journalism started out by proving that great reporting could also be great literature. Articles like this (and far too many others) remind us that lazy and dangerously misleading reporting often comes in the form of a wretchedly threadbare story.

Wednesday, September 27, 2017

It's like one of those "you provide the caption" cartoon contest except with blog posts and no prizes

Eventually, I'll get around to using some of these as illustrations. If I don't post them, I'll probably forget about them. In the meantime, feel free to borrow anything that looks interesting.

This one will come in handy next time we write about air travel.


From Amazing Stories Quarterly v1 02 1928


This is also apt for a number of situations.



Old pulp magazine ad. circa 1950




Already used this Wolverton panel on my Twitter page.
Circa early 50s.



And I'm sure these Wolvertons will be appropriate sometime in the near future.







Also circa early 50s, which got me wondering about the term pseudoscience, so ...







And since you won't believe me unless you see it for yourself...


Also from Amazing Stories Quarterly v1 02 1928

Tuesday, September 26, 2017

Fighting Gresham's law of journalism -- more "yes, it is just a god damn vending machine" blogging

Just to review, a few days ago, there was a great deal of fanfare around an article by Elizabeth Segran from Fast Company magazine entitled:

Two Ex-Googlers Want To Make Bodegas And Mom-And-Pop Corner Stores Obsolete

At the risk of belaboring the obvious, the article was a bad piece of business and technology journalism. It credulously accepted what should have been incredible claims from an entrepreneur with an enormous interest in hyping the story. The response was largely divided between writers who were understandably offended by the cultural insensitivity and those who unquestioningly accepted the idea that a line of low functionality vending machines (no refrigeration, no hot food or beverages, not set up for cash transactions) presented an immediate threat to convenience stores and other small retail outlets. More than a few commentators managed to fall into both camps.

The problem here is not that all of the coverage of the Bodega Vending Machine Co. was bad; it is that the bad got most of the oxygen. This is primarily a business story (the technological aspect is trivial), and it has produced some excellent business writing, but it appears that the coverage is another example of Gresham's law of journalism: the crappy crowds out the good.

The best of the clear-eyed analyses probably comes from Helen Rosner, a smart, knowledgeable writer who explained in crushing detail the major flaws in the Bodega business plan.
Bodega’s product is, fundamentally, a vending machine. (Well, maybe it’s a mini-bar — open access to product, in fancy places, with a presumed audience that’s affluent and design-minded.) Vending machines are a unique form of commerce, mostly defined by the lack of human interaction at the point of transaction. This kind of unmanned retail operation has a long history (the vending history timeline on the website of NAMA, the National Automatic Merchandising Association, kicks off with Egypt’s Ptolemaic dynasty; it is delightful), with efficiency as its primary appeal. A tiny, self-contained store without an employee saves all sorts of overhead: Less required real estate, lower payroll, shockingly greater likelihood that shoplifters will be crushed to death.

These efficiencies aren’t gravy, though; they’re essential. They work by way of a simple economy of scale: If you run a few dozen machines (or a few thousand), it becomes possible to buy your products at a discount, to warehouse the products more effectively, and to both fill and repair your vending units in a more streamlined way. These businesses live or die by logistics.

This is where things seem likely to fall apart for Bodega. Even with their wifi connections and app-connected camera sensors, the units themselves are still just offering consumers a basic model of unmanned commerce — only with smaller, fancier machines to process the transactions. What Bodega does offer as a differentiator are the number of unique products per unit (100, the average vending machine has 20-40), and the promise that the products will not just be tailored to their general environments — protein bars in the gym, tampons in a sorority house — but to their specific users. A promise of “machine learning” will, as Fast Company explains, “constantly reassess the 100 most-needed items in that community.”

At 100,000 units — the scale McDonald and Rajan envision — that’s ten million items that are active at a time, plus reserve products for restocking, plus new products to introduce as the “machine learning” (I’m sorry, I just can’t) cycles out low performers. Across specialized markets and user-informed preferences, the number of SKUs (industry shorthand for a stocked product, rather than an individual unit of that product) that Bodega would be dealing with would quickly climb into the thousands.
...
Labor is not a minor issue, with a company like this one. “Unmanned retail” isn’t a precisely accurate phrase: There may not be a person ringing up your transaction, but there are plenty of people working to maintain a system that allows that absence — even the famous midcentury automats were just the outward storefront of a working, fully-staffed kitchen. Bodega’s warehouses will need to be staffed. The trucks will need to be driven. The Bodegas themselves will need to be manually restocked — each can, bottle, and box placed one by one onto each unit’s shelves. Many traditional vending machine companies employ restockers who double as machine repairers. Will a Bodega restocker be trained to fix a busted computer-vision camera?
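Rosner's scale figures are easy to check. Here's a quick back-of-envelope sketch; the unit target and SKU counts come from the quoted passage, but the "typical machine" midpoint of 30 SKUs is my own assumption drawn from her 20-40 range:

```python
# Back-of-envelope check of the scale arithmetic in the quoted passage.
units = 100_000          # McDonald and Rajan's stated target
skus_per_unit = 100      # unique products per Bodega unit, per the piece

# Items active across the fleet at any one time
active_items = units * skus_per_unit
print(f"Items active at a time: {active_items:,}")  # 10,000,000

# A conventional vending machine carries 20-40 SKUs; using the
# midpoint (an assumption, not a figure from the article):
conventional_skus = 30
multiple = skus_per_unit / conventional_skus
print(f"Stocking complexity vs. a typical machine: {multiple:.1f}x per unit")
```

Nothing fancy, but it makes Rosner's point concrete: before you even get to user-tailored assortments, the plan multiplies the per-unit stocking problem several times over, across a fleet a hundred thousand strong.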

Rosner closes with a wonderfully pithy and honest summary of what is probably Bodega's real business plan.
In Silicon Valley, Bodega’s success will not be measured by how well it truly replaces the stores it wants to eliminate — by how many lives it makes better, how many jobs it creates, how many communities it strengthens, or how many families it serves. Like most startups, its success will depend on whether its founders and investors make money, either by cashing in with an IPO or selling to a bigger company for a tidy profit.
 

Monday, September 25, 2017

It was an age of wondrous sights, flying machines, moving pictures, white blackberries…

[Still laying groundwork for the big essay on the aftermath of the technology spikes of the 1890s (give or take a decade) and the postwar era. The trouble is, the more I look into it, the more I keep coming across interesting examples.]

Though I would need to speak with an actual expert to be certain, it appears that the latter part of the 19th Century was the first time in human history when encountering new fruits and vegetables became a routine part of people's lives. Obviously, there had been previous periods when exploration and trade had suddenly introduced great varieties of fruits and vegetables to a region. By the same token, there had always been the occasional genuinely new variant on something familiar either due to crossbreeding or mutation. Generally though, the progress, while great, was exceedingly slow, measured in centuries not years.

As with so many things in the late 19th/early 20th century, progress in agriculture went from slow and steady to unimaginably rapid. And, as with so many things in the period, the public found a lone visionary inventor to be the face of the movement. This tendency to mythologize always distorted the record, but there have been less deserving recipients than Luther Burbank.

Luther Burbank (March 7, 1849 – April 11, 1926) was an American botanist, horticulturist and pioneer in agricultural science. He developed more than 800 strains and varieties of plants over his 55-year career. Burbank's varied creations included fruits, flowers, grains, grasses, and vegetables. He developed (but did not create) a spineless cactus (useful for cattle-feed) and the plumcot.

Burbank's most successful strains and varieties include the Shasta daisy, the fire poppy (note possible confusion with the California wildflower, Papaver californicum, which is also called a fire poppy), the "July Elberta" peach, the "Santa Rosa" plum, the "Flaming Gold" nectarine, the "Wickson" plum (named after the agronomist Edward J. Wickson), the freestone peach, and the white blackberry. A natural genetic variant of the Burbank potato with russet-colored skin later became known as the russet Burbank potato. This large, brown-skinned, white-fleshed potato has become the world's predominant potato in food processing. The Russet Burbank potato was in fact invented to help with the devastating situation in Ireland during the Irish Potato famine. This particular potato variety was created by Burbank to help "revive the country's leading crop" as it is slightly late blight-resistant. Late blight (Phytophthora infestans) is a disease that spread and destroyed potatoes all across Europe but caused extreme chaos in Ireland due to the high dependency on potatoes as a crop by the Irish.

Though a bit of a digression from this post, Burbank's legacy is also relevant to our ongoing intellectual property thread.

 Burbank's work spurred the passing of the 1930 Plant Patent Act four years after his death. The legislation made it possible to patent new varieties of plants (excluding tuber-propagated plants). Thomas Edison testified before Congress in support of the legislation and said that "This [bill] will, I feel sure, give us many Burbanks." The authorities issued Plant Patents #12, #13, #14, #15, #16, #18, #41, #65, #66, #235, #266, #267, #269, #290, #291, and #1041 to Burbank posthumously.

The big story here is, of course, agricultural and economic, but think about the psychological impact for a moment, particularly in the context of the times. Science and technology were radically changing virtually every aspect of people's lives. This extended from electricity and internal combustion all the way down to the produce at your local market.

Nor was the agricultural progress limited to Burbank. Less famous but arguably more important were initiatives like the Hatch Act.
The Hatch Act of 1887 (ch. 314, 24 Stat. 440, enacted 1887-03-02, 7 U.S.C. § 361a et seq.) gave federal funds, initially of $15,000 each, to state land-grant colleges in order to create a series of agricultural experiment stations, as well as pass along new information, especially in the areas of soil minerals and plant growth. The bill was named for Congressman William Hatch, who chaired the House Committee of Agriculture at the time the bill was introduced. State agricultural stations created under this act were usually connected with those land-grant state colleges and universities founded under the Morrill Act of 1862, with few exceptions.

Many stations founded under the Hatch Act later became the foundations for state cooperative extension services under the Smith-Lever Act of 1914.

At the risk of putting too fine a point on it, we're talking about ongoing government-funded university-based research. The cherished notion of a couple of guys in a garage somewhere inventing the future is now almost entirely myth, but even in the late 19th century it was fading as a model for research and development. Most of the great advancements were team efforts, often from surprisingly modern R & D labs funded by corporations and financiers like J.P. Morgan, philanthropists such as Carnegie, and, yes, US taxpayers. This would be even more the case in the postwar technological spike, but that's a topic for another post.

Friday, September 22, 2017

Live by the hype, die by the hype



Picking up where we left off last time:
We've been talking about the content bubble (under various names) for going on five years now, but we haven't mentioned it recently so here's a quick primer. The basic argument started out with the claim that original scripted programming on cable only made good business sense under certain conditions and that more and more of the programs were in a corner of the market that could not sustain them indefinitely. My concern, then and even more now, was that when hype drives decision-making, the easily hyped will increasingly be favored over the sound, profitable, and sustainable.

Since then, Netflix, Amazon, and Hulu not only jumped into the field, but started a major bidding war. Netflix in particular was willing to pay more for licensing rights to a show than other outlets had been willing to pay for outright ownership. At the same time, basic cable channels continued to invest more heavily in the idea that they could only be defined by having a "______ original."

The inevitable result has been a huge mass of product that can't possibly find an audience large enough to sustain it. As mentioned below, there are over 450 shows now in production. It has become obvious to pretty much everyone that the majority of the shows will be costly failures. This has led to a parallel bubble in marketing and PR as everyone tries desperately to have one of the handful of shows that gets noticed. Here in LA, I routinely see billboards for shows I've never heard of, often on channels and streaming services I've never heard of.

Let's expand a bit on the role that hype played in all of this. Most television is reruns. This is true regardless of service or medium. If you run through the selection from your cable box and skip over sports and "reality," most of what you see, even on channels known for their original productions, will be previously shown movies and series, mostly old sitcoms and police procedurals. This is a sound and time-tested business model and, if done with taste and a respect for the medium, it can even be an interesting and satisfying one for the viewers (Weigel Broadcasting is the master of this. They understand that, no matter how many times you've seen them, old episodes of NYPD Blue and the Phil Silvers Show are more entertaining than the latest CSI and Two Broke Girls).

The primary problem with this model is that it is difficult to hype. A Jack Lemmon festival on Movies! or a marathon of the Altman run of Combat might be great television, but it's next to impossible to get a news story out of it. Original shows, however, are remarkably effective buzz generators. The stars and, in some cases, the behind-the-scenes talent can get interviews. Press releases can generate quick yet click-worthy copy. In many cases, people at the PR firms will actually write news stories and send them to a friendly reporter who will simply take off early that day.

What's more (and you cannot overstate this in Hollywood), original series are fantastic for collecting awards, stroking egos, and measuring dicks. A wildly disproportionate number of billboards for shows populate Studio City and North Hollywood. They are generally placed so that people who work on the shows or on rival shows will see them. Many stars actually insist upon this in their contracts.

When there are a fairly small number of originals out there, this can make a certain business sense. A couple of interns in the press office can generate the equivalent of millions in advertising and make the VP of production feel like a big deal in the process, but when there are over 450 shows in production, the model breaks down, particularly when audiences are simultaneously developing a taste for older content.

Buzz and paid advertising are no longer nice, relatively cheap pick-me-ups; they are a costly, necessary addiction. Given the competition for viewers and the costs of production, it is now virtually impossible for a show to simply find its audience unassisted. No matter how good a series is, unless the people behind it are willing and able to spend obscene amounts of money promoting it, no one will watch.

Ken Levine used to be a skeptic on the subject of the content bubble. These days, not so much.

But back to the ratings. Only 11.4 million people watched the back-slap-athon. The last MASH episode drew over 100 million people. I know – apples and oranges, but the point is those 100 million people are out there.

So now the question about the near record low ratings: How come?

The obvious answer is that no one has seen any of these Emmy winning shows. Or in many cases, even heard about them. And that’s not to say that they’re not totally deserving of their wins. The shows selected were excellent. But study after study shows that the vast majority of the country doesn’t know they exist. They’re on delivery services many people don’t have (or don’t want to have because of the cost), and in such a crowded marketplace it’s almost impossible to get noticed above the din.

I guarantee you this: If these shows did not send screeners to every TV Academy member, and if there was not good word-of-mouth within the community, most of them would never get a sniff from Emmy. If Hulu had to rely on TV Academy voters finding, subscribing, and watching THE HANDMAID’S TALE (even though it’s from a popular book) on their own, their outstanding series would be overlooked. And that’s people IN the television industry. So imagine folks who aren’t.


[typos corrected.]

Thursday, September 21, 2017

Arguments for a content bubble -- revisiting point 2

We've been talking about the content bubble (under various names) for going on five years now, but we haven't mentioned it recently so here's a quick primer. The basic argument started out with the claim that original scripted programming on cable only made good business sense under certain conditions and that more and more of the programs were in a corner of the market that could not sustain them indefinitely. My concern, then and even more now, was that when hype drives decision-making, the easily hyped will increasingly be favored over the sound, profitable, and sustainable.

Since then, Netflix, Amazon, and Hulu not only jumped into the field, but started a major bidding war. Netflix in particular was willing to pay more for licensing rights to a show than other outlets had been willing to pay for outright ownership. At the same time, basic cable channels bought ever more heavily into the idea that they had to be defined by having a "______ original."

The inevitable result has been a huge mass of product that can't possibly find an audience large enough to sustain it. As mentioned below, there are over 450 shows now in production. It has become obvious to pretty much everyone that the majority of the shows will be costly failures. This has led to a parallel bubble in marketing and PR as everyone tries desperately to have one of the handful of shows that gets noticed. Here in LA, I routinely see billboards for shows I've never heard of, often on channels and streaming services I've never heard of.

On top of this immense surplus, there's another problem we noted back in February 2015:

2. Content accumulates. While movies and series tend to lose value over time, they never entirely go away. Some shows sustain considerable repeat viewers. Some manage to attract new audiences. This is true across platforms. Netflix built an entire ad campaign around the fact that they have acquired rights to stream Friends. Given this constant accumulation, at some point, old content has got to start at least marginally cannibalizing the market for new content.

Which brings us to this recent story from the Los Angeles Times by Meg James and Yvonne Villarreal. It's a good piece of reporting. I would have liked to have seen a bit more on the role of companies like Weigel Broadcasting, but the LAT has done an excellent job covering the terrestrial superstations story, particularly compared with the virtual news blackout from papers like the NYT and the WSJ, so I don't have much to complain about on that score.

Jessica Mata wasn’t even a year old when “The Golden Girls” ended its broadcast run on NBC in 1992.

But this summer, she has been captivated by Dorothy, Blanche, Sophia and Rose, the Florida senior-citizen housemates of “The Golden Girls.” Mata watches at least four episodes a day of the sitcom, which joined streaming service Hulu’s programming offerings earlier this year. She views them on her phone or her laptop during breaks between her college classes.

“I know ‘Game of Thrones’ is all the rage — and I watch it too, sometimes — but it doesn’t have me hooked like ‘Golden Girls,’” said the 25-year-old from Houston. “I’m on my third round of watching the series right now.”

Viewers like Mata are discovering reruns of network shows not by flipping through TV channels but on streaming devices such as Hulu and Netflix. These digital platforms are doing something unexpected: They are creating new audiences for old TV shows.

At a time when television is booming with more than 450 original series in production this year, viewers have a multitude of options. But shows such as HBO’s “Game of Thrones” and NBC’s family drama “This Is Us” also are competing for fans’ attention with such well-worn fare as “The Golden Girls,” “Full House,” and the political drama “The West Wing,” which debuted when Bill Clinton occupied the White House.

Wednesday, September 20, 2017

Bodega and the power of narrative

As mentioned before, criticizing something for being overhyped invariably makes you, to some degree, part of the problem. No matter how pointed and snarky your approach may be, there is simply no getting around the inherent contradiction in spending an extended period of time basically saying "we shouldn't be wasting time on this."

This is very much a danger when discussing the Bodega vending machine company (and, yes, that's what it is). This is a small and silly startup with lots of talk but few ideas, none of them good or particularly original. Though, in terms of functionality, it's generally inferior to existing vending machines, it insists that it will not only take that market but will actually replace the modern convenience store. If you need more than a couple of tweets to dismantle this business plan, you should tighten your prose.

The company is, however, of interest as a case study in the ways that coverage of business and technology often devolves into bullshit. We've already discussed the premise story, where journalists explicitly or implicitly lend credibility to otherwise laughable proposals because those proposals provide an excuse to write about a cherished topic.

Bodega also provides a highly instructive example of the power and appeal of a magical heuristics narrative. While there is no reasonable business logic that can support this model, the mystical argument is quite strong. Both the magic of association and the magic of language play a powerful role.

The narrative is so powerful that it sometimes overwhelms journalists' own experience.

Elizabeth Segran writing for Fast Company:
In most cases, Bodega doesn’t pay for the retail space, but pitches itself as an amenity or a convenience to property managers. At gyms for instance, McDonald makes the case that having a Bodega stocked with power bars and protein powder might make the facility more attractive to members.
And Mike Murphy writing for Quartz:


The cabinets are filled with items to buy that are relevant to where you are at that given moment: In the office, they might be filled with snacks and drinks; at the gym, they could have Gatorade, supplements, and knee braces; in an apartment lobby, there could be detergent, pharmaceutical goods, and perhaps some snacks, too. Whatever makes most sense.
 
Here in Los Angeles, I have a membership at LA Fitness. That's a very big chain in Southern California with lots of locations, and I have hit well over a dozen. Furthermore, since I've moved around quite a bit, I've gone to a number of gyms in Arkansas, Virginia, and Georgia. I won't claim that this constitutes a representative sample, but it is a pretty good cross-section, and in all that time I cannot recall a gym that did not have vending machines that sold items such as energy drinks, power bars, earbuds, and various protein supplements. Many also had stores/cafés/juice bars, but pretty much all had vending machines.

The fact that these establishments already have machines selling exactly the same things that McDonald proposes offering in his is definitely worth mentioning. It pretty much takes this part of the pitch out at the knees. But Segran and Murphy either ignore this fact or simply fail to notice it, because it undercuts the narrative. These are two ex-Googlers using data and artificial intelligence and machine learning and all-purpose creative destruction to solve problems and disrupt old and sclerotic industries. The fact that many people have already done what McDonald is promising to accomplish does not fit the story at all.

In fairness, there have been journalists covering this story using actual business principles. Eater and TechCrunch have both done good work in this area.

Tuesday, September 19, 2017

Scientific American's embarrassing example of the premise story, or "yes, it is just a god damn vending machine."




When criticizing the overhyped for being overhyped, one always runs the risk of becoming part of the problem. That said, the badly conceived vending machine startup Bodega is just too good an example of what's wrong with the way we discuss technology and business. It is one of those cases where multiple lines of bad journalism and bad investor logic converge.

For instance, one of the most popular and damaging tech narratives is the premise story. Some company or entrepreneur makes some ridiculous claim about how his or her (the former most of the time) new product or service will revolutionize everything. This claim will almost certainly never come to pass but it gets a tremendous amount of buzz and provides a great excuse to discuss something a writer would like to talk about. The result is a premise story.

The premise story provides a highly useful loophole for tech and business writers. An article about a proposal that is obviously fatally flawed needs to address the lack of viability, but when the proposal is framed as a premise ("if we have [Mars One/Hyperloops/Bodega boxes], what will life be like?"), the author can slide by with the mildest of conditional statements.

The trouble is that the implications of an event that almost certainly won't happen might occasionally be interesting as a thought experiment but are, by definition, trivial. Since the authors of these pieces want to be taken seriously, they inevitably, if implicitly, lend an unwarranted degree of credibility to these impractical schemes.

Case in point: Krystal D'Costa, blogging for Scientific American, clearly wanted to write something about real bodegas: their history, their cultural significance, the roles they play in their communities. This is an interesting and admirable topic, but in order to make it marketable and bring in the traffic, she needed to tie it to one of the silliest startup ideas to come out of Silicon Valley in recent memory. (At this point, I am supposed to say "since Juicero," but, at the risk of praising with faint damns, Bodega's business plan really is worse.)

Thus we get these opening paragraphs:
Need deodorant? Or craving potato chips? Don’t want to run to the “corner store” to get it? Well, soon enough you may only need to go as far as a deluxe vending machine. A start-up is looking to put "pantry boxes" stocked with the non-perishable items you might find at a corner store in key locations, like your gym, or a dorm common area or apartment lobbies. The name for this venture? Bodega.

As the name implies, the creators want to replace brick-and-mortar bodegas—the name given to those ubiquitous corner stores in New York City and Los Angeles—with these pantry boxes. They believe that with time, and data, they can tailor these boxes to meet the specific needs of an immediate area by tracking purchases and restocking those items that are most commonly purchased there, which is what seems to distinguish these boxes from normal vending machines. (The logistics of this restocking has not been disclosed as it is presumably a large part of the business plan.) But the idea has drawn a public outcry—and for good reason: It presents a very real threat to smaller mom-and-pop type stores, but it also appropriates the name “bodega," without acknowledging the cultural and social capital that these spaces have.


That “very real threat” is what makes this a topical, click-worthy story; it is also complete bullshit. In terms of functionality, these pantry boxes are nothing more than fairly crappy vending machines. There is nothing you can get out of them that isn't available from existing machines and a great deal that you can get from other machines but you can't get from them. No cold sodas. No hot beverages. No frozen treats. (We won't even get into many of the more elaborate, robotic offerings in Japan and Europe.)

The pantry boxes fare even worse compared to bodegas and other convenience stores in terms of selection. In addition to all the things listed above, they offer a small fraction of the nonperishable goods available in even a tiny store, and no produce, alcohol, or tobacco products at all.

In lieu of any real improvement in functionality, the founders offer magical heuristics. Ex-Google employees from Silicon Valley using artificial intelligence to disrupt an industry. Even "data" is used here in the magical rather than statistical sense. Large, well-established vending machine companies have tons of quality data going back forever and constantly being updated. Bodega is actually working at a huge disadvantage here, one that would take years of widescale operation to erase.

One of the most telling lines in the excerpt above is the speculation about the logistics of restocking. This is, of course, an enormously challenging business problem. It is one of the primary reasons why vending machines have not already carved out a much larger segment of the retail market. D'Costa assumes that the company has a workable solution to the problem and is just keeping it to itself. There is absolutely no reason to believe this -- Silicon Valley startups routinely jump into press releases and funding pushes with inchoate and/or unworkable business plans -- but believing that the details have been carefully thought out is essential for preserving the premise.

The sad part is that the rest of D'Costa's article is interesting and well worth reading. It's a shame she felt the need to prop it up.

Monday, September 18, 2017

Even by Stephens' standards, this is bad advice.

Sorry about not getting around to this piece earlier (so much stupid, so little time), but fortunately there's a timeless awfulness to Bret Stephens' “Tips for Aspiring Op-Ed Writers.” It's a genre that brings out the pretentious, the banal and the clichéd in even the best writers. With a hack like Stephens, you're pretty much guaranteed material for your next what-not-to-do chapter. The man does not disappoint… No, that doesn't sound right. He leaves potential critics with an embarrassment of riches…  No, that's not right either. Let's just say he's consistent.

While there is much to mock here, one piece of advice is not just bad and hackneyed; it's out-and-out dangerous.
4) Authority matters. Readers will look to authors who have standing, either because they have expertise in their field or unique experience of a subject. If you can offer neither on a given topic you should not write about it, however passionate your views may be. Opinion editors are often keen on writers who can provide standing-with-surprise: the well-known environmentalist who supports nuclear power; the right-wing politician who favors transgender rights; the African-American scholar who opposes affirmative action.


Putting aside the potential pitfalls of arguing from authority and ignoring the fact that Stephens' entire career has been based on getting people to print his passionate views on topics where he has no appreciable expertise, relevant experience, or discernible understanding, let's focus on the second half of the paragraph. (We'll also skip the rather curious notion that a pro-nuclear environmentalist would be that difficult to find. Coal would have been the more appropriate choice.)

The standing-with-surprise standard is virtually guaranteed to mislead readers, often on matters of vital importance. There is no concept more essential to coverage of complex issues than context and particularly consensus. There are also few concepts that journalists screw up so frequently or so badly.

Invariably, the standing-with-surprise story distorts the perceived consensus. Even the most decisively settled scientific question can be depicted as a matter of ongoing debate. What's worse, by favoring the man-bites-dog stories, editors encourage journalists to further play up the counterintuitive aspects by stretching what qualifies as relevant expertise or by taking relatively noncommittal statements and framing them as challenges to the establishment view.

This technique is also particularly useful for getting he said/she said quotes to spice up an article. A typical example reads something like this: "Climate scientists tell us that we are approaching dangerous thresholds in global warming. Others, however, are not so certain. Physicist [just make up a name] of [prestigious university] says that models of complex systems based on observational data always have a potential for error."

Bret Stephens has, of course, gotten where he is today by misinforming readers along these very lines. We probably shouldn't be surprised that he considers the practice a rule of good journalism. That doesn't mean we should let him get away with it.

Friday, September 15, 2017

That's a disturbing premise

I came across this while looking for Cohan clips for a previous post.




The Phantom President

The Phantom President, based on the novel by George F. Worts, tells a fictional story of an American presidential campaign: a colorless, stiff candidate for President is replaced in public appearances by a charismatic medicine-show pitchman.






A few notes on IBM's Watson, the battle against cancer, and what's wrong with the state of 21st century innovation

If you haven't read the Stat News article by Casey Ross and Ike Swetlitz on IBM's project to revolutionize cancer treatment using Watson, you should do so as soon as possible. The piece raises all sorts of important points. I'll try to return and explore some of them in greater detail (perhaps even convince Joseph to join the conversation), but for now there are two or three I want to hit while the topic is still fresh.

Before we go on, you should take a few minutes to listen to the following track from The Button-Down Mind of Bob Newhart. It's a seminal comedy bit that also happens to be directly relevant to the conversation.





The joke here is not that using airplanes for passenger travel is absurd (it was the early '60s, for Christ's sake); the joke is that the aircraft at Kitty Hawk was clearly not ready to be monetized in this way. A plan that will only become viable sometime in the future is not a viable plan today.

Given the complexities of the problem and the mountainous quantities of research that need to be assessed, no one would deny that AI-based tools for diagnosis and treatment recommendation would be tremendously valuable, probably even revolutionary, but IBM's Watson looks decidedly less than ready for prime time in this respect.

The Stat News article also hits on a couple of long-standing threads here at the blog, starting with one of the best examples of ddulite reasoning we've seen in a long time.
IBM said in its statement that it has collaborated with the research community and presented data on Watson at industry gatherings and in peer-reviewed journals. Some doctors said they didn’t need to see more research to know that the system is valuable. “Artificial intelligence will be adopted in all medical fields in the future,” said Dr. Uhn Lee, who runs the Watson program at Gachon University Gil Medical Center in South Korea. “If that trend, that change is inevitable, then why don’t we just start early?”

Adopting technology because it supposedly will be the best choice sometime in the future is deeply flawed for at least a couple of reasons. First off, the future is, you know, in the future -- we don't know what we'll find there. While it is reasonably safe to assume that some kind of AI will play a vital role in medicine 20 or 30 years from now, it might have very little to do with the approaches currently being pursued. Furthermore, even early adopters who guess right about the direction of technology are often screwed by getting ahead of the functionality.

Another ongoing thread here at the blog is the ever-increasing emphasis on hype and marketing at the expense of research and genuine innovation, particularly when combined with short-term thinking. The following speaks for itself:
“IBM ought to quit trying to cure cancer,” said Peter Greulich, a former IBM brand manager who has written several books about IBM’s history and modern challenges. “They turned the marketing engine loose without controlling how to build and construct a product.”

Greulich said IBM needs to invest more money in Watson and hire more people to make it successful. In the 1960s, he said, IBM spent about 11.5 times its annual earnings to develop its mainframe computer, a line of business that still accounts for much of its profitability today.

If it were to make an equivalent investment in Watson, it would need to spend $137 billion. “The only thing it’s spent that much money on is stock buybacks,” Greulich said.
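Greulich's implied arithmetic is easy to check. A quick back-of-the-envelope sketch (the 11.5x multiplier and the $137 billion are his quoted figures; the derived earnings number, which lines up with IBM's reported net income of roughly $11.9 billion in 2016, is my inference):

```python
# Back-of-the-envelope check of Greulich's quoted figures:
# an "equivalent investment" of 11.5x annual earnings should equal $137 billion.
multiplier = 11.5
equivalent_investment = 137e9  # dollars, as quoted

implied_annual_earnings = equivalent_investment / multiplier
print(f"Implied annual earnings: ${implied_annual_earnings / 1e9:.1f} billion")
# -> Implied annual earnings: $11.9 billion
```

In other words, the quoted figures are internally consistent with IBM's recent annual earnings, which is what makes the stock-buyback comparison sting.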


Thursday, September 14, 2017

Equifax

This is Joseph

Via Kevin Drum, Michael Hilzik of the LA Times has a tough question:
Three Equifax executives sold shares after the discovery of the breach and before its public disclosure, according to Bloomberg. They collected $1.8 million from the sales, which weren’t part of any prearranged option-exercise programs. The sales were made on Aug. 1 and 2, the third and fourth days after the breach was discovered. An Equifax spokeswoman says the executives were unaware of the breach at the time of their sales, but that’s hardly comforting: One was John Gamble, the firm’s chief financial officer. If the firm’s No. 2 executive wasn’t immediately informed about a catastrophic security breach, why not?
This is one of those explanations that actually looks bad, either way.  If the company can't escalate news of a crisis over several days that is a bad, bad sign for their security types.  It also suggests that the senior executives are not aware of optics, as they had weeks, post-sale, to come up with a way to get in front of this news.  If there really was a moment of horror -- my unscheduled sale happened right after a breach nobody told me about -- then they had weeks to craft a thoughtful comment on the bad timing.

Other explanations are less charitable.

In general, the whole way financial stuff is validated in the internet era needs to be re-thought. The move away from personal interaction leads to efficiency, but the use of things like social security numbers as proof of identity is rather silly.  It has the same weakness as birth date -- it never really changes and so once hacked it's permanently compromised.  
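The static-identifier problem Joseph describes is exactly what one-time-password schemes were designed around. A minimal sketch of the contrast, using a standard RFC 6238-style time-based code (the shared key and all names here are illustrative, not any real system's):

```python
import hashlib
import hmac
import struct
import time

def totp(shared_key: bytes, t: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238-style time-based one-time code (HMAC-SHA1, dynamically truncated)."""
    counter = struct.pack(">Q", int(t) // step)
    digest = hmac.new(shared_key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

key = b"per-user-secret-provisioned-once"  # hypothetical shared secret
now = time.time()

# Within the same 30-second window, the user and the verifier agree:
assert totp(key, now) == totp(key, now)

# An hour later the underlying counter -- and hence the code -- has moved on,
# so a code captured today is useless tomorrow. A static identifier like an
# SSN or birth date has no such property: the "proof" never changes, so a
# single breach compromises it permanently.
assert int(now) // 30 != int(now + 3600) // 30
print("current code:", totp(key, now))
```

The point of the sketch isn't that banks should adopt this exact scheme; it's that any credential worth trusting needs to be revocable or rotating, and an SSN is neither.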


Wednesday, September 13, 2017

Magical Heuristics -- you knew there'd be a New York Magazine example eventually


Earlier, we introduced the term magical heuristics (mental tools based on a fundamentally nonrational worldview that, nonetheless, often drape themselves in scientific trappings) and laid out four general categories: magic of association; magic of language; magic of will/doubt/belief; magic of destiny.

This New York Magazine piece by Benjamin Wallace provides a wealth of examples. Though it never lapses into the pseudoscience-curious writing we have come to associate with NYM recently (resonant crystals, homeopathy, Gwyneth Paltrow's goop, the West Hollywood autism epidemic), its underlying narrative is chosen ones doing the impossible through force of will.

[Emphasis added]

But Silicon Valley loved the barf ride. In this, the age of the moon shot — of bold missions to make flying cars and “end all disease” — Musk’s hyperloop met all the criteria of bet-the-ranch, future-shaping audacity: a big vision, promising a new, “fifth mode of transport” after planes, trains, automobiles, and boats; the high purpose of using renewable energy; utopian visuals; and, perhaps most important, a terrific pedigree. Even the Valley’s most peppy cheerleaders weary, occasionally, of pitches for the latest world-changing smartphone apps. Genuine moon shots stir real excitement in the hushed corridors of Sand Hill Road. But the difference between an intrepid moon shot and a misguided fantasy project often hinges entirely on the daredevil behind it.

Musk is Mr. Moon Shot — actually Mr. Mars Shot; he intends to make us a multi-planetary species. He is arguably a genius on his way to becoming a world-­historical figure. At Tesla and SpaceX, he has, through maniacal willpower, work ethic, focus, vision, and risk tolerance, repeatedly proved naysayers wrong by surmounting the seemingly insurmountable. [Not so much. See below. -- MP]  And in recent years, he has assumed the mantle, up for grabs since the death of Steve Jobs, of Silicon Valley’s reigning deity. It was only Musk’s involvement that led many observers to take the hyperloop idea at least semi-seriously.



But for Pishevar, who settled for quoting a different president at the event (Roosevelt, on “the man in the arena”), the propulsion test was a key next step in willing hyperloop into existence. As he wrote later, in a blog post: “There’s a lot of noise, hope, and hype out there about what the Hyperloop could be and will be, but this metal sled absolutely grounds the idea in much-needed reality … I had tears mixed with sand.” Most important, he told me, his parents were still alive. “For me, the fact that they saw the test and have seen the things we’re doing, that’s everything.”

A couple of points. First, the amount of skepticism Elon Musk faced over SpaceX and Tesla has been greatly exaggerated and that exaggeration has been retconned into the record. Second and more important, the primary focus of that skepticism was never on the technology (unlike the response to Musk's more recent and fanciful proposals). Almost no one questioned the viability of electric cars or the engines being developed by TRW. What the doubters took issue with was his ability to raise the capital, quickly get the organizations into place, and avoid being sued out of existence by the aforementioned TRW.

The jump from finance guy to real life Tony Stark connects back to Alon Levy's previous point about omnicompetence.
There is a belief within American media that a successful person can succeed at anything. He (and it’s invariably he) is omnicompetent, and people who question him and laugh at his outlandish ideas will invariably fail and end up working for him. If he cares about something, it’s important; if he says something can be done, it can. The people who are already doing the same thing are peons and their opinions are to be discounted, since they are biased and he never is. He doesn’t need to provide references or evidence – even supposedly scientific science fiction falls into this trope, in which the hero gets ideas from his gut, is always right, and never needs to do experiments.

Musk has been very successful, and he surprised almost everyone with his ability to raise capital and set up large, functional (if not necessarily profitable) companies. Therefore, he can do anything. He is a chosen one. Chosen ones are magical.

Tuesday, September 12, 2017

More magical heuristics -- Levy's omnicompetence

Yesterday, I introduced the term magical heuristics (still open to a better name) to describe nonrational mental tools used by many journalists and investors particularly when discussing science and technology. I laid out four general categories for these heuristics: magic of association; magic of language; magic of attitude; magic of destiny.

This post from Alon Levy (one of the most important contributors to the Hyperloop debate) perfectly fits with two of these categories, magic of association and magic of destiny (the idea that there are chosen ones among us destined for greatness). The whole thing is very much worth reading, but I've selected below the paragraphs that are most relevant to this thread and added emphasis to bring home the point:


There is a belief within American media that a successful person can succeed at anything. He (and it’s invariably he) is omnicompetent, and people who question him and laugh at his outlandish ideas will invariably fail and end up working for him. If he cares about something, it’s important; if he says something can be done, it can. The people who are already doing the same thing are peons and their opinions are to be discounted, since they are biased and he never is. He doesn’t need to provide references or evidence – even supposedly scientific science fiction falls into this trope, in which the hero gets ideas from his gut, is always right, and never needs to do experiments.

...

I write this not to help bury Musk; I’m not nearly famous enough to even hit a nail in his coffin. I write this to point out that, in the US, people will treat any crank seriously if he has enough money or enough prowess in another field. A sufficiently rich person is surrounded by sycophants and stenographers who won’t check his numbers against anything.


...

The more interesting possibility, which I am inclined toward, is that this is not fraud, or not primarily fraud. Musk is the sort of person who thinks he can wend his way from starting online companies to building cars and selling them without dealerships. I have not seen a single defense of the technical details of the proposal except for one Facebook comment that claims, doubly erroneously, that the high lateral acceleration is no problem because the tubes can be canted. Everyone, including the Facebook comment, instead gushes about Musk personally. The thinking is that he’s rich, so he must always have something interesting to say; he can’t be a huckster when venturing outside his field. It would be unthinkable to treat people as professionals in their own fields, who take years to make a successful sideways move and who need to be extremely careful not to make elementary mistakes. The superheros of American media coverage would instantly collapse, relegated to a specialized role while mere mortals take over most functions.

This culture of superstars is a major obstacle frustrating any attempt to improve existing technology. It more or less works for commercial websites, where the startup capital requirements are low, profits per employee are vast, and employee turnover is such that corporate culture is impossible. People get extremely rich for doing something first, even if in their absence their competitors would’ve done the same six months later. Valve, a video game company that recognizes this, oriented its entire structure around having no formal management at all, but for the most part what this leads to is extremely rich people like Bill Gates and Mark Zuckerberg who get treated like superstars and think they can do anything.