Tuesday, March 29, 2022

The New York Times – for it before they were against it

If you haven't already, read Joseph's post "Free speech for the first speaker," a dismantling of the New York Times' recent editorial on cancel culture. He does an excellent job addressing the logical flaws. Personally, I couldn't get past the stomach-turning hypocrisy.

Actual examples of cancellation are rare, vanishingly so when the opinions are on the right. The conservative movement always made sure that those pushing the correct views worked with a net. Between sinecures, sympathetic (and often heavily subsidized) publications, right wing media and the occasional discreet bulk purchase of a “New York Times best-selling” book, journalists and pundits who stand up against the liberal establishment know they will never walk alone. No one who still routinely appears on Fox News or has cashed in big on Substack has been in any meaningful way canceled. 

If you’re looking for a real example of a wide-reaching voice being effectively forced out of the discourse for expressing controversial opinions, the first name that comes to my mind is Gawker.

In case you’ve forgotten, Gawker was an angry, bomb-throwing, afflict-the-comfortable family of blogs. They violently rejected bothsiderism and the view from nowhere. Instead, they were completely open about their biases, especially their hatred of and disgust with tech messiahs, obscenely rich people, and the establishment press.

This post [dead link but you might be able to find it on the Internet Archive] by Ryan Tate, originally spotted by Andrew Gelman and the subject of one of our posts, shows a perfect example of a Gawker 3 for 3:

If you want Facebook to spend millions of dollars hiring you, it helps to be a talented engineer, as the New York Times today [18 May 2011] suggests. But it also helps to carouse with Facebook honchos, invite them to your dad's Mediterranean party palace, and get them introduced to your father's venture capital pals, like Sam Lessin did.

Lessin is the poster boy for today's Times story on Facebook "talent acquisitions." Facebook spent several million dollars to buy Lessin's drop.io, only to shut it down and put Lessin to work on internal projects. To the Times, Lessin is an example of how "the best talent" fetches tons of money these days. "Engineers are worth half a million to one million," a Facebook executive told the paper.

We'll let you in on a few things the Times left out: Lessin is not an engineer, but a Harvard social studies major and a former Bain consultant. His file-sharing startup drop.io was an also-ran competitor to the much more popular Dropbox, and was funded by a chum from Lessin's very rich childhood. Lessin's wealthy investment banker dad provided Facebook founder Mark Zuckerberg crucial access to venture capitalists in Facebook's early days. And Lessin had made a habit of wining and dining with Facebook executives for years before he finally scored a deal, including at a famous party he threw at his father's vacation home in Cyprus with girlfriend and Wall Street Journal tech reporter Jessica Vascellaro. (Lessin is well connected in media, too.) . . .


That attitude, combined with sharp, funny writing and a willingness to tell interesting, important stories that the rest of the press were ignoring, made Gawker a remarkable success story. It also, unsurprisingly, pissed off tech messiahs, obscenely rich people, and the establishment press.

The editors did believe in pushing the envelope, especially when the targets were rich white men doing despicable things. They were also reckless and self-destructive and had a huge problem with authority. Combine that with a desire to be provocative to the point of shocking, and you guaranteed that any enemy with deep pockets and a deeper grudge would have plenty of ammunition.

It was right-wing billionaire and cartoon villain Peter Thiel who finally came after them. Thiel was a member of the PayPal mafia along with Elon Musk. According to a mutual acquaintance, "Musk thinks Peter is a sociopath, and Peter thinks Musk is a fraud and a braggart," showing that for all their other flaws, both men are reasonably good judges of character.


Thiel’s politics are not central to this story, but it is worth noting that he’s arguably the biggest Trump supporter in the tech industry (now even more so) and is also on the record as believing that it was a mistake to give women the vote.

Rather than take open action, Thiel went the coward's route and secretly bankrolled a lawsuit, then engineered it so that Gawker was forced into bankruptcy. When word of his involvement leaked out, he showed no shame (because shame’s not really a big emotion for sociopaths). Instead, he immediately started depicting himself as a courageous defender of privacy (which was pretty ripe coming from someone who'd made a billion off of Facebook, but remember what we said about shame), and he was given the world’s best piece of journalistic real estate to do it from.

Thiel’s NYT opinion piece was as bad as you would expect -- self-serving and highly distorted -- but even if it had been objective and honest, the very fact that the paper handed him the biggest gift it had to bestow meant that the Gray Lady was actively supporting the billionaire who set out, not just to punish, but to silence a publication that criticized him and his circle.

Have there been excesses on the left? Progressive intolerance of other ideas? Instances of free expression of ideas being squelched? Of course. It is by no means the cultural epidemic we’ve been warned about, but the right does not have a monopoly on assholes. 

There is room for an intelligent and measured conversation about how “constraints on speech can turn into arbitrary rules with disproportionate consequences,” but this is one topic on which the New York Times should keep its goddamn mouth shut.


Monday, March 28, 2022

Eight years ago at the blog -- R.I.P. old (and better) SAT

 Not the timeliest subject, I'll admit, but the way bad changes are sold to a mathematically illiterate press as improvements is always relevant. 

Wednesday, March 26, 2014

The SAT and the penalty for NOT guessing

Last week we had a post on why David Coleman's announcement that the SAT would now feature more "real world" problems was bad news, probably leading to worse questions and almost certainly hurting the test's orthogonality with respect to GPA and other transcript-based variables. Now let's take a look at the elimination of the so-called penalty for guessing.

The SAT never had a penalty for guessing, not in the sense that guessing lowered your expected score. What the SAT did have was a correction for guessing. On a multiple-choice test without the correction (which is to say, pretty much all tests except the SAT), blindly guessing on the questions you didn't get a chance to look at will tend to raise your score. Let's say, for example, two students took a five-option test where they knew the answers to the first fifty questions and had no clue what the second fifty were asking (assume they were in Sanskrit). If Student 1 left the Sanskrit questions blank, he or she would get fifty points on the test. If Student 2 answered 'B' to all the Sanskrit questions, he or she would probably get around sixty points.

From an analytic standpoint, that's a big concern. We want to rank the students based on their knowledge of the material, but here we have two students with the same mastery and a ten-point difference in scores. Worse yet, let's say we have a third student who knows a bit of Sanskrit and manages to answer five of those questions, leaving the rest blank, thus making fifty-five points. Student 3 knows the material better than Student 2, but Student 2 makes a higher score. That's pretty much the worst-case scenario for a test.

Now let's say that we subtracted a fraction of a point for each wrong answer -- 1/4 in this case, 1/(number of options - 1) in general -- but not for a blank. Now Student 1 and Student 2 both have fifty points while Student 3 still has fifty-five. The lark's on the wing, the snail's on the thorn, the statistician has rank-ordered the population and all's right with the world.

[Note that these scales are set to balance out for blind guessing. Students making informed guesses ("I know it can't be 'E'") will still come out ahead of those leaving a question blank. This too is as it should be.]
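If you want to check the arithmetic yourself, here's a minimal simulation of the scoring rules described above (the three students and the five-option test are the ones from the example; the code is mine, not part of the original post):

```python
import random

def expected_score(n_known, n_guessed, n_options=5, penalty=0.25, trials=20_000):
    """Monte Carlo estimate: one point per question answered correctly,
    blind guesses on n_guessed questions, `penalty` deducted per wrong answer."""
    total = 0.0
    for _ in range(trials):
        score = float(n_known)
        for _ in range(n_guessed):
            if random.randrange(n_options) == 0:  # blind guess, right 1/n_options of the time
                score += 1.0
            else:
                score -= penalty
        total += score
    return total / trials

# Without the correction, Student 2's blind guessing beats Student 1's blanks...
print(expected_score(50, 50, penalty=0.0))   # ~60, vs. Student 1's flat 50
# ...with the 1/4-point correction, guessing and leaving blanks come out even,
# and Student 3's five real answers still rank on top.
print(expected_score(50, 50, penalty=0.25))  # ~50
print(expected_score(55, 0, penalty=0.25))   # exactly 55
```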

You can't really say that Student 2 has been penalized for guessing since the outcome for guessing is, on average, the same as the outcome for not guessing. It would be more accurate to say that 1 and 3 were originally penalized for NOT guessing.

Compared to some of the other issues we've discussed regarding the SAT, this one is fairly small, but it does illustrate a couple of important points about the test. First, the SAT is a carefully designed test, and second, some of the recent changes aren't nearly so well thought out.

Thursday, March 27, 2014

On SAT changes, The New York Times gets the effect right but the direction wrong

That was quick.

Almost immediately after posting this piece on the elimination of the SAT's correction for guessing (The SAT and the penalty for NOT guessing), I came across this from Todd Balf in the New York Times Magazine.
Students were docked one-quarter point for every multiple-choice question they got wrong, requiring a time-consuming risk analysis to determine which questions to answer and which to leave blank. 
I went through this in some detail in the previous post, but for a second opinion (and a more concise one), here's Wikipedia:
The questions are weighted equally. For each correct answer, one raw point is added. For each incorrect answer one-fourth of a point is deducted. No points are deducted for incorrect math grid-in questions. This ensures that a student's mathematically expected gain from guessing is zero. The final score is derived from the raw score; the precise conversion chart varies between test administrations.

The SAT therefore recommends only making educated guesses, that is, when the test taker can eliminate at least one answer he or she thinks is wrong. Without eliminating any answers one's probability of answering correctly is 20%. Eliminating one wrong answer increases this probability to 25% (and the expected gain to 1/16 of a point); two, a 33.3% probability (1/6 of a point); and three, a 50% probability (3/8 of a point). 
You could go even further. You don't actually have to eliminate a wrong answer to make guessing a good strategy. If you have any information about the relative likelihood of the options, guessing will have positive expected value.
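Spelled out in code, following the numbers in the Wikipedia excerpt (a quick sketch of my own, not from either source):

```python
from fractions import Fraction

def expected_gain(eliminated, n_options=5, penalty=Fraction(1, 4)):
    """Expected points from guessing after ruling out `eliminated` wrong options."""
    p_correct = Fraction(1, n_options - eliminated)
    return p_correct - (1 - p_correct) * penalty

for k in range(4):
    print(k, expected_gain(k))
# 0 -> 0   (the correction makes blind guessing a wash)
# 1 -> 1/16, 2 -> 1/6, 3 -> 3/8   (matching the figures quoted above)
```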

The result is that, while time management for a test like the SAT can be complicated, the rule for guessing is embarrassingly simple: give your best guess for questions you read; don't waste time guessing on questions that you didn't have time to read.

The risk analysis actually becomes much more complicated when you take away the penalty for guessing. On the ACT (or the new SAT), there is a positive expected value associated with blind guessing and that value is large enough to cause trouble. Under severe time constraints (a fairly common occurrence with these tests), the minute it would take you to attempt a problem, even if you get it right, would be better spent filling in bubbles for questions you haven't read.
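To see how lopsided that tradeoff can get, here's a rough sketch; the bubbling rate and the one-minute solve time are illustrative assumptions of mine, not numbers from the post:

```python
# A four-option test with no deduction for wrong answers, ACT-style.
p_blind = 1 / 4           # chance a blind guess is right
bubbles_per_minute = 10   # assumed rate for bubbling unread questions
solve_time = 1.0          # assumed minutes to work one problem correctly

points_per_minute_solving = 1 / solve_time                 # 1.0
points_per_minute_bubbling = p_blind * bubbles_per_minute  # 2.5

print(points_per_minute_solving, points_per_minute_bubbling)
```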

Putting aside what this does to the validity of the test, trying to decide when to start guessing is a real and needless distraction for test takers. In other words, just to put far too fine a point on it, the claims about the effects of the correction for guessing aren't just wrong; they are the opposite of right. The old system didn't require time-consuming risk analysis but the new one does.

As I said in the previous post, this represents a fairly small aspect of the changes in the SAT (loss of orthogonality being a much bigger concern). Furthermore, the SAT represents a fairly small and perhaps even relatively benign part of the story of David Coleman's education reform initiatives. Nonetheless, this one shouldn't be that difficult to get right, particularly for a publication with the reputation of the New York Times.

Of course, given that this is the second recent high-profile piece from the paper to take an anti-SAT slant, it's possible certain claims weren't vetted as well as others.

Friday, March 25, 2022

Free speech for the first speaker

This is Joseph

In the process of pointing out actual laws limiting free speech, the New York Times manages this interesting perspective

Liberals — and anyone concerned with protecting free speech — are right to fight against these pernicious laws. But legal limits are not the only constraints on Americans’ freedom of speech. On college campuses and in many workplaces, speech that others find harmful or offensive can result not only in online shaming but also in the loss of livelihood. Some progressives believe this has provided a necessary, and even welcome, check on those in power. But when social norms around acceptable speech are constantly shifting and when there is no clear definition of harm, these constraints on speech can turn into arbitrary rules with disproportionate consequences

I think that this perspective is shifting between "free speech" and "consequence free speech". The first is an important way to introduce new thinking and to allow for political discourse. The second gives enormous privilege to the first speaker. 

That said, there is a lot of nuance here. For example, when teaching I can occasionally adopt an unpopular position (that I don't necessarily hold) to force students to engage with criticism. When I discussed IRBs this way, it ended up on my teaching evaluations. That pushed me to improve how I taught the topic, as offending students is rarely the way to force them to critically evaluate assumptions (even if only to strengthen them and ground them in principle).

It is common to want the rules to favor your own viewpoint. Newspaper opinion writers have a challenging job in which they have to create provocative opinions to generate interest but are very visible if there is a backlash. I can see why they would prefer not to face consequences for misjudgments, and I am sympathetic to wanting economic security.

But the marketplace of ideas requires a robust culture of criticism. This is a fine paragraph until the last sentence:

The progressive movement in America has been a force for good in many ways: for social and racial justice, for pay equity, for a fairer system and society and for calling out hate and hate speech. In the course of their fight for tolerance, many progressives have become intolerant of those who disagree with them or express other opinions and taken on a kind of self-righteousness and censoriousness that the right long displayed and the left long abhorred. It has made people uncertain about the contours of speech: Many know they shouldn’t utter racist things, but they don’t understand what they can say about race or can say to a person of a different race from theirs. Attacking people in the workplace, on campus, on social media and elsewhere who express unpopular views from a place of good faith is the practice of a closed society.

How do you tell that the views are coming from a place of good faith? "I am just asking questions" is a powerful bad-faith rhetorical technique. Assessment of good or bad faith is inherently subjective, and there are some cases where it just strains plausibility.

For example, Loving v. Virginia (a landmark civil rights case allowing for interracial marriage) came up in the confirmation hearings of Ketanji Brown Jackson. For context, Judge Jackson is in an interracial marriage (as are many prominent figures in government and the judiciary, like Justice Thomas and Mitch McConnell). Consider:

When asked by a reporter whether he would consider the Supreme Court potentially striking down Roe this year to be “judicial activism,” Braun said he thought what justices did in 1973 to pass Roe was “judicial activism.”

“That issue should have never been federalized, [it was] way out of sync I think with the contour of America then,” he said. “One side of the aisle wants to homogenize [issues] federally, [and that] is not the right way to do it.”

Individual states, he said, should be able to weigh in on these issues “through their own legislation, through their own court systems.”

The same reporter asked Braun whether he would apply the same judgment to Loving, and Braun said “yes.”

“I think that that’s something that if you’re not wanting the Supreme Court to weigh in on issues like that, you’re not going to be able to have your cake and eat it too,” he said. “I think that’s hypocritical.”

Now is it inappropriate to wonder why this question was asked of Judge Jackson and not Judge Barrett? Is it because people assume that Judge Barrett is against Loving v. Virginia? Or because it is settled law with broad national support and was a tool to try to rattle a candidate based on their personal situation? Or because Senator Braun was just a curious person who happened to bring up an example that shows that he holds very polarizing opinions?

I think that it is reasonable for people to critique the decision of how to spend the limited time in a Supreme Court confirmation. In other words, like a classical liberal, I am strongly in favor of Senator Braun being free to speak his mind, but I am also free to change my opinion of his character (in a positive or negative way) based on his speech. Similarly, the whole point of Judge Jackson's confirmation hearing is to try to change people's opinion of her (reassuring political allies and converting political foes, one presumes).

So I think that this perspective on free speech is not really advancing the conversation but rather erring on the side of being a bit too kind to provocative speakers. It's a big privilege to be able to be provocative (many places have not allowed it at all), but the only way it makes sense is paired with the ability to respond freely to the original speech.

Thursday, March 24, 2022

Twelve years ago at the blog -- part II

Wednesday, March 31, 2010

Blockbusters, Franchises and Apostrophes

More on the economics of genre fiction

The story so far: last week Andrew Gelman had a post on a book that discussed the dominance of thrillers on best seller lists and suggested that it was due to their increased quality and respectability. I argued that the quality and respectability had, if anything, decreased (here), posted some background information (here and here), then discussed how the economics of publishing from the late Nineteenth Century through the Post-War era had influenced genre fiction. The following closes with a look at where we are now and how the current state of the market determines what we're seeing at the bookstore.

As the market shrank in the last part of the Twentieth Century, the pay scale shifted to the feast and (mostly) famine distribution of today. (The century also saw a similar shift for musicians, artists and actors.) Non-paying outlets sprang up. Fan fiction emerged (non-licensed use of characters had, of course, been around for years -- Tijuana bibles being a classic example -- but fan fiction was written for the author's enjoyment without any real expectation of payment). These changes are generally blamed on the internet, but the conventional wisdom is at least a couple of decades off. All of these trends were well established by the Seventies.

With the loss of the short story market and the consolidation of publishing, the economics of writing on spec became brutal. Writing and trying to sell a novel represents a tremendous investment of time and energy with little hope of success. By comparison writing on spec in the Forties meant coming up with twelve to fifteen pages then sending them off to twenty or so potential markets. The best of these markets paid good money; the worst were hungry for anything publishable.

The shift from short story to novel also meant greater risk for the publisher (and, though we don't normally think of it in these terms, for the reader who also invested money and time). A back-pages story that most readers skipped over might hurt the sales and reputation of a magazine slightly, but as long as the featured stories were strong, the effect would be negligible. Novels, though, are free-standing, and the novel that gets skipped over is the novel that goes unsold.

When Gold Medal signed John D. MacDonald, they knew they were getting a skilled, prolific writer with a track record of artistically and commercially successful short fiction. The same could be said about the signing of Donald Westlake, Lawrence Block, Joe Gores and many others. Publishing these first-time authors was a remarkably low-risk proposition.

Unfortunately for publishers today, there are no potential first-time authors with those resumes. Publishers now have to roll the dice on inexperienced writers of unknown talent and productivity. In response to that change, they have taken various steps to mitigate the risk.

One response was the rise of the marketable blockbuster. The earliest example I can think of is the book Lace by Shirley Conran. If memory serves, Lace got a great deal of attention in the publishing world for Conran's huge advance, her lack of fiction-writing experience, and the role marketing played in the process. The general feeling was that the tagline ("Which one of you bitches is my mother?") came first while the book itself was merely an afterthought.

More recently we have Dexter, a marketer's dream ("He's a serial killer who kills serial killers... It's torture porn you can feel good about!"). The author had a few books on his resume, but nothing distinguished. The most notable was probably a collaboration with Star Trek actor Michael Dorn. The first book in the series, Darkly Dreaming Dexter, was so poorly constructed that all of the principals had to act completely out of character to resolve the plot (tip for new authors: when a character casually overlooks her own attempted vivisection, it's time for a rewrite*).

The problems with the quality of the novel had no apparent effect on sales, nor did they prevent the character from appearing in a successful series of sequels and being picked up by Showtime. (The TV show was handled by far more experienced writers who managed to seal up almost all of the plot holes.)

The point here is not that Darkly Dreaming Dexter was a bad book or that publishing standards have declined. The point is that the economics have changed. Experienced fiction writers are more rare. Marketable concepts and franchises are more valuable, as is synergy with other media. The markets are smaller. There are fewer players. And much of the audience has a troublesome form of brand loyalty.

Normally, of course, brand loyalty is a plus, but books are an unusual case. If you convince a Coke drinker to also drink Sprite, you probably won't increase his overall soda consumption; you'll just have cannibalization. But readers who stick exclusively with one writer are severely underconsuming. Convince James Patterson readers to start reading Dean Koontz and you could double overall sales.

When most readers got their fiction either through magazines or by leafing through paperback racks, it was easy to introduce them to new writers. Now the situation is more difficult. One creative solution has been apostrophe series such as Tom Clancy's Op Center. Other people are credited with actually writing the books but the name above the title is there for branding purposes.

Which all leads us back to the original question: Why did thrillers become so dominant?

They tend to be easily marketable.

They are compatible with franchises.

They lend themselves to adaptation as big budget action movies.

Their somewhat impersonal style makes them suitable for ghosting or apostrophe branding.

They are, in short, what the market is looking for. As for me, I'm looking for the next reprint from Hard Case, but I might borrow the latest Turow after you're done with it.


* "Is that a spoiler?"
"No, sir. It was spoiled when I got here."

Wednesday, March 23, 2022

Twelve years ago at the blog -- part I

Tuesday, March 30, 2010

The Decline of the Middle (Creative) Class

I suggested in an earlier post that the rise to dominance of the thriller had not been accompanied by a rise in quality and reputation. In this and the next post, I'll try to put some foundations under this claim.

Popular art is driven by markets and shifts in popular art can always be traced back, at least partly, to economic, social and technological developments as well as changes in popular taste. The emergence of genre fiction followed the rise of the popular magazine (check here for more). Jazz hit its stride as the population started moving to cities. Talking pictures replaced silents when the technology made them possible.

Crime fiction, like science fiction, first appeared in response to demand from general interest magazines like the Strand, then moved into genre-specific magazines like Black Mask and, a few years later, cheap paperbacks. The demand for short stories was so great that even a successful author like Fitzgerald saw them as a lucrative alternative to novels. There was money to be made and that money brought in a lot of new writers.

It seems strange to say it now but for much of the Twentieth Century, it was possible to make a middle class living as a writer of short fiction. It wasn't easy; you had to write well and type fast enough to melt the keys but a surprisingly large number of people managed to do it.

Nor were writers the only example of the new creative middle class. According to Rosy McHargue (reported by music historian Brad Kay), in 1925 there were two hundred thousand professional musicians in the United States. Some were just scraping by, but many were making a good living. (Keep in mind that many restaurants, most clubs and all theaters had at least one musician on the payroll.) Likewise, the large number of newspapers and independent publishers meant lots of work for graphic artists.

I don't want to wax too nostalgic for this era. Sturgeon's Law held firmly in place: 90% of what was published was crap. But it was the market for crap that made the system work. It provided the freelance equivalent of paid training -- writers could start at least partially supporting themselves while learning their craft -- and it mitigated some of the risk of going into the profession: even if you turned out not to be good enough, you could still manage food and shelter while you were failing.

It was also a remarkably graduated system, one that rewarded quality while making room for the aforementioned crap. The better the stories the better the market and the higher the acceptance rate. In 1935, Robert E. Howard made over $2,000 strictly through magazine sales. Later, as the paperback market grew, writers at the very top like Ray Bradbury or John O'Hara would also see their stories collected in book form.

Starting with Gold Medal Books, paperback originals became a force in 1950. This did cut into the magazine market and hastened the demise of the pulps but it made it easier than ever before to become a novelist. It was more difficult (though still possible) to make a living simply by selling short stories, but easier to make the transition to longer and more lucrative works.

It was, in short, a beautifully functioning market with an almost ideal compensation system for a freelance-based industry. It produced some exceptionally high quality products that have generated billions of dollars and continue to generate them in resales and adaptations (not to mention imitations and unlicensed remakes). This includes pretty much every piece of genre fiction you can think of written before 1970.

The foundation of that system, the short story submarket, is essentially dead, and the economics and business models of the rest of the publishing industry have changed radically, leading to the rise of marketing, the blockbuster mentality and what I like to call the Slim Jim conundrum.

Tune in next time.

Tuesday, March 22, 2022

"The explainers I found were bad — boring, biased, inaccessible" and if they can do it, why can't I?


This is bad, but it's bad in an instructive way for anyone who would like to understand the coverage of crypto, NYT's credulous attitude toward tech trends, bothsidesing, and the career of Kevin Roose. 
It would probably take more than 14,000 words to go through all the problems with this. I'm trying to get Joseph in on the discussion, but even between the two of us, I doubt we can do a comprehensive job of it. For tonight, the best I can do is give you a quick tweet-level view. 

[Before we go on, the phrase "the casino is a trojan horse with a new financial system hidden inside” needs to be singled out for special praise.]


[When I said "Given enough time and money, there's no question something along these general lines would work," I meant a maglev vactrain. Musk's air-caster idea was so bad that all the "hyperloop" companies quietly dropped it immediately. Not a penny of those hundreds of millions in capital raised has been spent developing Musk's actual proposal. All they kept was the name.]

"A balanced treatment of an unbalanced phenomenon distorts reality." — Norm Ornstein


Every good tech puff piece has to have a historical example of skeptics doubting some earlier revolutionary technology. These examples generally range from unrepresentative to complete bullshit. 

Here's a more thoughtful thread on this section.

Editors also love to see some hot strange bedfellows action, even if you have to fudge a few details.

Nor does the piece end on a strong note.


We have by no means mined out Roose's explainer -- everywhere you swing a pick, you're likely to hit paydirt -- and we seem to be less than halfway through the project.

Monday, March 21, 2022

Someone saw Double Indemnity and thought "what a great concept for a life insurance ad"


Sure, we've all thought it, but we didn't expect them to just come out and say it.




There's a long and very mixed history of edgy advertising. If done with a light touch, it can be effective. Secret Weapon, until recently the company behind Jack in the Box's long-running campaigns, was a master of hitting just the right balance.




Really edgy ad campaigns are often built around the theory that name recognition, particularly for a new company, is necessary and initially sufficient, and that the best way to achieve it is with an ad that gets everybody talking. Apple's "1984" was only tangentially related to the Macintosh, and yet it routinely lands on lists of the best television commercials.

Of course, "1984" was in one sense highly traditional. It associated the product with positive things: freedom, self-expression, empowerment. What happens when a campaign focuses on nothing but making an impression?

We have at least one prominent example that debuted during the 1999 Super Bowl. Anybody remember Outpost.com?




At least in this case, things didn't work out that well.


Friday, March 18, 2022

Weekend Web3 notes

First, Ben McKenzie gives an excellent interview on crypto from the very heart of the trendiness beast. 




Next is a conversation with a software and dev ops engineer named Geoffrey Huntley who has "stolen" all the NFTs available through the Ethereum and Solana blockchains. It's all part of a satirical art project to educate people about what NFTs are and, more importantly, are not. Coffeezilla is a YouTuber who specializes in uncovering get-rich-quick gurus who make up much of the advertising on the platform (including on his own channel because that's how the algorithm works).





The innate absurdity of stealing NFTs was also the subject of this Colbert sketch.






We'll let John Wick have the last word.



Thursday, March 17, 2022

Two alternatives to daylight saving time, one of which is just silly

First, from XKCD:



I come from a nocturnal family, so we had a strong preference for falling back over springing forward. My father insisted that after losing that hour, he never felt right till he got it back. If anything, I feel the same, only stronger.

We currently have a system that's pretty good in the fall but sucks in the spring, which raises the question: why not just keep the good part? If we gain an hour twice a month, we'll get twenty-four 49-hour weekends, we'll all be well-rested, and at the end of the year, we'll be back where we started.

Who's with me? 
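And for the skeptics, the arithmetic does check out (a two-line sanity check of my own, not part of the original pitch):

```python
# Twenty-four fall-backs of one hour each add up to one full trip around the clock.
hours_gained = 24 * 1
print(hours_gained % 24 == 0)  # True -- the clock ends the year right where it began
```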


Wednesday, March 16, 2022

"The ultimate distillation of the golden age of fraud"

If you're new to Web3 and the hype economy in general (a phrase I apparently no longer have exclusive rights to), this New Republic overview by Ben McKenzie and Jacob Silverman is a good place to start.

In the golden age of fraud, grift sits comfortably alongside the general sense of unreality permeating the American economy. JPEGs sell for millions of dollars and are denominated as a new asset class, despite the many practical and philosophical problems accompanying them. Ephemeral meme stocks and dog-themed crypto tokens outperform deep-pocketed companies that actually, you know, make things. The world’s richest man, Elon Musk, owes his fortune to a state-subsidized electric car company that produces far fewer vehicles than his competitors but is worth more than almost all of them put together. (Although it must be said that Musk’s rivals often aren’t much better: Volkswagen, the number-two car company in the world by revenue, cooked the books for years by rigging emission-detection software.)

...

The cryptocurrency industry may be the ultimate distillation of the golden age of fraud. Like Uber, it’s benefited from vast sums of cheap investment capital and pliant public officials easily charmed by new technologies. But whereas Uber provides a clear service, albeit at the expense of underpaid gig workers, the use cases for crypto remain uncertain, even as it drapes itself in utopian rhetoric about financial revolution. Wild volatility, a lack of payments infrastructure, rampant scams, and technical complexity make crypto an unappealing choice to be a real currency or store of value. And its environmental impact and Ponzi-like economics—new investors are required to buy out the old—mean that it may actually be a negative-sum game.

In a hype economy built on froth, virality, misinformation, and celebrity endorsements, crypto has no apparent utility besides being a source of risky speculation. As economists from both the left (Krugman, Roubini) and right (Hanke) have pointed out, crypto has no inherent value except what another person might pay for it. In economics, this is referred to as the “greater fool” theory. At its base, crypto is private money (an outdated notion from the nineteenth century) that largely runs on rails purposefully set up to be outside the banking system and away from those pesky government authorities with their annoying focus on transparency and the rule of law. Its value is a collective hallucination, dependent on constant salesmanship and, in some cases, deception and market manipulation.

Tuesday, March 15, 2022

Tuesday Tweets

I'm amazed/appalled at how little attention these stories are generating. It's almost as if the interest of journalists is inversely proportionate to the level of extremism.






Musk: unhinged tweeting at... 12:48 AM... 1:31 AM... 4:09 AM...





You know you're a true believer when you brag about how easy it is to repo your car.









You'll be surprised to learn that people who shop at the Beverly Center aren't all that price sensitive.




Of course that's what you'd expect from liberals like Boehlert and the... uh... Wall Street Journal.



Accusations that a political party is in the pocket of a foreign power have historically been the go-to examples of a witch hunt, but in this remake of the Crucible, the case against the witches is actually pretty convincing.














Marshall makes THE essential point about Putin's show of weakness in Ukraine.
More here.










Misc.










Monday, March 14, 2022

Twelve years ago at the blog -- did not expect to run across Trump

He's only tangentially related to the topic of the post, but his appearance does give this the slightly unnerving quality you want in a Monday post.

The actual topic of the following is brands and how to value them. A couple of the references have aged badly (does anyone even remember John Edwards?). On the whole though, I think it holds up. Apple is still Apple. Clorox is still Clorox. And Trump still has unconvincing hair. 

Tuesday, March 2, 2010

Comparing Apples and Really Bad Toupees

DISCLAIMER: Though I have worked in some related areas like product launches, I have never done an analysis of brand value. What follows are a few thoughts about branding without any claim of special expertise or insight. If I've gotten something wrong here I would appreciate any notes or corrections.

Joseph's post reminded me of this article in the Wall Street Journal about the dispute between Donald Trump and Carl Icahn over the value of the Trump brand. Trump, not surprisingly, favors the high end:
In court Thursday, Mr. Trump boasted that his brand was recently valued by an outside appraiser at $3 billion.

In an interview Wednesday, Mr. Trump dismissed the idea that financial troubles had tarnished his casino brand. He also dismissed Mr. Icahn's claims that the Trump gaming brand was damaged, pointing to a recent filing in which Mr. Icahn made clear that he wants to assume the license to the brand. "Every building in Atlantic City is in trouble. OK? This isn't unique to Trump," he said. "Everybody wants the brand, including Carl. It's the hottest brand in the country."
While Icahn's estimate is a bit lower:
Mr. Icahn, however, believes his group also would have the right to use the Trump name under an existing licensing deal, but says the success of the casinos don't hinge on that. The main disadvantage to losing the name, he says, would be the $15 million to $20 million cost of changing the casinos' signs.
So we can probably put the value of the Trump brand somewhere in the following range:

-15,000,000 < TRUMP ≤ 3,000,000,000

Neither party here is what you'd call trustworthy and both are clearly pulling the numbers they want out of appropriate places but they are able to make these claims with straight faces partly because of the nature of the problem.

Assigning a value to a brand can be a tricky thing. Let's reduce this to pretty much the simplest possible case and talk about the price differential between your product and a similar house brand. If you make Clorox, we're in pretty good shape. There may be some subtle difference in quality between your product and, say, the Target store brand, but it's probably safe to ignore it and ascribe the extra dollar consumers pay for your product to the brand effect.

But what about a product like Apple Computers? There's clearly a brand effect at work, but in order to measure the price differential we have to decide what products to compare them to. If we simply look at specs, the brand effect is huge, but Apple users would be quick to argue that they were also paying for high quality, stylish design and friendly interfaces. People certainly pay more for Macs, iPods, iPhones, and the rest, but how much of that extra money is for features and how much is for brand?

(full disclosure: I use a PC with a dual Vista/Ubuntu operating system. I do my programming [Python, Octave] and analysis [R] in Ubuntu and keep Vista for compatibility issues. I'm very happy with my system. If an Apple user would like equal time we'd be glad to oblige)

I suspect that more products are closer to the Apple end of this spectrum than the Clorox end, but even with things like bleach, all we have is a snapshot of a single product. To be useful, we need to estimate the long-term value of the brand. Is it a Zima (assuming Zima was briefly a valuable brand) or is it a Kellogg's Corn Flakes? And we would generally want a valuation that could cover multiple products. How do we measure the impact of a brand on products we haven't launched yet? (This last point is particularly relevant for Apple.)

The short answer is you take smart people, give them some precedents and some guidelines then let them make lots of educated guesses and hope they aren't gaming the system to tell you what you want to hear.

It is an extraordinarily easy system to game even with guidelines. In the case of Trump's casinos we have three resorts, each with its own brand that interacts in an unknown and unknowable way with the Trump brand. If you removed Trump's name from these buildings, how would it affect the number of people who visit or the amount they spend?

If we were talking about Holiday Inn or even Harrah's, we could do a pretty good job estimating the effect of changing the name over the door. We would still have to make some assumptions but we would have data to back them up. With Trump, all we would have is assumption-based assumptions. If you take these assumptions about the economy, trends in gambling and luxury spending, the role of Trump's brand and where it's headed, and you give each one of them a small, reasonable, completely defensible nudge in the right direction, it is easy to change your estimates by one or two orders of magnitude.
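Here's a toy version of that compounding; the ten assumptions and the 50% nudges are numbers I made up purely for the sake of the arithmetic:

```python
from math import prod

base = 10_000_000              # hypothetical starting valuation
neutral = [1.0] * 10           # ten assumptions taken at face value
nudged = [1.5] * 10            # each given a "defensible" 50% upward nudge

print(f"{base * prod(neutral):,.0f}")  # 10,000,000
print(f"{base * prod(nudged):,.0f}")   # ~576,650,391 -- the nudges multiply
```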

We also have an unusual, possibly even unique, range-of-data problem. Many companies have tried to build a brand on a public persona, sometimes quite successfully. Normally a sharp business analyst would be in a good position to estimate the value of one of these brands and answer questions like "if Wayne Gretzky were to remove his name from this winter resort, what impact would it have?"

The trouble with Trump is that almost no one likes him, at least according to his Q score. Most persona-based brands are built upon people who were at some point well-liked and Q score is one of the standard metrics analysts use when looking at those brands. Until we get some start-ups involving John Edwards and Tiger Woods, Mr. Trump may well be outside of the range of our data.

Friday, March 11, 2022

I'm not recommending this SNL clip because it's funny. In a sense, I'm recommending it because it's not.

Yes, I know. Complaining that Saturday Night Live isn't funny anymore has been a cliche for longer than most of the cast members have been alive, but that's not really where I'm going with this. For one thing, it's not a question of "anymore." The show never was consistently amusing or even interesting. That was never the point.

For a few years back in the 70s, Saturday Night Live did hit a very sweet spot, sitting at the intersection of the conceptual comedy movement of people like Steve Martin and Andy Kaufman on one side and the rise of the Second City school of sketch comedy on the other. It also benefitted from an early association with talents like George Carlin, Richard Pryor and Buck Henry.

Lorne Michaels’ initial idea was to rip off National Lampoon's stage and radio shows, keeping much of the cast and many of the writers. The material was toned down for television (including adding the Muppets to the original line-up), but it was clearly an unlicensed Lampoon TV show. That initial concept burned itself out fairly quickly and was definitely showing its age even before the 1980 reboot, which more or less introduced the current incarnation of the show.

SNL has been an institution for most of its run, and as a rule, intentionally funny institutions are rare. It's true that lots of incredibly talented people have worked there, both in front of and behind the camera (Michaels has a good eye), and they do hit paydirt now and then, but that's not really what the show's about.

Almost since its inception, people have watched SNL so they could talk about what other people were talking about, and since at least the second or third season, the producers have built the show around this. In addition to catchphrases and recurring characters who often didn't need to recur, this has led to an increasing reliance on topical sketches that check off familiar figures and events, where the laughs come less from the jokes and more from the sense of recognition.

Sketches like this.



This follows a very old tradition. Don Jr. rubbing his nose and looking for a bathroom with a mirrored countertop is the 2022 version of the famously drunk character walking out with an ice pack on his head.

We've talked before about how bad art can often be more useful than great art. Someone like Shakespeare will see things that their contemporaries miss, which pretty much by definition makes him or her unrepresentative.

This sketch, by comparison, is the exact opposite: bad but representative of its moment. Some talented performers manage a few real laughs, but on the whole, it's just a bunch of walk-ons closed-captioned for the comically impaired. The jokes are simply characters saying out loud obvious things about themselves:

Fox News has been pushing Russian propaganda about Ukraine and is now desperately trying to backpedal since the position has become toxic;

Tucker Carlson is a smug, preppie racist;

Trump is a babbling idiot;

Fox viewers are old;

And so on.

There's no real imagination here, just a checklist of people and incidents associated with conservative media and the war in Ukraine, but it is that very lack of imagination that makes this useful. The writers made a list of things that they believed their audience would recognize and agree with. They seem to have been successful. The cold open, with nothing to recommend it other than the topic, has gotten buzz, write-ups and over three million views on YouTube.

There is, of course, the bigger question of to what extent we can generalize from the SNL audience to the wider population, but that's a conversation that requires a different set of tools. 

Thursday, March 10, 2022

Not merely a villain but also a fool

Excellent essay by Cory Doctorow on how (and how not to) break up tech giants like Google and the walking anti-trust violation formerly known as Facebook. The opening passage fits nicely with our long-running tech messiah thread.

Science fiction has a longstanding love-hate relationship with the tech tycoon. The literature is full of billionaire inventors, sometimes painted as system-bucking heroes, at other times as megalomanical supervillains.

From time to time, we even manage to portray one of these people in a way that hews most closely to reality: ordinary mediocrities, no better than you or I, whose success comes down to a combination of luck and a willingness to set aside consideration of the needs of others. It’s easy to find such people atop our increasingly steep economic pyramid, but it’s very hard to find any who’ll admit it. There is nothing a successful person hates more than being reminded that “meritocracy” is a self-serving myth, a circular logic that says, “The system puts the best people in charge, and I am in charge, therefore I am the best.”

But while the powerful remain blissfully insulated from the bursting of the meritocratic delusion, public sentiment is increasingly turning against the ultra-wealthy, and in the most interesting way possible. Today, the commercial tyrant isn’t merely seen as a villain, but also as a fool – someone whose greatness is due to an accident of history and a vacancy of mor­als, not the result of a powerful genius gone awry.

It’s a distinction with a difference. If Facebook is Facebook because Mark Zuckerberg is a once-in-a-millennium genius who did what no other could, then our best hope is to somehow gentle the Zuck, bring him into public service, like a caged ET that govern­ment scientists either bribe or torment into working on behalf of the human race. That’s the constitutional monarchy model, the model where we continue to acknowledge the divine right of kings, but bind them to the material plane by draping the king in golden chains of office whose ends are held by an aristocracy that keeps the monarch from getting too frisky.

But if Facebook is Facebook because Zuck got lucky, if he just combined cheap capital with regulatory tolerance for buying out the competition and building a legally impregnable walled garden around his users, then we don’t need Zuck or Facebook. There’s plenty more where he came from, and all we need to do is withdraw the privileges that regulatory forbearance granted him. That’s the republic model, where we get rid of the king and govern ourselves.

Wednesday, March 9, 2022

"Love Me, I'm a Liberal"

A couple of threads converged to remind me of this. First, I've been thinking about the relationships between the young revolutionaries of the sixties and the insurrectionists of today (and, given the typical age of Fox viewers, wondering how much of an overlap there is).

Second, the war in Ukraine and the various Russian scandals that came before it have highlighted longstanding rifts between liberals and the anti-anti-Trump left. This is part of a tradition that goes back to the split over war with Germany in the late thirties and the tankies of the fifties and sixties. 

Phil Ochs' "Love Me, I'm a Liberal" is one of the best and wittiest examples of sixties radical disgust with those just to the right. You can find complete annotated lyrics here, though the song is better listened to than read.




Excerpts:

Intro
In every American community, you have varying shades of political opinion. One of the shadiest of these is the liberals. An outspoken group on many subjects. Ten degrees to the left of center in good times, ten degrees to the right of center if it affects them personally. So here, then, is a lesson in safe logic. 

...

I cheered when Humphrey was chosen
My faith in the system restored
I'm glad that the Commies were thrown out
Of the A.F.L. C.I.O. board
And I love Puerto Ricans and Negros
As long as they don't move next door
So love me, love me, love me, I'm a liberal


Ah, the people of old Mississippi
Should all hang their heads in shame
Now, I can't understand how their minds work
What's the matter don't they watch Les Crane?
But if you ask me to bus my children
I hope the cops take down your name
So love me, love me, love me, I'm a liberal

...

Sure, once I was young and impulsive
I wore every conceivable pin
Even went to socialist meetings
Learned all the old union hymns
Ah, but I've grown older and wiser
And that's why I'm turning you in
So love me, love me, love me, I'm a liberal


That last verse would prove to be prescient. Ochs' generation turned out to be far less committed to these causes than their parents were. He never saw his radical peers become reactionaries. He killed himself in 1976.