Monday, July 18, 2016

Live fast, die young, leave a good-looking resume

It is important to remember that this ageism is part of a larger phenomenon. Though most of us don't have any adult memories of the Post-War period, it was not that long ago that a reported shortage of science and technology workers actually meant a shortage. Companies like Texas Instruments were famous for being very flexible in their hiring and taking anyone who might be good at the job. Now when you hear employers in someplace like Silicon Valley complaining about a "shortage" of workers, what they mean is that they can't find a large, ongoing supply of workers under the age of 30, with exactly the advanced degrees they need from top-ranked universities, who are willing to work 60+ hours with no job security for good but not great money until they burn out and have to be discarded.

It is also important to remember that those under-qualified and over-protected workers of the Post-War era laid the technological foundation on which most of today's Silicon Valley fortunes are based.

From Dice.com:

In 2007, a fresh-faced Mark Zuckerberg famously ruffled feathers among some older colleagues when he suggested that tech companies should not hire people over 30. “Young people are just smarter,” the Facebook chief executive, then 22, told a crowd at Stanford University.

Nearly a decade after the public gaffe, some say little has changed in terms of how older workers are perceived in the tech industry. Despite making recent attempts to diversify their workforces through aggressive initiatives to attract more women and minorities, Silicon Valley firms still wear their disproportionately young ranks like a badge of honor, proudly flaunting a youth-focused culture in which 28 is seen as middle age and 35 over the hill.

While workers over 40 are protected by federal civil rights laws in the United States, the plight of older employees so rarely enters into conversations about workplace discrimination in tech that one would be forgiven for not realizing it’s an issue at all.

In fact, ageism is very prevalent. Just ask Dan Lyons, a technology journalist and writer for HBO’s “Silicon Valley.” As notably chronicled in his recent best-selling book “Disrupted: My Misadventure in the Start-Up Bubble,” Lyons lost his longtime position at Newsweek magazine when he was in his 50s and decided to switch gears by taking a marketing fellowship at the software company HubSpot. In his book, published earlier this year, Lyons describes the startup’s culture as a frat-like circus filled with Nerf gunfights and hookup dens.

To complement the book, Lyons also wrote a LinkedIn post in which he called out tech industry executives for their defiantly ageist rhetoric, including his old boss at HubSpot, who he said once called gray hair and experience “overrated.” The LinkedIn post went viral, and Lyons said it was at that moment that he realized how widespread the problem really is.

“I got this outpouring of emails from people,” Lyons told Dice Insights. “I don’t mean to toot my own horn—I don’t think it’s that the article was so good. It’s just that there are a s–tload of people out there who experienced this. It was upsetting really.”




Friday, July 15, 2016

Cracked: "Why Everybody Wins If Batman & Superman Are Public Domain "

I believe we can safely take this as a piece of devil's advocacy -- Warner Bros. would not really come out ahead on this -- but it's well thought-out and makes some excellent points. For instance, it points out the self-evident absurdity of extending copyrights to encourage creativity when the old laws had recently produced an unprecedented wave of valuable commercial properties. It also points out that being in the public domain has not appreciably hurt, and has arguably helped, characters like Tarzan and Sherlock Holmes.


Thursday, July 14, 2016

Matthew Stewart on McKinsey and Co.

Following up on yesterday's post and laying the groundwork for a Teach for India thread, here's another selection from Matthew Stewart's seminal 2006 essay [Emphasis added]:
The first point to note is that management education confers some benefits that have little to do with either management or education. Like an elaborate tattoo on an aboriginal warrior, an M.B.A. is a way of signaling just how deeply and irrevocably committed you are to a career in management. The degree also provides a tidy hoard of what sociologists call “social capital”—or what the rest of us, notwithstanding the invention of the PalmPilot, call a “Rolodex.”

For companies, M.B.A. programs can be a way to outsource recruiting. Marvin Bower, McKinsey’s managing director from 1950 to 1967, was the first to understand this fact, and he built a legendary company around it. Through careful cultivation of the deans and judicious philanthropy, Bower secured a quasi-monopoly on Baker Scholars (the handful of top students at the Harvard Business School). Bower was not so foolish as to imagine that these scholars were of interest on account of the education they received. Rather, they were valuable because they were among the smartest, most ambitious, and best-connected individuals of their generation. Harvard had done him the favor of scouring the landscape, attracting and screening vast numbers of applicants, further testing those who matriculated, and then serving up the best and the brightest for Bower’s delectation.

Not to put too fine a point on it, but arguably the primary architect of the Common Core initiative, David Coleman, had no relevant education experience, either academic or professional, before being hired to work in the field by McKinsey. He was, however, smart (philosophy degrees from Yale and Oxford), ambitious (Rhodes Scholarship), and the son of a prominent university president.

It is also worth noting that Coleman's opposite number in the U.K. is a former McKinsey man and that the company played a major role in setting up Teach For India.


Wednesday, July 13, 2016

The birth of scientific management

I've had this in the queue for a while -- management gurus figure in a lot of our ongoing threads -- but news of the emergence of Teach For India and similar organizations has recently made the topic more timely. Seeing the same model tried in different countries has brought into high relief some things that were always implicit in the story, such as the role of Taylorism in TFA and in the larger reform movement, particularly through the involvement of McKinsey and Company (more on that soon).

Arguably the leading debunker of scientific management is Matthew Stewart. Here's his account of how Taylor got started.
Management theory came to life in 1899 with a simple question: “How many tons of pig iron bars can a worker load onto a rail car in the course of a working day?” The man behind this question was Frederick Winslow Taylor, the author of The Principles of Scientific Management and, by most accounts, the founding father of the whole management business.

Taylor was forty-three years old and on contract with the Bethlehem Steel Company when the pig iron question hit him. Staring out over an industrial yard that covered several square miles of the Pennsylvania landscape, he watched as laborers loaded ninety-two-pound bars onto rail cars. There were 80,000 tons’ worth of iron bars, which were to be carted off as fast as possible to meet new demand sparked by the Spanish-American War. Taylor narrowed his eyes: there was waste there, he was certain. After hastily reviewing the books at company headquarters, he estimated that the men were currently loading iron at the rate of twelve and a half tons per man per day.

Taylor stormed down to the yard with his assistants (“college men,” he called them) and rounded up a group of top-notch lifters (“first-class men”), who in this case happened to be ten “large, powerful Hungarians.” He offered to double the workers’ wages in exchange for their participation in an experiment. The Hungarians, eager to impress their apparent benefactor, put on a spirited show. Huffing up and down the rail car ramps, they loaded sixteen and a half tons in something under fourteen minutes. Taylor did the math: over a ten-hour day, it worked out to seventy-five tons per day per man. Naturally, he had to allow time for bathroom breaks, lunch, and rest periods, so he adjusted the figure approximately 40 percent downward. Henceforth, each laborer in the yard was assigned to load forty-seven and a half tons of pig iron per day, with bonus pay for reaching the target and penalties for failing.

When the Hungarians realized that they were being asked to quadruple their previous daily workload, they howled and refused to work. So Taylor found a “high-priced man,” a lean Pennsylvania Dutchman whose intelligence he compared to that of an ox. Lured by the promise of a 60 percent increase in wages, from $1.15 to a whopping $1.85 a day, Taylor’s high-priced man loaded forty-five and three-quarters tons over the course of a grueling day—close enough, in Taylor’s mind, to count as the first victory for the methods of modern management.

...

Yet even as Taylor’s idea of management began to catch on, a number of flaws in his approach were evident. The first thing many observers noted about scientific management was that there was almost no science to it. The most significant variable in Taylor’s pig iron calculation was the 40 percent “adjustment” he made in extrapolating from a fourteen-minute sample to a full workday. Why time a bunch of Hungarians down to the second if you’re going to daub the results with such a great blob of fudge? When he was grilled before Congress on the matter, Taylor casually mentioned that in other experiments these “adjustments” ranged from 20 percent to 225 percent. He defended these unsightly “wags” (wild-ass guesses, in M.B.A.-speak) as the product of his “judgment” and “experience”—but, of course, the whole point of scientific management was to eliminate the reliance on such inscrutable variables.

One of the distinguishing features of anything that aspires to the name of science is the reproducibility of experimental results. Yet Taylor never published the data on which his pig iron or other conclusions were based. When Carl Barth, one of his devotees, took over the work at Bethlehem Steel, he found Taylor’s data to be unusable. Another, even more fundamental feature of science—here I invoke the ghost of Karl Popper—is that it must produce falsifiable propositions. Insofar as Taylor limited his concern to prosaic activities such as lifting bars onto rail cars, he did produce propositions that were falsifiable—and, indeed, were often falsified. But whenever he raised his sights to management in general, he seemed capable only of soaring platitudes. At the end of the day his “method” amounted to a set of exhortations: Think harder! Work smarter! Buy a stopwatch!

The trouble with such claims isn’t that they are all wrong. It’s that they are too true. When a congressman asked him if his methods were open to misuse, Taylor replied, No. If management has the right state of mind, his methods will always lead to the correct result. Unfortunately, Taylor was right about that. Taylorism, like much of management theory to come, is at its core a collection of quasi-religious dicta on the virtue of being good at what you do, ensconced in a protective bubble of parables (otherwise known as case studies).

Curiously, Taylor and his college men often appeared to float free from the kind of accountability that they demanded from everybody else. Others might have been asked, for example: Did Bethlehem’s profits increase as a result of their work? Taylor, however, rarely addressed the question head-on. With good reason. Bethlehem fired him in 1901 and threw out his various systems. Yet this evident vacuum of concrete results did not stop Taylor from repeating his parables as he preached the doctrine of efficiency to countless audiences across the country.

...

That Taylorism and its modern variants are often just a way of putting labor in its place need hardly be stated: from the Hungarians’ point of view, the pig iron experiment was an infuriatingly obtuse way of demanding more work for less pay. That management theory represents a covert assault on capital, however, is equally true. (The Soviet five-year planning process took its inspiration directly from one of Taylor’s more ardent followers, the engineer H. L. Gantt.) Much of management theory today is in fact the consecration of class interest—not of the capitalist class, nor of labor, but of a new social group: the management class.
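Since this is nominally a stats blog, it's worth actually redoing Taylor's pig iron arithmetic from the numbers Stewart gives above. A back-of-the-envelope sketch (the framing and variable names are mine, not Stewart's or Taylor's):

```python
# Taylor's pig iron extrapolation, per the passage above:
# ten men loaded 16.5 tons in roughly 14 minutes.
tons_loaded = 16.5
minutes_elapsed = 14
workers = 10
workday_minutes = 10 * 60   # ten-hour day

# Naive extrapolation per man per day -- roughly 70.7 tons,
# which Taylor rounded up to 75.
per_man_per_day = tons_loaded / workers / minutes_elapsed * workday_minutes
print(round(per_man_per_day, 1))   # 70.7

# The quota he actually set was 47.5 tons: an "adjustment" of
# roughly 37 percent off his own 75-ton figure, with no data at
# all behind the size of the discount.
adjustment = 1 - 47.5 / 75
print(round(adjustment * 100, 1))  # 36.7
```

The fudge factor dwarfs everything measured with the stopwatch, which is exactly Stewart's point.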


Tuesday, July 12, 2016

Flashman finds work (again)

[I'm planning a couple of posts on Teach For America's international push. One of the elements I'm planning to cover is the absolutely central role played in the reform movement by the management consulting firm McKinsey & Co., which makes this post from 2011 once again relevant. The list of the company's highlights comes from Barry Ritholtz.]

Back in the late Sixties, George MacDonald Fraser came up with a wonderful idea for a series of comic historical novels. He took Flashman, the villain from the best known example of Britain's beloved school novels (a genre that includes the Mike and Psmith books, much admired by Orwell, and, of course, Harry Potter) and placed him on the scene at every military fiasco of the Nineteenth century from the Charge of the Light Brigade to Custer's Last Stand.

Had Fraser been a student of business instead of military history and had decided to make Flashman a consultant in the late Twentieth Century, his resume might read something like this:

• Advocated side pockets and off-balance-sheet accounting to Enron, becoming known as “the firm that built Enron” (Guardian, BusinessWeek)
• Argued that NY was losing Derivative business to London, and should more aggressively pursue derivative underwriting (Investment Dealers’ Digest)
• General Electric lost over $1 billion after following McKinsey’s advice in 2007 — just before the financial crisis hit. (The Ledger)
• Advising AT&T (Bell Labs invented cellphones) that there wasn’t much future to mobile phones (WaPo)
• Allstate* reduced legitimate auto claims payouts in a McK&Co stratagem (Bloomberg, CNN NLB)
• Swissair went into bankruptcy after implementing a McKinsey strategy (BusinessWeek)
• British railway company Railtrack was advised to “reduce spending on infrastructure” — leading to a number of fatal accidents, and a subsequent collapse of Railtrack. (Property Week, the Independent)


* Update: Here's a bit more on the Good Hands People, part of our ongoing "How to Lie with Statistics" series (more examples here and here).

Monday, July 11, 2016

One reason there were so few women in classic cartoons

I always assumed it was because the knockabout humor played better with male characters, but animation authority Mark Evanier points out an additional economic motive.


Once upon a time, voice actors working under the Screen Actors Guild contract were paid by the session. The actor received a flat fee for the cartoon whether he did one voice or twenty. Most of the early Hanna-Barbera cartoons had a cast of two: Daws Butler and Don Messick did the Pixie & Dixie cartoons. The Secret Squirrel cartoons were voiced by Paul Frees and Mel Blanc. The Atom Ant cartoons were Howie Morris and Allan Melvin until Howie quit H-B, at which time they became Don Messick and Allan Melvin. Once in a while, they'd spring for a guest voice — usually a woman — but the writers were told not to write in too many female parts so that wasn't necessary. (There are early H-B cartoons where small female roles were voiced by men.)

Most cartoons were done with small casts. The Rocky & Bullwinkle cartoons were voiced by June Foray, Paul Frees, Bill Scott, William Conrad and no one else. I don't think there's a single other actor in any of them…and in some, some of those folks play five or six roles.

In '68, the S.A.G. contract was changed to limit the number of voices one actor could do for one fee. The math changed over the years but it pretty much came down to three voices per session fee per actor. Before, if a cartoon called for twelve speaking parts, you could have Daws and Don each do six and it cost you two session fees. After '68, you were going to have to pay four session fees…so you could pay Daws and Don each two fees per session or for the same money, you could bring in four actors. In most cases then, they would hire four actors.

This made things better for the kind of actor — like Hans Conried or Gary Owens — who couldn't do multiple roles. A voice actor no longer had to be like Blanc, Butler, Messick or Frees — guys who could do a couple hundred different voices. It also increased the opportunities for women since it led to shows having more female characters. And it even led to some of the multi-voiced guys making more money. I wrote a CBS Storybreak once which had a ton of tiny roles and we decided that we didn't want to bring in a parade of voice actors to each do 1-3 lines. It was easier to have Frank Welker do them all so that day, Frank — who was in the studio for about 90 minutes — played twenty characters and was paid for seven sessions.
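Evanier's session-fee arithmetic is easy to check. A minimal sketch of the post-1968 rule as he describes it (the function name and example inputs are my own illustration):

```python
import math

def session_fees(voices_per_actor, voices_per_fee=3):
    """Post-1968 SAG rule as Evanier describes it: one session
    fee per block of up to three voices performed by an actor."""
    return math.ceil(voices_per_actor / voices_per_fee)

# Twelve speaking parts split between two actors, six roles each:
# each actor earns ceil(6 / 3) = 2 fees, four fees total -- the
# same cost as hiring four actors at one fee apiece.
print(session_fees(6))        # 2
print(2 * session_fees(6))    # 4

# Frank Welker's twenty characters in a single 90-minute session:
# ceil(20 / 3) = 7 session fees.
print(session_fees(20))       # 7
```

The incentive flip is visible in the numbers: before 1968 the studio minimized headcount; after, the same budget bought either more actors or more fees for one versatile actor.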

Friday, July 8, 2016

If only they had come up with a cool name


When discussing the Hyperloop, it is essential to keep in mind that, though the basic elements may not have been combined in exactly this form in a widely publicized version before, they have been discussed extensively by countless engineers for a long time now.

From Wikipedia:

Russian professor Boris Weinberg proposed a vactrain concept in 1914 in the book Motion without friction (airless electric way). He also built the world's first model of his proposed transport at Tomsk Polytechnic University in 1909.


I believe the illustration below is from 1917 (but I'm a bit confused by the annotation on the Wikisource page).

Thursday, July 7, 2016

Damn you, Tom Simonite!

Earlier this week, I sent out emails to some transportation researchers asking about Tesla's argument that Autopilot saves lives because the system had one fatality in 130 million miles, while the average for U.S. roads as a whole is one per 94 million. It appeared that those 130 million miles were disproportionately driven under safer-than-average conditions, making the comparison largely meaningless.

Unfortunately, all of the points I wanted to make in my post (and then some) were covered yesterday in this excellent piece by Mr. Simonite of the MIT Technology Review.

Soon after, Tesla’s CEO and cofounder Elon Musk threw out more figures intended to prove Autopilot’s worth in a tetchy e-mail to Fortune (first disclosed yesterday). “If anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available,” he wrote.

Tesla and Musk’s message is clear: the data proves Autopilot is much safer than human drivers. But experts say those comparisons are worthless, because the company is comparing apples and oranges.

“It has no meaning,” says Alain Kornhauser, a Princeton professor and director of the university’s transportation program, of Tesla’s comparison of U.S.-wide statistics with data collected from its own cars. Autopilot is designed to be used only for highway driving, and may well make that safer, but standard traffic safety statistics include a much broader range of driving conditions, he says.

Tesla’s comparisons are also undermined by the fact that its expensive, relatively large vehicles are much safer in a crash than most vehicles on the road, says Bryant Walker Smith, an assistant professor at the University of South Carolina. He describes comparisons of the rate of accidents by Autopilot with population-wide statistics as “ludicrous on their face.” Tesla did not respond to a request asking it to explain why Musk and the company compare figures from very different kinds of driving.

Google has in the past drawn similar contrasts between the track record of its self-driving cars and accident statistics for humans, says Smith. He, Kornhauser, and other researchers argue that companies working on autonomous driving technology need to drop such comparisons altogether. In April, a RAND Corporation report concluded that fatalities and injuries are so rare that it would require an automated car to drive as many as hundreds of billions of miles before its performance could be fairly compared with statistics from the much larger population of human drivers.

Instead researchers say that Tesla and others need to release more data on the limitations and performance of automated driving systems if self-driving cars are to become safe and understood enough for mass market use.

You should read the whole thing.
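One more point, separate from the apples-to-oranges problem the experts raise: with a single observed fatality, the statistic carries almost no information. A quick back-of-the-envelope check (the Poisson framing and the numbers plugged in are my own, not from Simonite's article):

```python
import math

# Tesla: 1 fatality in 130 million Autopilot miles.
# U.S. average: 1 fatality per 94 million miles.
autopilot_miles = 130e6
us_rate = 1 / 94e6   # fatalities per mile

# If Autopilot drivers faced exactly the U.S.-average risk, the
# expected number of fatalities in 130 million miles would be:
expected = autopilot_miles * us_rate
print(round(expected, 2))   # 1.38

# Probability of observing one fatality or fewer under that
# average rate, treating fatalities as a Poisson process:
p = math.exp(-expected) * (1 + expected)
print(round(p, 2))          # 0.6
```

In other words, even if Autopilot were no safer than the average U.S. driver, there would be roughly a 60 percent chance of seeing a record this good or better -- which is the intuition behind the RAND finding that fair comparisons would require hundreds of billions of miles.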

Wednesday, July 6, 2016

The near miss effect -- what compulsive gamblers and tech reporters have in common [repost]

[This piece from 2014 seems once again relevant given our Hyperloop and Tesla threads.]

A few days ago I posted a bit of a rant about how excited tech reporters got over a Google press release about the company's driverless car. The problem was that, based on the details available in the reports, there didn't seem to be any significant indications of increased functionality.

Of course, reporters have a strong incentive to see signs of progress -- "just around the corner" sells better than "don't get your hopes up" -- but I think this eagerness has more than one cause (most things do) and I wonder if one of those factors might have something to do with the near-miss effect as described in this memorable story from This American Life:

Sarah Koenig

Habib and especially Dixon have spent a long time studying what's called the near-miss effect. In slot machines, a near miss is just what it sounds like. It's when, say, two cherries line up on the payoff line, and then the third is about to come but stops just short or just past the payoff line.
...

Sarah Koenig

In 2006, Dixon teamed up with Habib to see if they could figure out what was happening to people neurologically when they saw near misses. They scanned the brains of 22 gamblers-- 11 addicted, or what they called pathological gamblers, and 11 non-pathological gamblers-- as all these people watch near misses on slot machine displays.
The results surprised them. Because while both addicted and non-addicted gamblers said the near misses felt more like wins, their brains said something different. Here's Reza Habib.

Reza Habib

What you see in the non-pathological gamblers is that the regions that are activated for losses, those same regions tend to be also activated for near misses. And so the brain, at least, processes these near misses in the same way that it processes losses in the non-pathological gamblers. In pathological gamblers, the same regions that are activated for wins are also activated for near misses.
And so these include regions such as the amygdala, which is a region involved in emotional processing, as well as parts of the brain stem which are involved in reward and dopamine function, which is part of the reward system. So the pathological gamblers, their brains, at least, are responding to these near misses in the same way that they respond to wins.

Mark Dixon

This is Mark again. And one of the effects of this, or the implications of these data, are that a pathological gambler going into the casino who's actually losing, his brain is firing like he's winning. Disturbing, isn't it.

Sarah Koenig

Yeah. It's crazy.

Mark Dixon

Oh, it's way crazy. And so you are experiencing those same sensations as a win when you're not winning.

Just as casinos are very good at eliciting the reactions associated with winning even when very little winning is going on, companies like Apple and Google have become very good at eliciting the reactions associated with technological progress even when the technology is advancing little and sometimes not at all. In this case, Google assumed correctly that if it showed journalists a prototype that looked different but apparently did nothing new, the reporters would react as if they had seen an advance in the technology. They respond to these meaningless press conferences in much the same way as a pathological gambler responds to two cherries out of three.

If anything, the shift to custom-made, low-speed cars would seem to be a sign of trouble. Google's stated goal is to release this technology to manufacturers in 2018. Whenever you're designing complex systems that have to work with complex systems designed by others, compatibility is usually more than half the battle. I would therefore expect some of the most daunting engineering challenges to come from getting Google's technology to work smoothly with a wide range of makes and models. The decision to go from road testing Toyota Priuses ('Prii,' believe it or not) and Audi TTs to track testing what are basically go-carts does not make the four-year goal look more likely.

What happens if that deadline isn't met? I suspect it will still be business as usual. Google has proven that merely symbolic progress or, failing even that, just bringing the subject up is enough to create the desired effect on public perception. My guess is that the only real deadline is beating GM, Mercedes-Benz, Nissan, or any of the other companies that have equally gifted engineers working on these problems (but with less gifted PR departments to promote them).

Tuesday, July 5, 2016

In that last paragraph you can substitute in SpaceX, Solar City and the Hyperloop

From a very good article by Rick Newman

Tesla never should have said its cars operate on 'autopilot'

Tesla (TSLA) has never sold a self-driving car. No automaker has. The technology isn’t ready for the public yet.

Yet Tesla offers an option on its Model S sedan called “Autopilot,” which apparently factored into a fatal accident on May 7 in central Florida, which the federal government is now investigating. It’s the first known incident in which self-driving automotive technology may have contributed to a fatality, and it comes as Tesla is facing financial strains and is attempting a controversial tie-up with Solar City (SCTY) that shareholders may squash.

...

But Tesla has gone much farther than any other automaker in promoting its self-driving technology, and branding it as a breakthrough – which it’s not, really. “Autopilot” is a term borrowed from the aviation industry, and it refers to planes that are flown almost entirely by computers except during takeoff, landing and unexpected events at altitude. Autonomous technology for cars is nowhere near as advanced as it is for airplanes, which Tesla itself acknowledges, in a fine-print kind of way.

...
The cameras, lasers, sonars and other sensors that gather the info allowing computers to make driving decisions still aren’t as good as ordinary human drivers are at reacting to the unexpected craziness that occurs on America’s roads. At a conference in Boston earlier this year, MIT engineering professor John Leonard showed video footage of several things self-driving technology still can’t master, such as a police officer waving traffic forward through a red light, which contradicts coding that says cars should stop at red lights.

At the same conference, Gil Pratt, CEO of the Toyota Research Institute, said, “We don’t have any autonomous car yet where the car, on average, is better than a person.” Just about all the experts agree that while sensors and computers might be better at piloting a vehicle on a predictable route with no surprises – plus, computers don’t drink, do drugs or get emotional – technology still struggles to make good split-second decisions when something that’s not in the database occurs. Bright light, driving rain, snow and confusing construction detours still confound automotive sensors.

...

The traditional automakers are impressed with the fact that Tesla has brought compelling products to market against steep odds. But they’re also dismissive of Musk’s penchant for hype and his habit of claiming leadership on technologies that Big Auto has been developing for decades, and rolling out cautiously. A safety expert for Volvo, which probably focuses more on safety than any other automaker, told the Verge in April that Tesla’s Autopilot feature is “an unsupervised wannabe…. it gives you the impression it’s doing much more than it is.”

It's hard to believe, but we've been hammering away at the autonomous car story for over five years now, and in all that time, the pattern has remained remarkably consistent.

Ddulite journalists have, with very few exceptions, stuck close to a fundamentally flawed narrative that understates the serious problems with the technology, overstates the impediments of regulation, and fetishizes the role of visionary entrepreneurs.

This fetishizing has reached its zenith with Elon Musk. As I said earlier, the basic scenario described here with Tesla applies to virtually all of the man's enterprises. He has some good ideas, moves quickly, and tends to shake things up in a positive way, but the hype-to-substance ratio is astounding, the businesses invariably run on large checks from the government and still frequently manage to lose money, and the technological advances often owe most of their foundation to other companies.


Monday, July 4, 2016

In lieu of fireworks, here's some data visualization for the 4th

I realize that's probably not how the creator thought of it, but bear with me. Starting with a complex subject, this video presents a great deal of information in a clear and unambiguous fashion. What's more, it holds the interest, is aesthetically pleasing, and can give us real insight into the subject. Isn't that what we look for in data visualization?

(And wouldn't you rather listen to Franz Liszt's Hungarian Rhapsody than some guy talking about GDP?)



Liszt Hungarian Rhapsody 2


 

Friday, July 1, 2016

This is the sound of the sky not falling

From the LA Times:
Fatal crash of Tesla Model S in autopilot mode leads to investigation by federal officials

Federal regulators opened a preliminary probe into the autopilot feature on a Tesla Model S electric car after a fatal crash involving the technology, Tesla said Thursday.

The fatality – thought to be the first in the auto industry related to an autopilot feature – sparked questions about the limitations of the technology and its place in what is seen as an inevitable march toward self-driving vehicles. It followed other recent incidents in which drivers reported collisions while using such technology.

The male driver died in a May 7 crash in Williston, Fla., when a big rig made a left turn in front of his Tesla.

In a blog post, Tesla Motors Inc. said the 2015 car passed under the trailer, with the bottom of the trailer hitting the Model S’ windshield.

“Neither autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied,” Tesla said.

Tesla said it immediately reported the fatal crash to the National Highway Traffic Safety Administration. The automaker emphasized that its autopilot feature is still in a beta phase of introduction and has limitations.

...

The need to maintain control became clear to Aaron Souppouris when he test-drove a Model S in April. Souppouris, a senior editor at the blog Engadget, said Tesla loaned him the car for an article about the autopilot feature and he drove it about 500 miles around England.

There were times at night, he said, when the car went back and forth within a lane and seemed “skittish.”

Once on autopilot, the car tried to change lanes but then reverted back suddenly, and another time it disengaged the autopilot mode in the middle of a lane switch, Souppouris said. The car did better during the day than at night, he said.

I should probably do the smart thing and hold off until we see how things shake out, but I'm going to make a prediction:

This is not going to be that big of a deal.

According to the standard narrative this should be huge; the story has always been one of yet another good-to-go transformative technology kept out of our hands by lawyers and bureaucrats. One of the recurring comments in the reporting was that the first time a self-driving car was involved in a serious accident the result would be a litigious nightmare.

The truth is that the main reason we don't have fully autonomous cars is that parts of the technology have been remarkably hard to crack. Furthermore, the potential economic and environmental impact of driverless cars (level 4 or 5 depending on which scale you prefer) has been overstated, while far-less-hyped semi-autonomous features like adaptive cruise control and automatic braking are already revolutionizing auto safety.

There may be a lawsuit or some regulatory tightening after this, but these things will probably be minor factors (I'd worry more about the PR hit). The main problem facing Tesla is still the inability to turn a profit, and no amount of business-friendly policies will save a company that can't manage that.

Thursday, June 30, 2016

You need to watch this

I don't want to give anything away except to say that the journalistic criticism you'll find here is a damned sight sharper than anything you'll find in the New York Times or in On the Media.







Wednesday, June 29, 2016

This would also be perhaps the first time a question starting with "how does it feel" will produce an interesting answer

As we've observed before, a handful of journalists and pundits have actually managed not only to maintain but to enhance their reputations through the recent primary. Some names that come to mind are Chait, Krugman, Ornstein & Mann and perhaps most of all, Josh Marshall. Over the past year, I don't think anyone has been better at what Orwell would call seeing one's nose in front of one's face than TPM's founder.

In an earlier post, we addressed the unique role that emotional gratification has played in the Trump campaign and speculated on how the candidate might react to the plummeting levels of fun. In a recent post, Marshall goes a step further, from not-having-a-good-time to “psychic disembowelment.”
If you are campaigning on the fact that you're a winner but you're losing, the premise of your campaign just falls apart.

Just as polls created a positive feedback loop for Trump in the primaries, where they seemed to confirm that no transgression or conventional misstep could hurt him, he is now entering a negative feedback loop with the same polls. The perception of losing amplifies every misstep. It makes him lose more both because the premise of the campaign starts to collapse in on itself but, relatedly, the brittle edifice of a narcissistic ego starts to come under an insuperable strain.

Put simply, he gets more erratic.

I suspect that in a couple months this will become the sum of most of the Democrats' attacks on Trump, both because it undermines the central premise of his campaign ("I always win; and I can make you win too.") and because these attacks so visibly cut Trump so deeply, triggering a sort of psychic disembowelment. You can see this in the increasingly irritated and thin-skinned responses to criticism or any references to his flagging campaign efforts.

...

At some point in the not distant future, some reporter - probably a not altogether pleasant one - will ask Trump: "How does it feel to be losing so badly? Just on a personal level? Does it hurt? Do you regret getting into this?" It won't be pretty because Trump's ego is fragile. From there I suspect you'll see it cropping up in campaign attacks from every direction.


As mentioned before, under normal circumstances, psychoanalyzing presidential candidates is almost always a waste of time, but these are clearly special times. Addressing Trump's motives and emotional problems is not only productive; it's practically unavoidable.

Tuesday, June 28, 2016

Brexit Bregrets or Polls as Self-refuting Prophecies

I haven't been closely following either the Brexit in general or the Brexit polling in particular, so I don't want to go out on any limbs speculating about what drove what except to note that this post from Scott Lemieux raises an interesting possibility.
I don’t know how many Brexit voters fall into the remorseful category. But I remember seeing somewhere (HELP ME BROCKINGTON) that a large majority of Brexit voters assumed that Remain would win. For what was surely a decisive number of Brexit voters, the vote was not a considered view that leaving the EU would be better than remaining, but rather was a vehicle for sending a message to British elites. 
I'm not saying that this was a factor but just as an intellectual exercise, try this. Imagine that widely reported polls contributed to the perception that the voters would choose to remain. That in turn created the perception that "leave" was a safe protest vote. Does it make sense to say that the polls were wrong in this context?
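The thought experiment above can be put in toy-model form. Suppose some slice of the electorate prefers Remain on the merits but will cast a protest Leave vote only when they believe Remain is safely ahead. Then the published poll itself moves the result. A minimal sketch (all names and numbers are hypothetical, chosen purely for illustration):

```python
# Toy model of a poll as a self-refuting prophecy.
# sincere_leave: share of voters who genuinely prefer Leave.
# protest_pool: share who prefer Remain but will cast a "safe" protest
#   Leave vote ONLY if they believe polls show Remain comfortably ahead.
# All parameters are hypothetical illustrations, not estimates.

def leave_share(sincere_leave, protest_pool, poll_says_remain_wins):
    """Final Leave vote share given voters' beliefs about the polls."""
    protest_votes = protest_pool if poll_says_remain_wins else 0.0
    return sincere_leave + protest_votes

# Polls report Remain safely ahead, so the protest vote feels costless:
# 0.45 + 0.07 = 0.52, and Leave wins -- "refuting" the poll.
with_safe_poll = leave_share(0.45, 0.07, poll_says_remain_wins=True)

# Same electorate, but polls report a dead heat; the protest vote now
# feels risky, so only sincere Leave voters turn out for Leave: 0.45.
with_close_poll = leave_share(0.45, 0.07, poll_says_remain_wins=False)

print(with_safe_poll > 0.5)   # Leave wins when the poll looked safe
print(with_close_poll > 0.5)  # Remain wins when the poll looked close
```

In this setup the poll was not "wrong" about sincere preferences; it changed the behavior it was measuring, which is the distinction the post is gesturing at.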