Saturday, May 30, 2015

It is a legitimately good sign when a transportation story uses the term “maximum throughput”

Tom Vanderbilt writing for the Washington Post:
But as good as fast-moving roads might be for the individual driver, they are not the best for the most drivers. As data gleaned from in-pavement “loop detectors” on Washington state highways showed, those highways were able to achieve “maximum throughput” — pushing the most cars through one segment of road in a given time — at speeds that were roughly 80 percent of the posted speed limit of 60 mph. Why? At higher speeds, drivers need to allow more “headway” between vehicles, meaning more space is required per vehicle. And faster-moving traffic tends to break down more quickly, with more severe “shock waves”; it takes a lot longer to recover from a traffic jam than to get into one. I have been told, anecdotally, by traffic engineers that the left-hand “passing lane” can become congested first. (I’ll leave it to you to decide if karmic justice is at work there.)

In a “speed harmonization” experiment on Colorado’s I-70, the state highway patrol was able to improve traffic flow by enforcing, via phalanxes of patrol cars, 55 mph speeds — thus preventing the instability caused by people driving fast into packs of congested, slower-moving vehicles.
This goes along with an idea I had about having a fleet of driverless cars use real-time data to selectively slow traffic in some or all lanes in order to reduce the accordion effect. If nothing else, it would be an interesting project to model.
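For anyone tempted to try the modeling, the Nagel-Schreckenberg cellular automaton is a standard toy model of single-lane traffic that spontaneously produces the "shock waves" described above. A minimal sketch (all parameters here -- track length, density, slowdown probability -- are arbitrary illustrations, not anything calibrated to real highways):

```python
import random

TRACK = 100  # circular track length, in cells

def step(road, vmax, p_slow=0.3):
    """One update of the Nagel-Schreckenberg traffic model.
    road maps cell position -> current velocity."""
    positions = sorted(road)
    new_road = {}
    for i, x in enumerate(positions):
        gap = (positions[(i + 1) % len(positions)] - x - 1) % TRACK
        v = min(road[x] + 1, vmax, gap)         # accelerate, but keep headway
        if v > 0 and random.random() < p_slow:  # random slowdowns seed jams
            v -= 1
        new_road[(x + v) % TRACK] = v
    return new_road

def mean_flow(vmax, n_cars=30, steps=500):
    """Average total distance covered per step under a given speed cap."""
    random.seed(0)
    road = {x: 0 for x in range(0, 3 * n_cars, 3)}
    total = 0
    for _ in range(steps):
        road = step(road, vmax)
        total += sum(road.values())
    return total / steps

# Does capping speed (the enforced 55 mph) change throughput at this density?
print(mean_flow(vmax=3), mean_flow(vmax=5))
```

Capping `vmax` plays the role of the patrol cars on I-70; sweeping the cap and the car density and comparing `mean_flow` is exactly the kind of experiment the accordion-effect idea suggests.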

Thanks to Mike for the link.

Friday, May 29, 2015

A statistician walks into a grocery store... repeatedly

Things have gotten busy over at the food blog. The plan was always to go hot and heavy for a month or two. With almost one hundred posts either up or in the queue, I think we're getting there.

The content is a mix of new pieces (like the recent one on chickenless chicken and black bean soup), reposts, links to recent news of interest...

And lots of historical pieces. Some, like this WWII film (highly recommended for anyone interested in the history of nutrition), focus on how we think about food.

Others (Frankenberry, Count Chocula and the late Forry Ackerman) focus on how we have marketed it.

Check it out if you get a chance and please share anything you think is deserving. We've had very little progress on the SEO front.

Adventures in Ithuvania

There's a wonderful Far Side cartoon that shows two scientists addressing a man sitting behind a desk in a sumptuous office. The lead scientist says:

"Sorry, your highness, but you're really not the dictator of Ithuvania, a small European republic. In fact, there is no Ithuvania. The hordes of admirers, the military parades, this office -- we faked it all as an experiment in human psychology. In fact, your highness, your real name is Edward Belcher, you're from Long Island, New York, and it's time to go home, Eddie."

Sometimes, when I come across yet another bit of jaw-dropping flakiness from some tech-bubble billionaire, my thoughts turn to Ithuvania. What if this were an experiment? What if some well-funded research organization decided to see what would happen if it randomly selected individuals of average intelligence, handed them huge checks and told them they were super-geniuses?

I'm not saying that's what happened; I'm just saying the results would have been awfully damned similar.

From Wired:

THE SEASTEADING INSTITUTE was the toast of tech entrepreneurs when it received financial backing from venture capitalist Peter Thiel in 2008. Its mission was to build a manmade island nation where inventors could work free of heavy-handed government interference. One early rendering shows an island raised on concrete stilts in eerily calm waters. The buildings atop the platform resemble nothing so much as the swanky tech campus of an entrepreneur’s ultimate dream: No sign of land or civilization in sight. The island, despite appearing strapped for square footage, has room for a full-size swimming pool with deck lounges.

In a 2009 essay, Thiel described these island paradises as a potential “escape from politics in all its forms.” It wasn’t just desirable, he said. It seemed possible. “We may have reached the stage at which it is economically feasible, or where it will soon be feasible,” he wrote.

More than a half-decade later, the dream has yet to be realized. And optimism is starting to waver. Earlier this year, during a talk at George Mason University, Thiel said, “I’m not exactly sure that I’m going to succeed in building a libertarian utopia any time soon.” Part of the problem: A truly self-sufficient society might exceed the range even of Thiel’s fortune. “You need to have a version where you could get started with a budget of less than $50 billion,” he said.

For its part, The Seasteading Institute has also come to appreciate that the middle of the ocean is less inviting than early renderings suggest. It now hopes to find shelter in calmer, government-regulated waters. According to its most recent vision statement, “The high cost of open ocean engineering serves as a large barrier to entry and hinders entrepreneurship in international waters. This has led us to look for cost-reducing solutions within the territorial waters of a host nation.”

Thiel’s reassessment marks a clear departure from tech culture’s unflinching confidence in its ability to self-govern. In recent years a number of prominent entrepreneurs have urged Silicon Valley to create a less inhibited place for its work. Larry Page called on technologists to “set aside a small part of the world” to test new ideas. Elon Musk has aimed at colonizing Mars. And venture capitalist Tim Draper made a proposal to divide Silicon Valley into its own state. But aside from the continued growth of full-service tech campuses such as Google’s and Facebook’s, very little has been accomplished in the way of true societal independence.

Building a government, it turns out, is a more complex challenge than much of Silicon Valley would have you believe. Now, Thiel and other high-profile Silicon Valley investors are carefully taking stock of the anti-government view they helped popularize. For all Thiel’s open criticism of elected officials, he sounded remarkably like a politician recanting false promises on the stage at George Mason. Toward the end of the talk, he reflected for a moment on his early essay on seasteading. “Writing is always such a dangerous thing,” he said. “It was late at night. I quickly typed it off.”


Thursday, May 28, 2015

The actual future of the driverless vehicle

I've been fairly skeptical of the likelihood of seeing driverless cars in the near future, partially because the economic case was weak, but primarily because the data infrastructure was prohibitively expensive, at least if you wanted a car that could go anywhere (and that flexibility is one of the main reasons for having a car).

Driverless trucks are another story. There you have vehicles with significant labor costs that mostly travel a relatively small number of routes.  Though Daimler is obviously downplaying the possibility, I would not be at all surprised to hear that fully autonomous trucks will be operating in at least a few countries in the next decade.

From NPR.
Automaker Daimler unveiled a truck last week that drives itself, called the Freightliner Inspiration. But the truck is not yet entirely autonomous.

"You still have the driver in the driver's seat, ready to take over at all times in case something goes wrong or the system encounters a situation it's not really well prepared for," says Alex Davies, associate editor for Wired, who was at the demonstration and rode in the big rig.

The driver controls the rig on surface roads, but on the highway, engages the autopilot mode. Cameras detect the lane lines to keep the truck headed in the right direction, Davies tells NPR's Rachel Martin.

"Then from there on, the truck will stay in its lane, maintain its speed and a safe distance from other vehicles," he says. "But you still need to be in the seat, ready to take over."

And being ready to take over means the driver can't exactly take a nap.

When it's time for the driver to take over — at the correct exit or if bad weather hits — the truck alerts the driver with a beep. If the driver doesn't respond, the truck slows and eventually comes to a complete stop, Davies says.

Daimler says the Inspiration, the first self-driving truck licensed to roll on public roads — highways and interstates in Nevada — is the future of trucking and may hit the market before autonomous cars, according to the Associated Press. Drivers will still be human, but might be called "logistics managers."

"The human brain is still the best computer money can buy," said Daimler Trucks North America LLC CEO Martin Daum.

Davies says no automaker will ever use the term "driverless" for a vehicle, preferring the safer-sounding "autonomous" or, in the case of the Freightliner Inspiration, "piloted."

Wednesday, May 27, 2015

"Inequality and Urbanism"

When time permits, I want to get back to this.

From Paul Krugman:
First, when it comes to things that make urban life better or worse, there is absolutely no reason to have faith in the invisible hand of the market. External economies are everywhere in an urban environment. After all, external economies — the perceived payoff to being near other people engaged in activities that generate positive spillovers — is the reason cities exist in the first place. And this in turn means that market values can very easily produce destructive incentives. When, say, a bank branch takes over the space formerly occupied by a beloved neighborhood shop, everyone may be maximizing returns, yet the disappearance of that shop may lead to a decline in foot traffic, contribute to the exodus of a few families and their replacement by young bankers who are never home, and so on in a way that reduces the whole neighborhood’s attractiveness.

On the other hand, however, an influx of well-paid yuppies can help support the essential infrastructure of hipster coffee shops (you can never have too many hipster coffee shops), ethnic restaurants, and dry cleaners, and help make the neighborhood better for everyone.

What does history tell us? Politically, I’d like to say that inequality is bad for urbanism. That’s far from obvious, however. Jane Jacobs wrote The Death and Life of Great American Cities right in the middle of the great postwar expansion, an era of widely shared economic growth, relatively equal income distribution, empowered labor — and collapsing urban life, as white families fled the cities and a combination of highway building and urban renewal destroyed many neighborhoods.

And when a partial urban revival began, it was arguably associated with forces driven by or associated with rising inequality. Affluent types in search of a bit of cool — probably 5 percenters rather than 1 percenters, and more or less David Brooks’s Bobos (bourgeois bohemians) — drove gentrification and revival in urban cores; in New York, at least, large numbers of poorly paid but striving immigrants drove the revival of outer borough neighborhoods like Jackson Heights or Brighton Beach.

"...pretty soon you're talking real money"

Every now and then, it is useful to step back and remind ourselves just how much money is changing hands due to the education reform movement.
For the first time in three decades, a new company is poised to develop and administer the state-required exams Texas students begin taking in the third grade.

The state is in negotiations with Educational Testing Service, or ETS, to take over the bulk of the four-year, $340 million student assessment contract, the Texas Education Agency announced Monday. Company Vice President John Oswald said ETS is "privileged and honored" to land the work. Final contracts are still being negotiated.

The London-based Pearson Education has held the state's largest education-related contract — most recently, a five-year, $468 million deal to provide state exams through 2015 — since Texas began requiring state student assessments in the 1980s. Under the new agreement, the company would still develop the state's assessments designed for special needs and foreign students. That portion of the contract is worth about $60 million.  
Of course, as Diane Ravitch points out, Texas doesn't seem to have cut itself the best deal.
Here is the puzzling question: Why did it cost $468 million for a five-year contract with Pearson when New York State pays Pearson “only” $32 million for a five-year contract? Does New York have smarter negotiators? Does Pearson have better lobbyists in Texas than in New York? Does New York get Texas’s used questions? True, Texas has more children than New York, but not 15 times more. Can anyone explain?

Tuesday, May 26, 2015

Three from our war on science correspondent...

... Mike the Mad Biologist.

Justin Pidot writing for Slate.
Imagine visiting Yellowstone this summer. You wake up before dawn to take a picture of the sunrise over the mists emanating from Yellowstone hot springs. A thunderhead towers above the rising sun, and the picture turns out beautifully. You submit the photo to a contest sponsored by the National Weather Service. Under a statute signed into law by the Wyoming governor this spring, you have just committed a crime and could face up to one year in prison.

Wyoming doesn’t, of course, care about pictures of geysers or photo competitions. But photos are a type of data, and the new law makes it a crime to gather data about the condition of the environment across most of the state if you plan to share that data with the state or federal government. The reason? The state wants to conceal the fact that many of its streams are contaminated by E. coli bacteria, strains of which can cause serious health problems, even death. A small organization called Western Watersheds Project (which I represent pro bono in an unrelated lawsuit) has found the bacteria in a number of streams crossing federal land in concentrations that violate water quality standards under the federal Clean Water Act. Rather than engaging in an honest public debate about the cause or extent of the problem, Wyoming prefers to pretend the problem doesn’t exist. And under the new law, the state threatens anyone who would challenge that belief by producing information to the contrary with a term in jail.


John Timmer writing for Ars Technica
Yesterday, by a party-line vote, Republicans in the House Committee on Science, Space, and Technology approved a budget authorization for NASA that would see continued spending on Orion and the Space Launch System but slash the agency's budget for Earth sciences. This vote follows the committee's decision to cut the NSF's geoscience budget and comes after a prominent attack on NASA's Earth sciences work during a Senate hearing, all of which suggests a concerted campaign against the researchers who, among other things, are telling us that climate change is a reality.


Emily DeMarco writing for Science Insider.
Representative Louie Gohmert (R–TX) is worried that scientists employed by the U.S. government have been running roughshod over the rights of Americans in pursuit of their personal political goals. So this week Gohmert, the chair of the oversight and investigations subpanel of the U.S. House of Representatives’ Natural Resources Committee, held a hearing to explore “the consequences of politically driven science.” Notably absent, however, were any scientists, including those alleged to have gone astray.

“The purpose of this hearing is to hear from real people, mammals called human beings that have been harmed by the federal government,” Gohmert said in opening the 29 April hearing, which featured testimony from three Republican-called witnesses on alleged misdeeds by researchers with the U.S. Fish and Wildlife Service (FWS) and National Park Service (NPS).
I'm trying to cut back on mean and snarky comments so...

Monday, May 25, 2015

Really cheap chickenless chicken and black-bean soup

I've got another post at the food blog on how you can actually live on a food stamp budget.


The New York Times has a class problem -- "Poor Little Rich Women"

Actually, it's not just a NYT problem -- it extends throughout the media -- but it has gotten particularly embarrassing with what is supposed to be our best paper. The NYT has always identified with the upper classes ("It does not soil the breakfast cloth") but recently it has come to focus on and empathize with the one percent to such a degree that the identification is distorting its journalistic judgement.

We've seen irony-free articles about how hard it is getting by on 300K and op-eds on how unfair it is when wealthy students at top prep schools have to settle for safety schools because their SATs are too low.

And now the New York elite have their own personal anthropologist in Wednesday Martin:
A social researcher works where she lands and resists the notion that any group is inherently more or less worthy of study than another. I stuck to the facts. The women I met, mainly at playgrounds, play groups and the nursery schools where I took my sons, were mostly 30-somethings with advanced degrees from prestigious universities and business schools. They were married to rich, powerful men, many of whom ran hedge or private equity funds; they often had three or four children under the age of 10; they lived west of Lexington Avenue, north of 63rd Street and south of 94th Street; and they did not work outside the home.

Instead they toiled in what the sociologist Sharon Hays calls “intensive mothering,” exhaustively enriching their children’s lives by virtually every measure, then advocating for them anxiously and sometimes ruthlessly in the linked high-stakes games of social jockeying and school admissions.

Their self-care was no less zealous or competitive. No ponytails or mom jeans here: they exercised themselves to a razor’s edge, wore expensive and exquisite outfits to school drop-off and looked a decade younger than they were. Many ran their homes (plural) like C.E.O.s.

It didn’t take long for me to realize that my background in anthropology might help me figure it all out, and that this elite tribe and its practices made for a fascinating story.
Martin's observations are "fascinating" only if you start with a strong proclivity to find the lives of the New York elite interesting and important. For the rest of us, there is little here that is notable or even all that surprising.

For example, you would expect a great deal of time and money to be spent on maintaining personal appearance given the wealth and leisure, the see-and-be-seen culture, and, to be blunt, the fact that some of these women are married to rich older men who demanded highly restrictive prenups and who have shown no compunction about dumping spouses.

Nor does the explicit and rather cold-blooded talk of wife bonuses come as a shock to anyone. Given the culture of the industries that produce the elite of New York, it would be rather strange if the language of these industries did not make it into day-to-day life.

Martin clearly wants to imbue her subjects with an importance and dignity not shared by the millions of other stay-at-home spouses around the country. This, along with other embarrassing traits of New York Times journalism, is beautifully illustrated when she talks about her subjects as "CEOs" of households.

Anyone who follows financial reporting will have encountered the big swinging check syndrome: a questionable or downright bad business decision will be lauded as bold and brilliant because it involves a stunning amount of money. In this article, the fact that these spouses are managing households with large budgets is seen, in and of itself, as an accomplishment. (The real accomplishment is doing the opposite.) Adding to the embarrassment, all of this comes on the heels of a slew of stories about well-to-do New Yorkers who are comically bad at household finance.

As mentioned before, this is an ongoing concern. It's not just that the press tends to see the world through an increasingly homogeneous lens -- top quartile, Ivy League, Northeastern, largely white -- but that the press does not realize how different the world looks to the rest of us.

Friday, May 22, 2015

I haven't made up my mind whether Mike is onto something with the "gentry class" designation

But it's definitely an idea worth discussing.

To return to [David] Leonhardt, this is the key point:
There is a good debate about what to call families who don’t have enormous wealth but who also make much more than most Americans. There is also a good debate about whether any changes to tax policy — the background to Josh’s article — should involve sacrifices from that group.
In the course of the debate, though, let’s at least remain cleareyed about the fact that six figures of income is not the 21st-century American norm.
[I'm not much of a Leonhardt fan, but he makes some good points in the original piece. MP]
I suggest we call them the gentry. They’re not middle-class (whether it be the upper or lower reaches), since they can live very differently from (or, perhaps, better than) most of us. They can have most of the nice things. At the same time, they’re not wealthy or flat-out rich: if they don’t work, they can fall down the ladder, sometimes very quickly. Living comfortably or well with the interest on investments isn’t an option.
The reason I refer to this group as the gentry is, in part, it’s the group that’s responsible for gentrification in urban areas (no gentry, no gentrification), so it seems to fit. The other reason is to intentionally invoke the Victorian notion of the word. The gentry, whether it be a more religious, conservative style, or a more liberal, less traditional style, definitely has specific class interests (boldface mine):
Yet the upper middle class collectively wields far more influence. These are households with enough money to make modest political contributions, enough time to email their elected officials and to sign petitions, and enough influence to sway their neighbors. Upper-middle-class Americans vote at substantially higher rates than those less well-off, and though their turnout levels aren’t quite as high as those even richer than they are, there are far more upper-middle-class people than there are rich people….
Another thing that separates the upper middle class from the truly wealthy is that even though they’re comfortable, they’re less able to take the threat of tax increases or benefit cuts in stride. Take away the mortgage interest deduction from a Koch brother and he’ll barely notice. Take it away from a two-earner couple living in an expensive suburb and you’ll have a fight on your hands. So the upper middle class often uses its political muscle to foil the fondest wishes of egalitarian liberals.
…part of my objection is that upper-middle-income voters only oppose tax hikes on themselves. They are generally fine with raising taxes on people richer than themselves, including taxes on the investments that rich people make in new products, services, and businesses. I find that both annoyingly self-serving and destructive. The bigger reason, however, is that upper-middle-class people don’t just use their political muscle to keep their taxes low. They also use it to make life more expensive for everyone else.
…You’d almost get the impression that while working- and lower-middle-class people are expected to compete, whether with the Ubers of the world or with Chinese manufacturing workers or with immigrants with modest skills, members of the upper middle class ought to be immune.
Leaving aside Salam’s conservative take on things (very different from the Mad Biologist!), he’s on to something, though I would argue what Salam calls upper-middle class is actually a gentry, with very clear economic interests. While there can be cultural and regional disagreements, en masse, they are quite coherent.
While the U.S. pretends to be a class-free society (even as U.S.-ians spend an inordinate amount of time making very subtle class distinctions), the reality is we do have classes with specific economic interests.
It’s time we realize we have a gentry.

Thursday, May 21, 2015

Fake Data in Political Science

This is Joseph.

One of the most challenging things in population research is the need to trust the data and analysis of other groups. Unlike chemistry, we cannot simply replicate experiments without huge expense. Furthermore, the population is getting less and less responsive to surveys. In a very real sense, time spent replicating even strong and clean results will partially displace other research questions. After all, people have a limited tolerance for surveys. This goes double for high-burden approaches, such as door-to-door interviews and interventions (which require trained and paid field agents to conduct the survey with a high degree of professionalism, often for limited wages).

This need to trust, where possible, makes stories like this one painful. Full respect to the field that the problem was detected, and I am glad that academia was self-correcting. But these actions have pretty strong consequences.

I also think there is a very important difference between a technical error, misunderstanding of data, and completely making data up.  The first two are the cases that give every analyst nightmares.  But the last seems to have no excuses at all -- how could somebody not know that they were faking data? 

That said, it's not like medicine is innocent (as Thomas Lumley points out) and medical research probably has a lot more direct potential to cause harm (as patient concerns about this "treatment is not working" will be dismissed in the face of "randomized controlled trial" "evidence"). 

EDIT: And how could I overlook Andrew Gelman's take on this (which is right in his area)?

Blast from the past -- The curse of large numbers and the real problem with p-values

Following up on Joseph's recent piece.

[Originally posted MARCH 22, 2010]

(Some final thoughts on statistical significance)

The real problem with p-values isn't just that people want them to do something they can't do; they want them to do something that no single number can ever do: fully describe the quality and reliability of an experiment or study. This simply isn't one of those mathematical beasts that can be reduced to a scalar. If you try, then sooner or later you will inevitably run into a situation where you get the same metric for two tests of widely different quality.

Which leads me to the curse of large numbers. Those of you who are familiar with statistics (i.e. pretty much everybody who reads this blog) might want to skip the next paragraph because this goes all the way back to stat 101.

Let's take the simplest case we can. You want to show that the mean of some group is positive, so you take a random sample and calculate the probability of getting the results you saw or something more extreme (the probability of getting exactly the results you saw is pretty much zero), working under the assumption that the mean of the group was actually zero. This works because the bigger the samples you take, the more the means of those samples will tend to follow a nice smooth bell curve, and the closer those means will tend to group around the mean of the group you're sampling from.

(For any teachers out there, a good way of introducing the central limit theorem is to have students simulate coin flips with Excel then make histograms based on various sample sizes.)
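For those who'd rather skip Excel, the same classroom exercise takes a few lines of Python (a quick sketch; the sample sizes and repetition counts are arbitrary):

```python
import random
import statistics

random.seed(42)

def sample_means(n_flips, n_samples=2000):
    """Simulate n_samples experiments of n_flips fair-coin flips each,
    returning the proportion of heads in every experiment."""
    return [sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips
            for _ in range(n_samples)]

# The distribution of these means tightens around 0.5 roughly like 1/sqrt(n),
# and its histogram looks more and more like a bell curve.
for n in (5, 30, 200):
    means = sample_means(n)
    print(n, round(statistics.mean(means), 3), round(statistics.stdev(means), 3))
```

Feeding each list of means into a histogram makes the central limit theorem visible in about five minutes.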

You might think of sampling error as the average difference between the mean of the group you're interested in and the mean of the samples you take from it (that's not exactly what it means, but it's close). The bigger the sample, the smaller you expect that error to be, which makes sense. If you picked three people at random, you might get three tall people or three millionaires, but if you pick twenty people at random, the chances of getting twenty tall people or twenty millionaires are next to nothing.

The trouble is that sampling error is only one of the things a statistician has to worry about. The sampled population might not reflect the population you want to draw inferences about. Your sample might not be random. Data may not be accurately entered. There may be problems with aliasing and confounding. Independence assumptions may be violated. With respect to sample size, the biases associated with these problems are all fixed quantities. A big sample does absolutely nothing to address them.

There's an old joke about a statistician who wakes up to find his room on fire, says to himself "I need more observations" and goes back to sleep. We do spend a lot of our time pushing for more data (and, some would say, whining about not having enough), but we do that not because small sample sizes are the root of all of our problems but because they are the easiest problem to fix.

Of course "fix" as used here is an asymptotic concept and the asymptote is not zero. Even an infinite sample wouldn't result in a perfect study; you would still be left with all of the flaws and biases that are an inevitable part of all research no matter how well thought out and executed it may be.

This is a particular concern for the corporate statistician who often encounters the combination of large samples and low quality data. It's not unusual to see analyses done on tens or even hundreds of thousands of sales or customer records and more often than not, when the results are presented someone will point to the nano-scale p-value as an indication of the quality and reliability of the findings.

As far as I know, no one reviewing for a serious journal would think that p < 0.001 means we're 99.9% sure that a conclusion is true, but that's what almost everyone without an analytic background thinks it means.
And that is a problem.

Wednesday, May 20, 2015

Very good statistics post (p-value edition)

This is Joseph (and I located the piece through Economist's View)

This article on p-values was a very interesting read.  The author (who teaches statistics) has a very nice discussion of p-values:
One reason for this, I think, is that we fail to teach well how, with enough data, any non-zero parameter or difference becomes statistically significant at arbitrarily small levels. The proverbial expression of this, due I believe to Andy Gelman, is that "the p-value is a measure of sample size". More exactly, a p-value generally runs together the size of the parameter, how well we can estimate the parameter, and the sample size. The p-value reflects how much information the data has about the parameter, and we can think of "information" as the product of sample size and precision (in the sense of inverse variance) of estimation, say n/σ². In some cases, this heuristic is actually exactly right, and what I just called "information" really is the Fisher information.
But I found this way of talking about p-values to be extremely useful, and something that should be kept in mind in epidemiology -- where a significant association estimated from a big sample with a small effect can often be uninteresting. You never reduce bias to zero in a real observational study, and interventions rarely remove an association entirely (as not everyone changes behavior, or mitigation is partial). In the era of big data, this becomes important.
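The "p-value as a measure of sample size" point is easy to demonstrate. This sketch (illustrative numbers only) runs a z-test against a trivially small but non-zero true effect and watches the p-value collapse as n grows:

```python
import math
import random

random.seed(1)

def p_value(n, effect=0.03):
    """Two-sided z-test of H0: mean = 0, on n draws from N(effect, 1)."""
    xs = [random.gauss(effect, 1) for _ in range(n)]
    z = (sum(xs) / n) * math.sqrt(n)        # sample mean over its standard error
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability

# Same tiny effect throughout; only the sample size changes.
for n in (100, 10_000, 1_000_000):
    print(n, p_value(n))
```

Nothing about the effect got more interesting between the first line and the last; only the information, n/σ², grew.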

Fun stuff.  

Double-talk and Mathiness

If you haven't been following the reaction to Paul Romer's Mathiness paper, you should check it out. Here's a key quote:

Academic politics, like any other type of politics, is better served by words that are evocative and ambiguous, but if an argument is transparently political, economists interested in science will simply ignore it. The style that I am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.
Brad DeLong has had some sharp comments, but I think the best reply to Romer's critics came from Romer himself.

If you think that what McGrattan and Prescott do for location is even remotely on the same level as what Solow did for capital or what Becker did for human capital, please go read the two M-P papers (JET 2009, AER 2010.)
When you think you are too stupid to understand what they are saying and want to give up, trust me, it isn’t you. What they are saying makes no sense. No one can understand it. The authors do not understand it.
Here is a sample of what you can expect:
Technology capital is distinguished from other types of capital in that a firm can use it simultaneously in multiple domestic and foreign locations.  (Footnote: In the language of classical general equilibrium theory, a unit of technology capital is a set of technologies, with one technology for each location.) (JET 2009, p. 2455)
“A unit … is a set”? This is just gibberish. Forget about whether the model connects in any meaningful way to the real world. There is no way to make sense of this statement even in the make-believe world of the model. In the model, the authors define technology capital as a cardinal measure. It is supposed to be something that you can have 2 units of, or 4, or 10. What could 2 or 4 or 10 sets of technologies possibly mean?
We assume that the measure of a country’s production locations is proportional to its population, since locations correspond to markets and some measure of people defines a market. (JET 2009 p. 2461)
I feel guilty pulling a quote like this one, as if I’m humiliating some miserable undergraduate by reading to the class from a term paper on a project that fell apart. But remember, this is from an article that was published in the Journal of Economic Theory.
As you read this quote, remember that the motivation for the theory is that for these authors, perfect competition is the ultimate non-negotiable, more sacred even than micro-foundations. If this were a Hotelling model of location or a Krugman model of spatial location, I’d have some way to try to make sense about how “some measure of people defines a market.” But in the formal mathematical model of perfect competition that the authors are using, this sentence means nothing.
These words are untethered, undisciplined by logic or math, chosen to sound plausible enough to someone who is not paying close attention, like the set up for an applause line in a speech by a politician. This is mathiness.
There is lots more:
One unit of technology capital and z units of the composite input at a given location produce y = g(z). Consider the case of brand equity with units of technology capital indexed by m. For ease of exposition, assume for now that m is discrete and that m = 1 is the Wal-Mart brand, m = 2 is the Home Depot brand, and so on. Wal-Mart chooses the locations in which to set up stores and use its brand. It may be the case that both Wal-Mart and Home Depot have stores at the same location. (AER 2010, p. 1497.)
And if you look at the math, a company like Wal-Mart has to use one unit of technology capital for each location. Because the number of locations in the US is the US population, Wal-Mart must be using more than 300 million units of technology capital. (So more than 300 million technology sets?)
How can we reconcile the math with words that say Wal-Mart gets index m=1 and Home Depot gets m=2? And if technology capital is brand equity, why does Wal-Mart need another unit of brand equity for each US citizen/location? I haven’t a clue, but neither do the authors. One of the things that Milton Friedman got right was his observation that “confused writing is a sign of confused thinking.”
As a discussant, I put serious effort into trying to clean up the mess in the working paper that became the 2009 JET paper. I worked through the math. I talked with the authors.
The things I explained, such as how to convert any concave function like g(z) into a function with one additional variable that is homogeneous of degree one, just helped them put lipstick on this pig.
It was an embarrassment for me that the 2007 NBER version contained the acknowledgement “Discussions with Robert Lucas and Paul Romer were extremely helpful…”
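As an aside, the homogenization trick Romer says he explained to the authors is a standard one (this is my gloss, not his notation): any concave function of one variable can be embedded in a function of two variables that is homogeneous of degree one.

```latex
\text{Given concave } g, \text{ define } G(z, k) = k\, g(z/k) \text{ for } k > 0.
\text{Then } G(\lambda z, \lambda k) = \lambda k\, g(z/k) = \lambda\, G(z, k),
\text{ so } G \text{ is homogeneous of degree one, and } G(z, 1) = g(z) \text{ recovers } g.
```

The extra variable k acts as a pure scale factor, which is what makes the construction compatible with the constant-returns bookkeeping of a perfect-competition model.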
One thing that has been bothering me for years now is not just that economists often combine overly simplistic modeling assumptions with overly complicated math (we all do that from time to time), but that many seem to equate these things with thinking scientifically.

On the mathematical point, go back and look at the "is economics a science?" debates and see how long it takes someone on the pro side to bring up how mathematically sophisticated their work is. You will frequently hear the same argument from astrologers and other fringe-science types, and it has no more validity here than it does there.

This is not to say that economics is a fringe or even a soft science. While I'm not going to get into the relative hardness of different disciplines (always a waste of time), I will say that econ is a fascinating field full of smart people doing sound and important work.

My concerns are more cultural and meta. I'll try to come back and fill in some of the details later (I've got a lot on my plate right now, if you'll pardon the pun), but to paint it in broad strokes: these attitudes not only make economics look bad, they lead to all sorts of bad things, like epicyclic modeling and, in the final stages, mathiness.

Which brings us to the double-talk of Sid Caesar, the famous bits where he would seem to be speaking in various languages. The words were complete gibberish, but by mimicking the inflection and rhythm (what he called the 'song') of each language, he could create the impression of going from French to German to Russian and so on.

I realize Romer is making some bigger points here, but in at least one aspect, this reminds me of those Caesar routines. Instead of musical qualities, mathiness uses linguistic properties like vocabulary and syntax to create the impression of scientific reasoning. It is only when you pay attention that you realize nothing is being said.