
Government Don't Know Jack: Research

The government can't pick what's worth researching.

By Will Offensicht  |  January 29, 2008

This is a multi-part series examining inherent conflicts in government with the ultimate goal of avoiding the Confucian Cycle.

This series of articles addresses the intuitive belief that there ought to be something government could do to make the economy perform better.  Having government boost the economy would be ideal in many ways.  Taxation gives the government a bit of every dollar that moves in the economy.  For this reason, government gets a better return on its investments than any private business can get.

When the government built the Interstate Highway System, for example, economic activity increased enormously.  The government didn't have to own the roads to make money, however, because it taxed all the increased activity.

It's wrong to say that government can never do anything that benefits the economy.  As explained in "Cynicism and the Confucian Cycle", we believe that our bureaucracy has expanded to the point that government finds it extremely difficult if not impossible to actually get anything worthwhile done.  As a result, although government should be able to benefit the economy, in actual practice government "investments" cost us more than they produce.

This hasn't always been true.  We'll look at some historical government investments where the public made out OK, but first let's define some terms.

Research and Development

We talk of research and development (R&D) as if it were one word -- "arandee" -- but these terms actually describe very different activities.  Dictionary.com says:

research - Diligent and systematic inquiry or investigation into a subject in order to discover or revise facts, theories, applications, etc.

"Development" has a lot of definitions, but this one relates to research:

development - Determination of the best techniques for applying a new device or process to the production of goods or services.

Albert Einstein wasn't doing research when he developed his theory of relativity.  He noodled around with ideas about how space and time related, and he came up with his theory that E might equal mc².  Research became possible after he had the idea.

A few people became convinced that the idea might have merit.  They realized that certain astronomical observations -- most famously, measuring how the sun's gravity bends starlight during a solar eclipse -- would confirm or refute the theory, and a research program got underway.  The observations were made in 1919, new facts came to light, and the theory was confirmed.

Now we know that E = mc²; that's cool.  Let's build a nuclear power plant.  Whoa!  Turns out, it takes billions of dollars' worth of development to build a functioning nuclear reactor that will put out any energy at all, billions more to develop the bomb, and then more and more billions as utilities try to build and deploy nuclear power plants.
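What makes all those billions worth spending is the sheer scale of the payoff the equation promises.  A back-of-the-envelope calculation, converting a single gram of matter entirely to energy:

    E = mc² = (0.001 kg) × (3 × 10⁸ m/s)² = 9 × 10¹³ joules

That's roughly 21 kilotons of TNT -- about the yield of the Nagasaki bomb -- from a mass smaller than a paperclip.  No chemical fuel comes anywhere close.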

It's truly unfortunate that nuclear power was first developed during wartime when the military could declare it to be a secret.  Once the bureaucrats of the Nuclear Regulatory Commission got hold of the technology, they never let go, and it's become very bureaucratic and very expensive to operate a nuclear plant.

The bureaucratic and regulatory obstacles to operating a nuclear plant are entirely separate from the political and legal obstacles to building it.  Nuclear electricity has suffered a double whammy -- the technology itself is under the regulatory thumb which multiplies costs; and so many people were viscerally opposed to nuclear power generation that costs were multiplied yet again.

Thus the question: did government get its money's worth for its investment in the Manhattan Project which developed the nuclear bomb?  We're not talking about military power -- it's obvious that things might have worked out unpleasantly if the Soviet Union had had the atomic bomb and we didn't -- but how did it work out as an investment in technology which could lead to economic growth?

The example of nuclear research and development is illustrative in two ways:

The vehemently anti-nuke crowd rejoiced that Federal control of nuclear technology made it possible for them to prevent commercial deployment, but is this the sort of obstacle we want to get in the way of other technologies as they come along?

Some would say "Yes!  No technology should be used at all until we know it's safe," but we've dealt with that Luddite stance elsewhere.

The Argument in Favor

It's obvious that research may take a long time to pay off.  It took many years and many billions to go from confirming Einstein's notion to commercially-successful nuclear power systems, and hydrogen fusion has been 20 years out for the last 40 years.  Commercial companies which have to show a profit can seldom make such long-term investments.

Thus, the argument goes, only government can afford to take the long view and make such risky investments.  Government collects income taxes and other fees the whole time the research is going on, so government gets a 30% discount right off the top.
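To illustrate with round numbers of our own choosing: suppose the government funds a $100 million research program.  Nearly all of that money is paid out as salaries and purchases, which are themselves taxed, so something like 30% flows straight back to the Treasury:

    net cost ≈ $100 million × (1 - 0.30) = $70 million

No private investor gets that kind of rebate on money spent.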

After all, the Spanish Government paid for Columbus' research to find out what lay beyond the horizon and for the development of the silver and gold mines he found there.  That was a risky investment, which paid off for several hundred years.  Why shouldn't government be in the business of making such investments?

Government research has paid off in the US as well.  After the Louisiana Purchase, the government paid for Lewis and Clark's research to find out what lay beyond the Mississippi River.  The government later gave the railroads huge tracts of land and set up land-grant colleges whose research made it possible to farm the Great Plains effectively.

This research unquestionably paid off -- taxes on the Black Hills gold alone paid for everything.  As a bonus, farmers were able to settle the plains once the land-grant colleges researched how to grow crops there.  These settlers also paid taxes.

There are Details

There are philosophical and operational issues which have to be dealt with if government research is to be turned into useful products.

Suppose government funds some research that discovers, say, a promising technique for making computer memory or low temperature semiconductors.  Despite the commercial appeal of such technology, the research doesn't get us a salable product.  There remains the issue of developing the technology to the point where it can be manufactured. Once someone figures out how to make the product, however, other firms can copy it as Stalin had his engineers copy America's atomic bomb.

Even with the research out of the way, development can be an expensive process; a new semiconductor fabrication line costs north of a billion dollars.  Thus, businesses won't make the investment in the development needed to commercialize the technology unless they get some form of protection for their investment.  Traditionally, this involved some sort of patent ownership of the basic technology, not just the means of making it.

It's hard to ensure that research is worthwhile.  Starting in 1975, Senator Proxmire sponsored the "Golden Fleece" award for wasting taxpayers' money on unnecessary "research."  His first award went to the National Science Foundation for conducting an $84,000 study about why people fall in love.  In addition to listing all of the Senator's "Golden Fleece" awards, Taxpayers for Common Sense says, "The awards helped save millions of dollars by causing targeted programs to be curtailed, modified or canceled, and by putting government officials on notice to prevent future waste."

Sen. Proxmire also sponsored legislation specifically putting publicly-funded research in the public domain, where it usually languished.  Without patent protection, people were unwilling to invest the money needed to develop commercial products.  Thus, most research sat on the shelf without being applied and there was no return on the government's investment.

The policy has since been changed -- by the Bayh-Dole Act of 1980 -- so that universities can obtain patent rights to the results of their government-funded research; they then license the patents to businesses.  The license fees may bring the university more money than the original research grant.

In effect, the government acts like a venture capitalist who gives the entire upside of the venture to the university.  There is debate about whether this is an appropriate use of tax money, of course, but the universities that benefit lobby powerfully to keep the gravy train going.

Who Decides

There are only two questions in government: "Who says?" and "Who pays?"  Sen. Proxmire believed that government research was inherently wasteful because the bureaucracy was incapable of deciding who should receive research funding.

Given the impossibility of knowing which research projects will pay off, what criteria should be used for deciding whom to fund? Partly due to Sen. Proxmire's objections to the way research was funded, the government uses a system called "peer review."  Wikipedia says:

Peer review (known as refereeing in some academic fields) is a process of subjecting an author's scholarly work, research or ideas to the scrutiny of others who are experts in the same field. It is used primarily by editors to select and to screen submitted manuscripts, and by funding agencies, to decide the awarding of grants. The peer review process has a normative function by encouraging authors to meet the accepted high standards of their discipline and to prevent the dissemination of unwarranted claims, unacceptable interpretations or personal views.

Having "experts in the same field" decide who gets the money sounds good, but the bureaucracy, of course, will not be denied.  The National Institute of Health has thoughtfully provided a multi-page web site documenting the arcane, bureaucratically-enabled peer review process by which NIH grants are disbursed.

The "peer review" system has been institutionalized to the point that any new developments which have not appeared in a "peer-reviewed journal" are often treated with skepticism, as if being "peer reviewed" is a guarantee of accuracy, veracity and significance as opposed to political pull.

Peers Have No Clothes

Peer review has been around for so long and has been supported so fervently that criticizing it is tantamount to criticizing the scientific method itself, but peer review has serious flaws.

The "review" part is clearly necessary.  Grant proposals have to be reviewed because we don't have enough money to fund everyone who'd like to do research.  Review is needed, but by whom?  The problem lies in "peer."

Dictionary.com says that "peer" means

  1. a person of the same legal status: a jury of one's peers.
  2. a person who is equal to another in abilities, qualifications, age, background, and social status.

That's the problem with peer review -- what if a genius doesn't have any peers?

The Wikipedia definition of "peer review" refers to "experts in the same field."  Back when President Roosevelt was deciding whether to fund the Manhattan Project to develop the atomic bomb, a military adviser said, "There's no way a bomb that small can destroy a city.  I've made an entire career of understanding high explosives."

The expert summed up the problems with peer review in a single sentence -- he'd invested his career in the older technology.  Based on everything he knew about conventional explosives, the atomic bomb was ridiculous.  As we've noted before, experts are as hidebound and career-conscious as everyone else.

To be fair to the explosives expert, there was no way the atomic bomb could work based on everything he knew about chemistry -- but may we be forgiven for suspecting just a teensy bit of reluctance to see all of his knowledge, all of his expertise, rendered irrelevant?

Peer review by "experts in the same field" guarantees that nothing really new or unconventional will ever be funded.  The experts will know that anything that radical can't possibly work.  They won't want to be blamed for funding something that fails so they'll vote "No."

Einstein didn't have any peers; he'd never have passed "peer review."  Niels Bohr, who won the Nobel Prize for Physics in 1922, famously told a colleague presenting a radical new theory, "We are all agreed that your theory is crazy. The question which divides us is whether it is crazy enough to have a chance of being correct.  My own feeling is that it is not crazy enough to be true."  And when Einstein balked at quantum theory, Bohr shot back, "Einstein, stop telling God what to do."

Dr. Bohr stood at the peak of the physics establishment of his day and could be considered a genius in his own right, yet he and Einstein spent decades insisting that each other's ideas couldn't be right.  If that's the best the smartest people can do when confronted with something really new, how can government decide where to invest research money?

The only way to tell the difference between a genius and a crackpot is that the genius turns out to be right; nobody can tell up front.  Genius is evident only after the fact.  Expensive research was needed to confirm Einstein's theory of relativity.  He persuaded people to collect the data he needed; the data showed he was right.  Thus, Einstein was a genius.  If he'd turned out to be wrong, of course, he'd have been just another crackpot.

Crackpots are thick on the ground and we can't fund them all.  How do we decide which upcoming genius/crackpot to fund?

Academic Politics and Tenure

A heavily bureaucratic peer review system which would have rejected Einstein and nuclear power gets even less effective when combined with academic politics and tenure.

Bigfoot

Jeffrey Meldrum is a tenured assistant professor of anatomy and biology at Idaho State University who has published a book, Sasquatch: Legend Meets Science, about Bigfoot, a legendary gorilla-sized primate reputed to be wandering around the American West.

Scientific American reports that Prof. Meldrum argues that Bigfoot might exist "despite ostracism from his fellow anthropologists and university colleagues."

Their article goes on:

Neither side can win its case without a Sasquatch specimen or fossil or without the true confessions of a fleet of perhaps fleet-footed hoaxers. In the meantime, observers watch a debate that is striking in that both sides use virtually the same language, refute each other's interpretations with the same tone of disbelief and insist they have the identical goal: honoring the scientific method. And the question of how science on the fringe should be dealt with remains open: some observers say that Meldrum, who has been lambasted by colleagues and passed over for promotion twice, should just be left alone to do his thing; others counter that in this era of creationism, global warming denial, and widespread anti-science sentiment and scientific illiteracy, it is particularly imperative that bad science be soundly scrutinized and exposed.

Prof. Meldrum has tenure, has published a book about his research, and has private funding; he's not taking any money away from his colleagues.  If "academic freedom" means anything, shouldn't he be permitted to pursue his studies, whether valid or not?

Are the professors who are trying to ostracize Prof. Meldrum suppressing his research because they're embarrassed to be associated with it?  There's no way they can prove that Bigfoot doesn't exist.  Is embarrassment or disagreement any reason to force someone to stop a funded research program?

If faculty members oppose research because it might embarrass them, how can they be trusted to distribute government research funds?

Red Shift

Red shift is a set of observations that some stars seem redder than theory suggests they should be.  The classic explanation is that these stars are moving away from us.  The redder the star appears, the faster it's moving away.
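In rough terms, the relation is simple: the fractional shift in wavelength equals the speed of recession as a fraction of the speed of light, at least for speeds well below light speed:

    z = Δλ / λ ≈ v / c

A star whose light is shifted 1% toward the red is therefore taken to be receding at about 1% of the speed of light, some 3,000 kilometers per second.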

The principle was suggested by Christian Doppler in 1842 and has become entrenched in astronomy.  Thousands of red shifts have been plotted; the fact that so many distant objects seem to be rushing away from the earth so fast is used to confirm the "Big Bang" theory of cosmology.

Prof. Halton Arp has found a high red shift quasar which seems to be physically linked to a low red shift galaxy.  If these objects were in fact as close together as they seem, yet have such different red shifts, then red shift doesn't mean what everybody thinks it means.  As Arp put it,

If the cause of these redshifts is misunderstood, then distances can be wrong by factors of 10 to 100, and luminosities and masses will be wrong by factors up to 10,000. We would have a totally erroneous picture of extragalactic space, and be faced with one of the most embarrassing boondoggles of our intellectual history. [emphasis added]
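That factor of 10,000 isn't rhetoric; it follows from the inverse-square law.  An object's luminosity is inferred from its apparent brightness and its assumed distance:

    L = 4π d² × b, so a distance d wrong by a factor of 100 makes L wrong by 100² = 10,000

Every quantity derived from a redshift-based distance inherits the error, squared.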

Prof. Arp believes that the universe might be 100 times smaller than everybody thinks it is.  I have a friend who talks privately with astronomers.  They more or less agree that Prof. Arp's ideas are interesting, but since the establishment cut off his telescope time so that he couldn't continue his research, nobody wants to talk about his ideas.

That's peer review at its best -- cutting off an interesting line of research because the results might show that they've dedicated their careers to an embarrassing boondoggle.  How can we trust peer review to allocate research funds?

And what about the Australian doctors, Barry Marshall and Robin Warren, who suggested that antibiotics might cure ulcers?  At the time, ulcers were treated by diet and long-term use of various drugs.  It took years for the establishment to accept the new idea, even though veterinarians were already treating animal ulcers with antibiotics -- and the two doctors eventually shared the 2005 Nobel Prize in Medicine.

And what about Dr. Judah Folkman, who thought that cancers couldn't grow unless they were able to force the body to grow blood vessels to feed them?  He was ridiculed for years but was finally proved right.

One Successful Model

Peer review and scientific grant committees guarantee that nothing out of the ordinary will be researched, but there is one model we've used that works very well -- start a difficult, high-pressure project.

When President Kennedy said, "Man on the moon by the end of the decade," nobody knew how to do it.  It was easy to decide what to research.  Whatever wouldn't help get to the moon didn't get any money.  Everything else was funded.

We not only got to the moon; many technical developments came out of the Apollo project.  We learned a lot about integrated circuits -- the Apollo computer had 36K words of memory and we needed more.  We learned about medical instrumentation and sensors; we learned about materials and how the human body responds to weightlessness.  We could go on and on with the list, but the point is that none of these developments was funded for its own sake; each was needed to help get to the moon.

Thus, if government is serious about getting real research done, the solution is simple -- redirect the research budget and tell NASA to set up a moon base.  They'll have to do a lot of research, but it will be goal-directed, so good things will emerge.

Another Successful Model

The government can also make research happen by offering prizes.  DARPA offered a two-million-dollar prize for the fastest robotic vehicle to complete a course across the desert.  This inspired much research and a TV documentary, and led to significant advances in autonomous vehicle technology.  DARPA's later prize for having robots drive in traffic also worked out very well.  It seems that setting a broad goal and offering money and prestige brings out the best in competitive Americans.

Private individuals have funded prizes as well: the X Prize Foundation offers rewards for privately-built craft that reach space, for robotic landings on the moon, for reaching certain goals in genomics, and so on.  This, too, seems to work.

Current research funding, like most government activity, is driven by politics, jealousies, and short-sightedness.  Government could get some actual value for its research dollars by setting goals or offering prizes, but as Sen. Proxmire noted, most of the money we spend on research goes right down the old rat hole.  The Golden Fleeces have multiplied.