This sounds familiar:
Ecology is one of the hardest branches of biology, possibly of all science. Real ecological communities are fantastically complex … and hard to dissect and understand. Experiments in the wild are difficult to control, and important variables are often hard to measure. … Experiments in the laboratory are problematic too. …
Much of the uncertainty in economics derives from our inability to do laboratory experiments, and that includes uncertainty about which model best describes the macroeconomy.
When the present crisis is finally over, those who advocated fiscal policy, those who advocated monetary policy, and those who advocated no policy at all will all say “I told you so” based upon their reading of the evidence.
Some New Keynesians will cite fiscal policy as the important policy response, and the timing of the policy relative to the recovery will likely support that argument. Other New Keynesians along with Monetarists (e.g. Lucas and others who believe monetary policy can help, but fiscal policy is ineffective) will insist it was monetary policy that saved us. The timing of the monetary policy response will support their position as well.
Still others, those such as Prescott who believe in Real Business Cycle models, will say the economy recovered despite policy, and would have recovered that much faster if government hadn’t gotten in the way. Without a baseline showing what would have happened in the absence of policy, this argument is hard to refute.
Once this is all over, there will be ways to tease this out of the data, e.g. the pattern of the response of key macroeconomic variables may be most consistent with one of the policies, but there will still be considerable uncertainty due to the high correlation in the timing of the monetary and fiscal policy responses. (Cross-country studies could help too, since the policy response varied by country, but other differences across countries that are difficult to control for make these estimates uncertain as well.)
Ideally, we would go to the lab and run the economy with the same initial conditions, say, 1,000 times with no policy intervention at all to establish the average non-intervention response (and its variance), i.e. the baseline, an important missing piece of information when all you have is non-experimental data. Then, we would run the economy again with a monetary policy response to the crisis 1,000 times (or do several experiments with different monetary policy responses to see which is best), and yet again 1,000 more times with fiscal policy (or, as with monetary policy, perhaps several fiscal policies involving different levels of spending and taxes), then compare the results to see how well each policy attenuates the cycle. (I would also want to run the economy with several combinations of the two policies in case there are important interaction effects the experiments with individual treatments might miss.)
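A toy simulation makes the logic of this thought experiment concrete. Everything below is hypothetical: a simple AR(1) output gap hit by a crisis shock, with a made-up `policy_effect` parameter standing in for the strength of the intervention. None of the numbers correspond to any real economy or any actual macro model; the sketch only shows what the lab procedure would look like if we had one.

```python
import random
import statistics

def simulate_economy(policy_effect=0.0, periods=12, shock=-5.0, seed=None):
    """Toy output-gap model: a crisis shock hits at t=0, the gap decays
    with persistence 0.9, and a hypothetical policy closes part of the
    remaining gap each period. All parameters are illustrative."""
    rng = random.Random(seed)
    gap = shock
    path = []
    for _ in range(periods):
        gap = (0.9 - policy_effect) * gap + rng.gauss(0, 0.3)
        path.append(gap)
    return path

def run_experiment(policy_effect, runs=1000):
    """Re-run the economy from the same initial conditions many times to
    get the average end-of-horizon gap and its variance -- the baseline
    we can never observe with real, non-experimental data."""
    finals = [simulate_economy(policy_effect, seed=i)[-1] for i in range(runs)]
    return statistics.mean(finals), statistics.stdev(finals)

no_policy = run_experiment(0.0)   # the missing non-intervention baseline
monetary  = run_experiment(0.2)   # hypothetical monetary response
fiscal    = run_experiment(0.3)   # hypothetical fiscal response
```

In the lab we could simply compare `no_policy`, `monetary`, and `fiscal` (and add runs combining the two to catch interaction effects). The point of the post is that we never get to call `run_experiment` on the actual economy.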
That would probably give us a pretty good idea about which policy works best. However, without the ability to do experiments, the best we can do is to build a model of the economy based upon historical data, and then use the model to simulate the experiments above. That is, estimate the model based upon actual data, then run it with various combinations of monetary and fiscal policy and see how the outcome varies with differences in policy. Unfortunately, the answers you get are only as good as the model used to get them, and considerable uncertainty remains over which macroeconomic model is best (which is why we have Real Business Cycle, New Keynesian, and Monetarist type macroeconomic models along with all their various sub-forms, though more recently questions have arisen over whether any of the existing theoretical structures are satisfactory).
Here’s another way to think about it. Macroeconomists know all of the major historical episodes and correlations that a model must explain. We can’t do experiments, so there is just one set of data, and of course any model that is built will be able to explain how these data evolve over time. And it’s possible to build different models that explain the data equally well. If we could do experiments, we could test these models in ways that would potentially rule some of them out, but with just one set of data and models built specifically to explain the data such testing is not possible.
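The identification problem can be seen in a stripped-down example. Suppose all we observe is one historical path of output and the policy stance, and the "model" is just output equals a policy multiplier times the stance plus unobserved shocks. The numbers below are invented and the one-equation model is a caricature; the point is only that every choice of multiplier reproduces the single observed history perfectly while implying a different counterfactual.

```python
import math

# One observed history (hypothetical numbers): output and the policy stance.
observed_output = [-4.0, -3.0, -1.5, -0.5, 0.2]
policy_stance   = [ 0.0,  2.0,  2.5,  2.0, 1.0]

def implied_shocks(multiplier):
    """Back out the shock series that makes the model fit the data exactly."""
    return [y - multiplier * x for y, x in zip(observed_output, policy_stance)]

def fitted_output(multiplier):
    """In-sample prediction: multiplier * policy + implied shocks."""
    shocks = implied_shocks(multiplier)
    return [multiplier * x + s for x, s in zip(policy_stance, shocks)]

def counterfactual(multiplier):
    """What the model says would have happened with no policy at all."""
    return implied_shocks(multiplier)

# A "Keynesian" multiplier of 1.5 and an "RBC-style" multiplier of 0.0
# both explain the one observed history perfectly...
for m in (1.5, 0.0):
    assert all(math.isclose(f, y)
               for f, y in zip(fitted_output(m), observed_output))
# ...but they disagree completely about the no-policy baseline.
```

With multiplier 0.0 the counterfactual is the history itself (policy did nothing); with 1.5 the model says the recession would have been far deeper without intervention. One dataset, two models, identical fit, opposite conclusions: exactly the missing-baseline problem described above.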
So we have to wait for time to bring us more data and then see if the models can explain them, test the models across countries, find things we didn’t know about when we built the models and test against those — and there are other ways to get at this — but for the most part it’s time that settles these issues. The models either do or do not continue to explain new data as they arrive.
But at any point in time, it will be difficult to distinguish between different models because those models are built to explain everything that is known about the historical macro data. Perhaps some time in the distant future when we have much more data than we have now, it will become more difficult to construct competing models and we will begin to converge on a common theoretical structure — it seemed like we were headed in that direction prior to the recent crisis — but for now we are stuck arguing about which model is best without the means to turn to the data and clearly distinguish one from the other.