Economic forecasting and monetary policy

Speech by Lorenzo Bini Smaghi, Member of the Executive Board of the European Central Bank
Keynote speech at the DIW 1925-2005 conference
Berlin, 8 December 2005

Ladies and Gentlemen,

I am delighted to speak at this conference today. The DIW is the largest and oldest economics institute in Germany and conducts research in many fields of economic analysis.

Speaking in a research institute, I feel stimulated to address a few analytical issues, raising questions rather than providing answers, and without necessarily limiting the reasoning to predefined frameworks.

The conduct of monetary policy faces challenges which are quite similar to those of economic forecasting, with some additional complications.

As Niels Bohr, a Nobel laureate in Physics, used to say: “making predictions is very difficult, especially about the future.” I would add: “not only about the future”.

Forecasting the direction in which the economy is heading is essential for monetary policy, but it is not sufficient: knowledge of the starting position, and possibly even of the past trajectory, is also required. This is less simple than one might think, because some key variables are not observable.

In my remarks today, I will try to explain why forecasts are so important for central banks. I will also make a digression on some (unanswered) questions in monetary policy making. I will then examine how forecast errors may affect monetary policy decision-making and why the assessment of risks is at least as important as the central scenario of the forecast.

A few basic concepts in monetary policy analysis…

I would like to start by recalling a few basic concepts that I will use in my reasoning. Although these concepts are well known in the literature, I have the impression that they might have been forgotten in more recent discussions about monetary policy.

The first concept, quite well known for over 40 years now[1], is that monetary policy affects inflation and output with long and variable lags.

This is well known to central banks. A few years ago, the ECB and the national central banks set up a group of experts to study these lags extensively for the euro area. A conclusion shared by a large number of studies, not only on European but also on US data, is that monetary policy typically affects output (as measured by GDP) with a peak lag of about one year, and inflation with a lag of two years or more.[2] These are average lags, measured over a sample period. The actual lags may be longer or shorter, depending on the particular combination of shocks affecting the economy at any point in time. In any case, what the analysis suggests is that a change in the monetary stance today has an impact on output roughly one year from now and on inflation one to two years from now.

The implication for monetary policy is quite straightforward: policy should be based not only on current output and inflation developments, but (mainly) on expected future developments, one to two years ahead. [3]

If one shares this generally recognised principle, one should also agree that a fair assessment of a given monetary policy stance cannot be made by referring to the current state of the economy, but only on the basis of the expected state of the economy one or two years ahead. The rationale for a given stance or for a given monetary policy decision can be understood or criticised only on the basis of an economic forecast.

It is interesting to note that the monetary policy stance is most often discussed in public on the basis of current underlying conditions, in particular whether a given decision will “hurt” or “not hurt” the current recovery, ignoring the fact that monetary policy has little, if any, impact on the economy over the next several months. It is also interesting to note that some of the doubts expressed about recent monetary policy decisions were made against the background of much more optimistic forecasts for the euro area economy over the next 12 to 24 months. Indeed, the most recent growth forecasts of the IMF, the OECD and the European Commission over the 2006-2007 horizon all come out as somewhat more optimistic than the ECB staff forecasts.

The second concept is that any forecast of output and inflation is closely related to the assessment of the current economic conditions, in particular the cyclical position of the economy, which is called in the literature the “output gap”. An incorrect assessment of the underlying position of the economy in the cycle is likely to lead to a forecast error over the relevant forecasting horizon. The converse is also true: forecasts shape our view of current economic conditions.

The reason is simple. Most, if not all, models we use to conceptualise macroeconomic behaviour make reference to long-term equilibrium conditions. These models require, de facto, an assessment of long-term equilibrium conditions in order to make consistent forecasts. These assumptions thus “constrain” the dynamics of the major variables and, for reasons of consistency, the starting position. Any error in the assessment of the long-term equilibrium will be mirrored by a forecast error on growth and inflation.

The good news is that the forecast error might be smaller, the shorter the horizon. The bad news is that what is relevant for monetary policy is not so much the short term but the one- to two-year-ahead forecast horizon.

Just to give you a very concrete example of how the assessment of the level and of the rate of change are related, let me mention the forecasts that major institutions made in 2000 about the continuation of economic growth in the euro area at a pace close to 3 per cent. On the basis of these forecasts, output in 2000 was estimated to be close to potential. As projections were drastically revised downwards in 2001 and in the following years, the estimate of the cyclical position in 2000 was also revised, and output turned out to have been quite above potential. What in 2000 looked like a situation of output close to potential, consistent with a fast rate of growth of the economy, turned out ex post to be a very large positive output gap. There is a nice box in the ECB’s Monthly Bulletin of February 2005 illustrating this issue.

The difficulties in empirically estimating concepts such as the long term equilibrium level or the output gap suggest that one should not rely only on empirical models but also use a panoply of indicators to obtain plausible estimates. However, one should be aware that any adjustment, judgmental or of another nature, to any of these concepts has an impact also on the other variables of the model, if consistency is to be preserved.

Furthermore, the difficulty in estimating these unobservable variables has led some to argue that they should not be used in the framework of monetary policy. However, for any monetary policy decision to be based on a consistent framework, it must at least implicitly make reference to some estimate of these unobservable variables.

The third concept is that the level of a variable is as important as its change or its rate of change, not only for forecasting but also for monetary policy. What matters for monetary policy is not only the change in interest rates but also their level. Keeping interest rates unchanged is a policy decision in the same way as changing them.

In fact, according to theoretical models,[4] the amount of monetary accommodation or restriction does not depend on the level of the interest rate itself but on its distance from some measure of the equilibrium rate, generally called the natural interest rate.[5] If the natural interest rate falls because of a negative shock to productivity (due for instance to an oil price shock), but the prevailing interest rate remains constant, all other things being equal, monetary policy has become tighter.
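
Purely to fix ideas (the notation here is illustrative and not taken from any particular model cited above), the stance can be summarised as the gap between the actual real rate and the natural rate:

\[
  s_t \;=\; (i_t - \pi^{e}_{t}) \;-\; r^{n}_{t}
\]

where \(i_t\) is the nominal policy rate, \(\pi^{e}_{t}\) expected inflation and \(r^{n}_{t}\) the natural real rate, so that a negative \(s_t\) indicates accommodation. If \(r^{n}_{t}\) falls while \(i_t\) and \(\pi^{e}_{t}\) are unchanged, \(s_t\) rises: policy has become tighter even though the policy rate has not moved.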

This makes things more complicated, because the natural rate of interest is not observable and very difficult to estimate in real time.[6]

A related concept that one should derive from these models is that the stance of monetary policy should not be assessed on the basis of the level of the interest rate, but should take into account the underlying economic conditions. An unchanged interest rate can imply an unchanged policy as well as a tightening or a relaxation. It all depends on what happens to the variables affecting the economy at a given moment in time. For instance, an unchanged nominal interest rate, while inflationary pressures build up and/or economic growth picks up, implies a monetary expansion. An unchanged interest rate in the face of lower inflationary pressures and fading growth implies instead a monetary tightening.

If this concept is applied in practice, it could easily be verified, for instance, that while the ECB reference rate remained at 2 per cent throughout the second half of 2005, the monetary stance had become more and more expansionary as projected HICP inflation and real GDP growth edged up.

The academic and market literature has developed a whole series of indicators of monetary conditions that have become quite popular. These indicators have to be taken with a lot of caution, because they rely on estimates for variables, such as the natural interest rate and the output gap, which are unobservable, as well as on estimates of parameters. Nonetheless, their attempt at assessing something which is conceptually correct, i.e. that the stance of monetary policy does not depend only on the level of the nominal short-term interest rate, should be praised.

The most popular of these indicators is the Taylor rule, which measures the stance of monetary policy in relation to a (typically fixed, but in some cases time-varying) natural interest rate, the (possibly expected future) deviation of inflation from the target, and the (possibly expected future) output gap. Despite its well-known limitations, this measure makes it very clear that, for a given level of the nominal interest rate, a change in the output and inflation outlook also changes the monetary policy stance.
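
For concreteness, a stylised textbook version of such a rule (the coefficients below are Taylor’s original illustrative values, not estimates for the euro area) can be written as:

\[
  i_t \;=\; r^{*} + \pi_{t} + 0.5\,(\pi_{t} - \pi^{*}) + 0.5\,y_{t}
\]

where \(i_t\) is the prescribed nominal rate, \(r^{*}\) the natural real rate, \(\pi^{*}\) the inflation objective and \(y_t\) the output gap. For an unchanged actual interest rate, an upward revision of the inflation or output gap outlook raises the rate prescribed by the rule, so the unchanged actual rate becomes more accommodative relative to it.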

Again, it is interesting to note that in recent months these indicators all pointed in the same direction in the euro area.

…and a few possible gaps

I have referred so far to standard macro models that forecasters and policy-makers use, explicitly or implicitly. Let me now turn to some questions that available economic models might not fully answer.

The simplest version of the macro model for policy analysis (at least, in economics textbooks) is based on a linear-quadratic approach. According to these models, the central bank typically minimises a quadratic loss function based on a linear description of economic behaviour. The optimal policy of the central bank is derived as a linear relationship between the policy instrument (say a real interest rate, as a deviation from the natural rate) and a policy target (normally a linear combination of inflation and the output gap).[7] In a context of uncertainty, the optimal targeting criterion is replaced by its best forecast.[8]
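
In its simplest textbook form, and purely as a sketch of the set-up just described (the notation is generic rather than that of any specific paper), the problem reads:

\[
  \min \; E_{t} \sum_{k=0}^{\infty} \beta^{k}
  \left[ (\pi_{t+k} - \pi^{*})^{2} + \lambda\, y_{t+k}^{2} \right]
\]

subject to a linear description of the economy, where \(\pi^{*}\) is the inflation objective, \(y\) the output gap and \(\lambda\) the relative weight on output stabilisation. The first-order condition is then itself linear; under discretion, for instance, it takes the “lean against the wind” form \(\kappa\,(\pi_{t} - \pi^{*}) + \lambda\, y_{t} = 0\), with \(\kappa\) the slope of the (linear) Phillips curve, and under uncertainty the same condition is applied to the forecasts of inflation and the output gap.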

These models have two important characteristics. First, any given level of the policy instrument corresponds to a specific level of income and inflation; the relationship between policy and outcomes is monotonic. Second, the optimal behaviour of the central bank under uncertainty is essentially the same as under perfect foresight. This is often called the “certainty equivalence” principle.[9]

This analytical framework suggests that, by controlling the interest rate, the central bank can attain any level of the policy target, with a given time lag, barring unforeseen shocks. Accordingly, if output falls following an unexpected shock, there is a well-defined level of the interest rate which brings output back to its original level. The central bank just has to know the appropriate level at which the interest rate should be set to get the economy back on track.

The problem with these linear models is that they do not allow for a full understanding of the risks of alternative interest rate policies. For instance, standard macro-models suggest that, for a given exogenous shock that produces a negative output gap, the more liquidity is provided in the economy, the faster the recovery will be. An implication of this result would be that there are a priori no obvious reasons to set any lower bound to the reduction of the interest rate to face an economic slowdown, except for the zero bound.

In recent years, however, we have experienced, and to some extent still experience, both in Europe and in the US, long periods characterised by very low real interest rates that have not led to stronger growth. How can this be reconciled with the predictions of the model?

One view, which is consistent with the simple type of macro model I just sketched, would be that in the last few years the euro area economy experienced a sequence of negative shocks that have progressively widened the negative output gap, and counteracted the stimulating effect of the very low level of interest rates.

Another possible interpretation, however, would be that when real interest rates are very low or negative for a very long time, the monetary policy transmission mechanism might change. Since there are not many historical episodes of a very accommodative policy stance maintained on a lasting basis, we do not know enough about this type of situation. In my view, this is an issue which deserves much greater attention than it has received so far.

Let me just point to some research which suggests that further work in this area might be of interest.

A recent book co-authored by Joe Stiglitz points to the fact that much of the mainstream monetary theory “assumes away” too much in terms of how financial markets work in reality.[10] In particular, when asymmetric information is pervasive in credit and financial markets, price mechanisms alone may not always clear markets.

While recent research in the economics of information has examined the possible “pathological” effects of very high real interest rates, we do not seem to know enough about the impact of very low real interest rates over prolonged periods of time. Recently, Raghuram Rajan (IMF) developed some interesting considerations about the possible impact of prolonged very low real interest rates on economic efficiency, in particular on investment, which could result from a general tendency to under-price risk in a context of asymmetric information between lenders and borrowers.[11] For instance, low rates of return might encourage investment behaviour by pension fund and insurance managers that is reckless or speculative from the standpoint of investors.

Rajan's main argument, in short, is that investment may be more sensitive to current, rather than future expected, financing conditions due to the structure of incentives in financial markets. Therefore, agents do not discount the possible correction of liquidity conditions in their investment decisions, which may then turn out (ex post) to have been too risky and ultimately wasteful once the policy accommodation is removed. Since there is significant irreversibility in investment, this phenomenon might have long-lasting consequences on the economy.

Another issue that is little examined in the literature is the impact of a given level of interest rate on the sectoral allocation of resources[12] and the resulting distortionary effects that can arise from a prolonged deviation from the natural rate, as a result of a very accommodative policy.

These are just examples of an infant literature, which needs to be fed.

To sum up, available models may not be sophisticated enough to capture the effects of prolonged periods of very high liquidity such as the one we have experienced in recent years. There are some reasons to believe that this is a relevant issue and that more research is warranted on this matter.

Optimizing monetary policy and minimizing policy errors

According to standard macro models, inflation is a function of the deviation of aggregate demand from the economy’s output potential, i.e. the output gap, and of other exogenous shocks. In the absence of other shocks (notably cost-push shocks), a central bank aiming at price stability should calibrate the level of the interest rate, expressed as a deviation from its natural level, in relation to the forecast output gap.
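
A stylised way of writing this relationship, shown only to fix ideas (a generic New Keynesian Phillips curve, not the specific equation underlying any particular forecast), is:

\[
  \pi_{t} \;=\; \beta\, E_{t}\pi_{t+1} \;+\; \kappa\, y_{t} \;+\; u_{t}
\]

where \(y_t\) is the output gap, \(u_t\) collects exogenous (for example cost-push) shocks, and \(\kappa > 0\). With inflation driven by the expected path of the output gap, a central bank aiming at price stability has to steer the interest rate, relative to its natural level, so that the forecast output gap is consistent with its inflation objective.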

The representation of such an optimal policy is often found to be consistent with the rule originally suggested by John Taylor. An enormous literature has burgeoned on monetary policy reaction functions.

One policy recommendation from this literature is that the interest rate should be set below its natural level when the economy is below potential. If the interest rate is not brought below the natural rate, monetary policy remains too tight, which might aggravate the slack in the economy and generate deflationary pressures. By how much interest rates should be lowered depends on the expected gravity of the slowdown and on the particular nature of the shocks affecting the economy.

Another policy recommendation is to take into account the lags in the transmission of monetary policy. Since monetary policy may influence inflation with a lag of over 24 months, the interest rate response to shocks should be pre-emptive. Policy interest rates should ideally increase as soon as signs of inflationary pressures become apparent, and before inflation actually picks up. If the economy is below potential, but growing at a rate higher than potential, so that the output gap is expected to fall over the next 12 to 24 months, the interest rate should be lower than the natural rate but increasing.

To sum up, in a stylised framework the interest rate cycle should anticipate the economic cycle, by about 12 to 24 months. This means that interest rates should peak well ahead of the peak of the cycle and start rising well ahead of the trough.

This is what the theory suggests. It assumes that we are able to perfectly forecast the peak and the trough of the cycle and that we can estimate at each point in time the natural rate so as to set the optimal interest rate.

Real life is of course different, because many variables needed to implement optimal policy are unknown and difficult to measure or forecast. On the other hand, there is no alternative to making the best possible effort to estimate these key parameters and variables.

Policy makers are fully aware of these complexities. This is why they use a multiplicity of techniques to minimise forecast errors.

First, more than one forecasting model is generally used and the results are cross-checked. This is the reason why our monetary policy strategy is based on two pillars, the economic analysis and the monetary analysis. The monetary analysis can be associated with a reduced-form, longer-term forecasting model of inflation,[13] and the economic analysis with a structural, short- to medium-term forecasting model of inflation. It is well known that forecast combination is generally superior to individual forecasts, especially with the aim of minimising the risk of large errors.[14]
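
To illustrate the statistical point about combination in the simplest possible way, the short sketch below uses entirely made-up numbers and a naive equal-weight average; it is not a description of how the ECB actually combines its two forms of analysis.

    # Stylised illustration of forecast combination (hypothetical numbers).
    # When the errors of two forecasts are imperfectly correlated, their
    # simple average tends to have a smaller mean squared error than
    # either forecast taken alone.

    outturn = [2.1, 2.3, 1.9, 2.2, 2.5, 2.0]   # realised inflation (invented)
    model_a = [2.4, 2.6, 2.1, 2.0, 2.9, 2.3]   # e.g. a structural short-term model
    model_b = [1.8, 2.1, 1.6, 2.5, 2.2, 1.8]   # e.g. a reduced-form longer-term model

    def mse(forecast, actual):
        """Mean squared forecast error."""
        return sum((f - a) ** 2 for f, a in zip(forecast, actual)) / len(actual)

    combined = [(a + b) / 2 for a, b in zip(model_a, model_b)]

    print(f"MSE model A  : {mse(model_a, outturn):.3f}")   # 0.085
    print(f"MSE model B  : {mse(model_b, outturn):.3f}")   # 0.073
    print(f"MSE combined : {mse(combined, outturn):.3f}")  # 0.002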

Judgement is also used in making the forecast, exploiting available indicators. This is particularly important for the assessment of the prevailing cyclical position of the economy. Since economic relationships (and our knowledge of them) change over time, the policy maker will always try to use a combination of models and expert judgement in making the forecast.[15]

Furthermore, any baseline forecast has to be accompanied by an analysis of the risks surrounding the central scenario. This is important because risks (and the losses associated with them) are typically not symmetrically distributed, so they do not necessarily cancel out on average. Moreover, a thorough discussion of the risks helps to better understand and to test the robustness of the assumptions underlying the central scenario of the forecasts.

Type I and Type II errors in monetary policy

Let me try to explain how the difficulties in making forecasts are taken into account by policy makers. I will focus on the most challenging task for monetary policy: turning points.

As I already mentioned, theory suggests that monetary easing should end well ahead of the trough, and the turning point should occur much before the economy starts growing at potential. The pace of the tightening should depend on the strength of the expected recovery.

In practice, things might be more complicated, because there is a lot of uncertainty about when the economy reaches its trough and about the pace at which it is expected to recover.

There is one additional problem: the uncertainty about the end point of the tightening phase, i.e. the so-called natural long-term level of the interest rate. This problem is generally considered as not very relevant until the tightening phase is well advanced.

This is not without some risk. Indeed, if we agree with the standard model, according to which the degree of accommodation provided at any moment in time is defined not by the level of the interest rate itself but by the difference between that rate and its natural level, some assessment of the natural rate must be made, implicitly or explicitly, at any moment in time, including when deciding the timing of a turning point and the policy thereafter. A wrong assessment of the natural rate may induce a wrong assessment of the degree of monetary accommodation prevailing at any moment in time. For instance, if the natural interest rate is lower than previously estimated, the distance between the prevailing rate and its natural level is smaller than assumed, and monetary policy may turn out to be less expansionary than initially thought.

Things are made more complex, from an analytical standpoint, also by the fact that the assessment of the natural rate and of the output gap are obviously interlinked, since the same underlying shocks drive both the natural rate and potential output growth.

Concerning the optimal timing of a turning point, the policy maker is faced with the possibility of making two sorts of mistakes. I will use an analogy with statistical hypothesis testing.

A type I error is made when the recovery takes place and inflationary pressures build up, but this is not adequately forecast in time. This is what policy makers would call being “behind the curve”.

A type II error consists instead in forecasting a recovery that does not materialise as expected. This would imply tightening too soon and being “ahead of the curve”.

A policy maker must carefully weigh the relative costs of a type I versus a type II error. This is consistent with the view that the policy-maker acts as a “risk manager”, as suggested some time ago by the Chairman of the Fed.[16] Hence, the policy-maker needs to have a view on the distribution of risks around the most likely outcome (the point forecast).
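
Purely as a stylised illustration of this “risk manager” logic, the sketch below compares the expected loss of waiting with that of tightening, given an assumed probability that the recovery materialises and assumed (asymmetric) costs for the two errors; all the numbers are invented for the example.

    # Stylised "risk manager" comparison (all numbers are hypothetical).
    # Type I error : waiting while the recovery does materialise ("behind the curve").
    # Type II error: tightening while the recovery does not materialise ("ahead of the curve").

    p_recovery = 0.7   # assumed probability that the recovery materialises

    loss = {
        ("wait", "recovery"):    3.0,  # type I: accommodation kept as inflationary pressures build
        ("wait", "no_recovery"): 0.0,  # waiting was the right call
        ("hike", "recovery"):    0.0,  # tightening was the right call
        ("hike", "no_recovery"): 1.0,  # type II: the recovery is slowed or stopped
    }

    def expected_loss(action):
        return (p_recovery * loss[(action, "recovery")]
                + (1 - p_recovery) * loss[(action, "no_recovery")])

    for action in ("wait", "hike"):
        print(f"expected loss of '{action}': {expected_loss(action):.2f}")
    # With these numbers, waiting costs 2.10 in expectation against 0.30 for
    # tightening; a larger stock of excess liquidity would raise the type I
    # loss and tilt the balance further towards acting.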

A type I error, being behind the curve, entails the risk of becoming more and more accommodative as the economy recovers, adding liquidity instead of withdrawing it and fuelling inflationary pressures. A type I error can eventually be corrected by increasing the speed of tightening down the road.

If there are uncertainties about the expected turning point in the cycle and the projected strength of the recovery, a policy maker might afford to wait for some time before starting to tighten and even risk being late, if there is room to catch up with a faster speed of tightening. This is what a standard model would suggest. However, this view – and this is an important point which I would like to underline and which is often not sufficiently considered – does not take into account the fact that as the tightening is delayed, the economy might accumulate large imbalances which render a sharp tightening more risky, in terms of economic and financial stability.

A well known example is the 1994 tightening in the US. This experience has shown in particular that an economy which is flooded with liquidity, following a persistent period with very low or even negative real rates of return, grows so accustomed to it that it discards any possibility of a return to normality. Hence, a removal of an extraordinary and prolonged degree of accommodation might have to take place at a very gradual pace and has to be well communicated to economic agents, so that they can reduce in a smooth way their overall excess liquidity position.

The main message is that the cost of a type I error increases, the larger the liquidity imbalances accumulated during the easing phase. The cost of a type I error can be reduced if the economy is in a situation with as little excess liquidity as possible. In this case, the cost of taking time and waiting for data to confirm that the economy is indeed strengthening would not come on top of already existing distortions.

Finally, a type I error is less costly (at least in an upturn) if the central bank is very credible. If agents know that the central bank is committed to maintaining price stability over the medium term, they can tolerate a type I error for some time, as the increase in inflation expectations remains contained. However, a central bank cannot rely too much on its acquired credibility and must monitor very closely the long-term inflation expectations embedded in market instruments, to avoid remaining behind the curve for too long.

If markets start doubting the anti-inflationary stance of the central bank, inflation expectations might increase. Reining them back in may require a much larger increase in interest rates. The cost of a type I error would thus increase dramatically.

Let me turn now to the type II error, i.e. tightening too early. The cost of such an error is to put an excessive brake on the economy, which might not be as strong as initially expected, thus slowing down, and possibly even stopping, the recovery. This could even lead to a policy reversal, as interest rates would have to be reduced again. Reversing gear might create a reputation risk. The central bank might also be accused of having delayed the recovery, which could undermine its independence.

The cost of a type II error can be reduced if, at the start of the tightening, the economy benefits from very ample liquidity, following the persistence of very low interest rates.

To sum up, the existence of very ample liquidity in the economy raises the cost of type I errors and reduces the cost of type II errors. The cost of the type II errors can be further minimised if the interest rate rise is seen as a return to a level more consistent with adequate liquidity conditions rather than the start of a tightening cycle.

The relative costs of type I and type II errors are always considered by central banks.

Let me give a few examples. Between 2002 and 2004, the forecasts of all major institutions (including the central scenarios of our own projections) pointed to accelerating economic growth in the euro area over the following two years. A mechanistic policy response would have induced an upward adjustment of interest rates and the start of a tightening cycle. This did not occur because the downside risks to the recovery were given considerable weight in the decision-making process. The risk of making a type II error was taken seriously, while the cost of taking time to wait for more solid signs before tightening was considered to be relatively small. Ex post, this turned out to be the right decision.

At the end of 2005 the balance of risks changed. After more than two years of very low, and partly negative, real rates, the cost of a type I error increased substantially, because of the large amount of liquidity accumulated in the economy. There were signs of substantial distortions in the allocation of resources in the economy. The very high growth rates of mortgage credit and of property prices in euro area countries can be seen as a reflection of such distortions. At the global level, a high degree of risk appetite had developed in financial markets, as reflected for example in high money and credit growth, low bond yields, rising equity and property prices, and low corporate bond spreads.

Under these circumstances, waiting further to obtain stronger data on economic activity would have entailed a further rise in the degree of accommodation, and possibly in the distortions in the economy. The cost of a type I error would have increased excessively.

The cost of a possible type II error was considered to be lower, owing to the ample liquidity available in the economy. Market participants fully perceived that the adjustment in interest rates aimed primarily at withdrawing part of the excess liquidity in the system rather than at starting a typical tightening cycle.

It is clear that the costs of alternative errors may change over time, depending on the projected economic developments and on the imbalances accumulated in the system. These costs are continuously assessed by policy makers, with a view to minimising them.

A good diet is the best prevention

Since monetary policy is forward looking, it is sometimes depicted as part of a prevention strategy, which is better than a cure at a later stage.

The analogy is quite suggestive but may not fully take into account the complexity of monetary policy decisions. Indeed, vaccination prevents fever, but it has to take place at the right time. It should happen neither too late nor too early, especially if the body is still weak and perhaps still under antibiotics.

If vaccination is given while one still is under antibiotics, the body may weaken further, especially if antibiotics have been taken for a long time, maybe too long. The right sequence thus foresees that first the antibiotics cure is stopped and only after a while can the vaccination be given.

One way of characterising the 1 December decision might be that it is part of a phase in which the euro area stops taking antibiotics, which it has absorbed for too long. We have not yet come to the full course of vaccines to prevent influenza (i.e. inflation). The body might need to get stronger before getting the shot.

In any case, I must admit that I have some doubts that the analogy with medicine is the most appropriate one for monetary policy. Doctors are generally considered by their patients as saviours, capable of healing any illness and of providing special medicines that can push the human body even beyond realistic performance, especially for athletes.

That is not the way monetary policy works, nor the way it should work. Monetary policy is neither a medicine nor a drug that can ensure that the economy is always at full potential.

I see central banks more as dieticians, who aim at calibrating the amount of calories as a function of the athlete’s activity and performance. When the athlete is in good shape and trains for a competition, the amount of calories burned by his body increases. If the dietician underestimates the amount of calories that the athlete needs and prescribes an insufficient amount, the muscles do not develop sufficiently and he will not perform well. This is a type II error. If instead the dietician overestimates the amount of calories, the athlete will eat too much and accumulate too much fat. This will also lead him to underperform on the day of the competition. This is a type I error.

If the athlete is not in good shape, for instance because of an injury, giving him more calories will not make him better off. He is most likely to get fatter. The fatter he gets the more he will have to train, at the end of his injury period, in order to get back on track.

Just as a dietician can only aim over time at minimising the percentage of fat in the body, monetary policy can only aim at calibrating the amount of liquidity that the economy needs to develop without inflation. Too much or too little liquidity is bad for the economy, just as too many or too few calories are bad for the body. But liquidity, like calories, cannot by itself systematically increase the performance of the economy over time. That performance can be improved only through structural reforms that raise growth potential, just as an athlete can only improve his performance through training.

The comparison with dieticians may not sound very glorious for central bankers.

However, it matches well the school of thought according to which monetary policy has overall little impact on economic developments, except when major mistakes are made.

This is why central banks should aim primarily at avoiding major policy errors rather than trying to fine tune the economy towards an optimal path.

I tend to belong to this school of thought.

Thank you very much for your attention.

[1] See M. Friedman (1961): “The lag in effect of monetary policy”, Journal of Political Economy, 69, pp. 447-466.

[2] See I. Angeloni, A. K. Kashyap and B. Mojon (2003): Monetary policy transmission in the euro area, Cambridge University Press.

[3] This might depend on the extent to which private agents are forward-looking. If the private sector is very forward-looking, it can be optimal for policy to display some degree of inertia; see M. Woodford (2003): Interest and prices: foundations of a theory of monetary policy, Princeton University Press. However, the point that policy needs to react to forecasts remains generally valid also in these models.

[4] See Woodford (cit.) for a textbook treatment.

[5] This is typically defined as the level of the real interest rate which would prevail in an economy without nominal rigidities.

[6] See the article in the ECB Monthly Bulletin of May 2004, entitled “The natural real interest rate in the euro area”.

[7] More generally, however, the optimal reaction function will be a complicated function of the underlying shocks hitting the economy.

[8] See L. E. O. Svensson and M. Woodford (2003): “Indicator variables for optimal policy”, Journal of Monetary Economics, 50(3), pp. 691-720.

[9] Note that certainty equivalence only holds in the presence of additive uncertainty. It is well known that in the presence of multiplicative uncertainty (i.e. uncertainty on the transmission mechanism) the central bank optimally behaves in a more cautious way than in the case of perfect foresight.

[10] See B. Greenwald and J. Stiglitz (2003): Towards a new paradigm in monetary economics, Raffaele Mattioli Lectures, Cambridge University Press.

[11] See R. Rajan (2005): “Has financial development made the world riskier?”, working paper.

[12] On the sectoral impact of monetary policy in the euro area see, for example, G. Peersman and F. Smets (2005): “The industry effects of monetary policy in the euro area”, Economic Journal, 115(503).

[13] On the role of monetary models in forecasting inflation, see the article in the ECB Monthly Bulletin of October 2004, entitled “Monetary analysis in real time”.

[14] See M. Clements and D. Hendry (2005): A Companion to Economic Forecasting, Blackwell Companions to Contemporary Economics.

[15] On the use of judgement in monetary policy making, see L. Svensson (2005): “Monetary policy with judgement: forecast targeting”, International Journal of Central Banking, March.

[16] See, for instance, the remarks by Chairman Alan Greenspan at the Meetings of the American Economic Association, San Diego, California, 3 January 2004, “Risk and Uncertainty in Monetary Policy”.
