Papers by Guido Travaglini
In this paper we propose a mathematical and statistical model of the dynamic evolution of incomes and of their distributional patterns among the three traditional income classes: the aristocracy, the populace, and the bourgeoisie. After highlighting their major features and historical evolution, the Maddison total and personal income datasets, as well as Gini coefficients and other variables spanning the years 1-2008 A.D., are introduced for eleven West European countries selected among the most representative of the vestiges of the Roman Empire. Most of the series in these datasets are originally available only at low resolution. Their corresponding high-resolution annual observations are computed by means of interpolation and calibration, and are then appropriately denoised. The results of these computations point to three major findings regarding the evolution of personal income levels, growth rates, and distribution over the last 20 centuries in Western Europe: (i) the incomes and living conditions enjoyed during the Roman Era were surpassed only after the inception of the Industrial Revolution; (ii) the Kuznets curves behave in accordance with received theory by significantly exhibiting an inverted-U pattern; (iii) the engine of economic growth is, in the vast majority of cases and perhaps unexpectedly, the bourgeoisie.
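As a rough illustration of the interpolation-and-denoising step described above, the sketch below turns a handful of sparse income benchmarks into an annual series and smooths it. The benchmark years and values are made up for the example (they are not Maddison data), and a simple moving average stands in for whatever denoising procedure the paper actually uses.

```python
# Minimal sketch: turn sparse GDP-per-capita benchmarks into an annual series
# and denoise it. Benchmark values below are illustrative, not Maddison data.
import numpy as np
from scipy.interpolate import PchipInterpolator

bench_years = np.array([1, 1000, 1500, 1700, 1820, 1870, 1913, 1950, 2008])
bench_gdppc = np.array([600, 425, 770, 990, 1200, 1960, 3460, 4570, 21700])  # hypothetical

years = np.arange(1, 2009)
# Shape-preserving interpolation of log income avoids spurious oscillations
annual = np.exp(PchipInterpolator(bench_years, np.log(bench_gdppc))(years))

# Simple centered moving-average denoiser (a stand-in for the paper's procedure)
def moving_average(x, window=11):
    kernel = np.ones(window) / window
    pad = window // 2
    xpad = np.r_[np.full(pad, x[0]), x, np.full(pad, x[-1])]
    return np.convolve(xpad, kernel, mode="valid")

smooth = moving_average(annual)
print(years[-1], round(annual[-1]), round(smooth[-1]))
```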
The Maunder Minimum (MM) was an extended period of reduced solar activity in terms of yearly sunspot numbers (SSN) during 1610 – 1715. The reality of this “grand minimum” is generally accepted in the scientific community, but the statistics of the SSN record suggest a need for data reconstruction. The MM data show a nonstandard distribution compared with the entire SSN signal (1610 – 2014). The pattern does not satisfy the weakly stationary solar dynamo approximation, which characterizes many natural events spanning centuries or even millennia, including the Sun and the stars. Over the entire observation period (1610 – 2014), the reported SSN exhibits statistically significant regime switches, departures from autoregressive stationarity, and growing trends. Reconstruction of the SSN during the pre-MM and MM periods is performed using five novel statistical procedures in support of signal analysis. A Bayesian–Monte Carlo backcast technique is found to be the most reliable and produces an SSN signal that meets the weak-stationarity requirement. The MM signal computed with this reconstruction shows neither a “grand” minimum nor even a “semi-grand” minimum.
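A toy version of the two ingredients named above — a Monte Carlo backcast built from an autoregression fitted to the reliable part of the record, and a weak-stationarity check — is sketched below. The series is synthetic, the backcast uses a plain AR(1) rather than the paper's Bayesian procedure, and an ADF test from statsmodels stands in for the paper's own stationarity tests.

```python
# Toy backcast: fit an AR(1) to the reliable (recent) part of a series, then
# Monte Carlo-simulate the early segment backwards in time and check stationarity.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
recent = 80 + 60 * np.sin(np.arange(300) * 2 * np.pi / 11) + rng.normal(0, 15, 300)

# AR(1) fit by OLS on the time-reversed series (backcasting = forecasting in reverse)
rev = recent[::-1]
y, x = rev[1:], rev[:-1]
X = np.column_stack([np.ones_like(x), x])
c, phi = np.linalg.lstsq(X, y, rcond=None)[0]
sigma = np.std(y - X @ np.array([c, phi]))

n_back, n_sims = 100, 500
sims = np.empty((n_sims, n_back))
for s in range(n_sims):
    level = rev[-1]
    for t in range(n_back):
        level = c + phi * level + rng.normal(0, sigma)
        sims[s, t] = level
backcast = sims.mean(axis=0)[::-1]          # average simulated path, earliest year first

full = np.r_[backcast, recent]
print("ADF p-value of reconstructed series:", round(adfuller(full)[1], 4))
```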
The paper addresses the problem and related issues of Time-Varying Parameter (TVP) estimation, a technique recently introduced in the field of Macro-Econometrics, and especially in FAVAR (Factor-Analysis Vector Auto-Regression) modeling. Different from standard multiple or single regression estimation, where Time-Fixed Parameter (TFP) estimation dominates over the entire sample and may be conducive to the “Lucas Critique”, TVP produces changing parameters which the analyst may use to infer the dynamics underlying the data process, such as structural breaks, changes in covariances, parameter significance, and so on. This advantage, however, comes at a high cost in two respects. The first is the degree of attentiveness required of the analyst in constructing the formal building blocks of FAVAR models, while the second is the machine time required to produce Gibbs-simulated Normal and inverse-Wishart parameter distributions that approximate Bayesian priors.
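To give a concrete feel for the Gibbs-simulated Normal and inverse-Wishart draws mentioned above, here is a minimal single-equation sketch: the coefficients are drawn from their conditional Normal posterior and the error variance from an inverse-gamma, which is the scalar counterpart of the inverse Wishart used in full TVP-FAVAR systems. The data and priors are simulated placeholders, not the paper's model.

```python
# Minimal Gibbs sampler for a single Bayesian regression equation: coefficients
# drawn from a conditional Normal, error variance from an inverse-gamma (the
# scalar counterpart of the inverse Wishart used in full TVP-FAVAR systems).
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, 0.5, -0.8])
y = X @ beta_true + rng.normal(0, 0.7, n)

B0 = np.eye(k) / 100.0            # weak Normal prior precision on beta
a0, b0 = 2.0, 1.0                 # inverse-gamma prior on sigma^2
beta, sig2 = np.zeros(k), 1.0
draws = []
for it in range(3000):
    V = np.linalg.inv(X.T @ X / sig2 + B0)          # conditional posterior of beta
    m = V @ (X.T @ y / sig2)
    beta = rng.multivariate_normal(m, V)
    resid = y - X @ beta                            # conditional posterior of sigma^2
    sig2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * resid @ resid))
    if it >= 1000:                                  # discard burn-in
        draws.append(beta)
print("posterior means:", np.mean(draws, axis=0).round(2))
```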
The goal of this paper is to test on a millennial scale the magnitude of the recent warm period, known as the "hockey stick", and the relevance of the causative anthropogenic climate change hypothesis advanced by several academics and worldwide institutions. A select batch of ten long-term climate proxies included in the NOAA 92 PCN dataset, all of which run well into the 1990s, is updated to the year 2011 by means of a Time-Varying Parameter Kalman Filter SISO model for state prediction. This procedure is applied by selecting as the observable either the HADSST2 or the HADCRUT3 series of instrumental temperature anomalies, available since the year 1850. The updated proxy series are then individually tested for the values and time location of their four largest non-neighboring temperature maxima. The results are at best inconclusive: three of the updated series, including Michael Mann's celebrated and controversial tree-ring reconstructions, do not refute the hypothesis, while the others quite significantly point to different dates of maximum temperature in past centuries, in particular dates associated with the Medieval Warm Period.
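The updating step above relies on a time-varying-parameter Kalman filter with an instrumental temperature series as the observable. The sketch below implements the scalar version of that idea — a proxy loading that follows a random walk, filtered against an observed anomaly series — on synthetic data; the series, noise variances, and initialization are placeholders rather than the paper's calibration.

```python
# Scalar time-varying-parameter Kalman filter: the proxy loading b_t follows a
# random walk and is filtered against an observed temperature-anomaly series.
# All series and variances here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
T = 160
proxy = rng.normal(size=T)                       # stand-in climate proxy
b_true = np.cumsum(rng.normal(0, 0.05, T)) + 1.0 # slowly drifting loading
temp = b_true * proxy + rng.normal(0, 0.3, T)    # stand-in instrumental anomaly

q, r = 0.05**2, 0.3**2       # state and observation noise variances (assumed known)
b, P = 0.0, 10.0             # diffuse initial state
b_filt = np.empty(T)
for t in range(T):
    P = P + q                                    # predict
    e = temp[t] - proxy[t] * b                   # innovation
    S = proxy[t] ** 2 * P + r
    K = P * proxy[t] / S                         # Kalman gain
    b = b + K * e                                # update
    P = (1 - K * proxy[t]) * P
    b_filt[t] = b
print("final filtered loading:", round(b_filt[-1], 2), "true:", round(b_true[-1], 2))
```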
It is mostly undisputed in several scientific fields that the Maunder Minimum (~1645-1715) represents an extended period of reduced solar activity with likely chilling effects on the Earth’s climate. In fact, the measured Sunspot Numbers (SSN) collected during that period, and later popularized by Hoyt and Schatten, and by Eddy, represent a significant outlier with respect to both the preceding and the following periods. However, the SSN yearly time series spanning the period 1610-1715 is severely flawed from the observational viewpoint, because its statistical pattern defies the evolution of any long-run physical phenomenon by exhibiting significant regime switches, departures from stationarity, and growing trends. The plea for reconstructing more homogeneous SSN time series for this period is here heeded by developing several competing Bayesian-flavored Monte Carlo simulation and instrumental-proxy calibration techniques. Their implementation is accompanied by newly developed procedures for signal smoothing and stationarity testing. In all cases but one, after due selection of the variables involved, the Maunder Minimum (MM) is found to be nonexistent, while the Monte Carlo simulation technique for SSN reconstruction is found to statistically outperform the calibration-based methods, casting doubt on their commonplace utilization.
Solar activity, as measured by the revisited yearly time series of sunspot numbers (SSN) for the period 1700-2014 (Clette et al., 2014), undergoes in this paper a triple statistical and econometric checkup. The conclusions are that the SSN sequence: (1) is best modeled as a signal featuring nonlinearity in mean and variance, long memory, mean reversion, ‘threshold’ symmetry, and stationarity; (2) is best described as a discrete damped harmonic oscillator which linearly approximates the flux-transport dynamo model; and (3) when predicted well into the 22nd century, points to a substantial fall of the SSN centered around the year 2030. In addition, the first and last Gleissberg cycles show almost the same peak number and height during the period considered, although the former slightly prevails when measured by means of the estimated smoother. All of these conclusions are reached by using modern tools developed in the field of Financial Econometrics, together with two newly proposed procedures for signal smoothing and prediction.
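Conclusion (2) above treats the SSN as a discrete damped harmonic oscillator. A minimal way to see what that means is to fit an AR(2), whose complex characteristic roots imply a quasi-cycle with a period and a damping factor that can be read off the coefficients. The series below is synthetic and the estimator is plain OLS, not the paper's procedure.

```python
# An AR(2) is the discrete analogue of a damped harmonic oscillator: complex
# characteristic roots imply a quasi-cycle whose period and damping can be read
# off the fitted coefficients. The series below is synthetic, not the SSN.
import numpy as np

rng = np.random.default_rng(3)
T, period, damp = 400, 11.0, 0.97
phi1 = 2 * damp * np.cos(2 * np.pi / period)
phi2 = -damp**2
x = np.zeros(T)
for t in range(2, T):
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + rng.normal()

# OLS fit of the AR(2)
Y, X = x[2:], np.column_stack([x[1:-1], x[:-2]])
a1, a2 = np.linalg.lstsq(X, Y, rcond=None)[0]

r = np.sqrt(-a2)                                  # damping per step
theta = np.arccos(a1 / (2 * r))                   # angular frequency
print("implied period:", round(2 * np.pi / theta, 1), "damping:", round(r, 3))
```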
It is mostly undisputed in several scientific fields that the Maunder Minimum (~1645-1715) represents an extended period of reduced solar activity with chilling effects on the Earth’s climate. In fact, the measured Sunspot Numbers (SSN) collected during that period, and later popularized by Hoyt and Schatten, and by Eddy, represent a significant outlier with respect to both the preceding period and the following period, which includes both the Dalton Minimum and the Modern Minimum. However, these SSN are, to date, observationally inconsistent with some renowned past temperature reconstructions. In other words, the Maunder Minimum may not have been characterized by a total absence of the standard mean 11-year solar cycle, by zero-mean values of the SSN, or by Earth temperatures comparable to those of the Cryogenian Era, even when accounting for the “Great Frost” of 1709, which ravaged Europe. Appropriate calibration of the SSN for the years 1610-1699 by means of Gibbs sampling, and their subsequent reconstruction, demonstrate that the Maunder Minimum is not a significant outlier period, in either intensity or duration, within the Sun’s and the Earth’s timelines of the last four centuries.
ABSTRACT Income convergence is here tested for the 25 non-oil Heston-Summers countries for which physical capital data are available. β-convergence is tested via a dynamic Cobb-Douglas growth equation, both in panel and in single-country form. σ-convergence is tested through the stationarity of unconditional and conditional time series of single-country income deviations from the sample mean. Although the two methods are (weakly) related to each other, conflicting results emerge: while conditional β-convergence cannot be significantly rejected (accepted) at the panel (single-country) level, both unconditional and conditional σ-convergence cannot be significantly accepted at the single-country level. In essence, while the two forms of convergence are empirically inconsistent with one another, the country-specific growth story holds very well, insofar as its standard determinants widely differ among nations.
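The two convergence notions above can be illustrated compactly: β-convergence as a negative coefficient on initial income in a growth regression, σ-convergence as a declining cross-country dispersion of log income over time. The incomes below are simulated, not the Heston-Summers data, and the cross-section regression is only a shorthand for the paper's dynamic panel and time-series tests.

```python
# Illustration of the two convergence tests: beta-convergence (growth regressed
# on initial income) and sigma-convergence (trend in cross-country dispersion).
# Incomes are simulated, not the Heston-Summers data.
import numpy as np

rng = np.random.default_rng(4)
countries, years = 25, 40
y0 = rng.uniform(7.0, 10.0, countries)                  # initial log income
growth = 0.05 - 0.004 * y0 + rng.normal(0, 0.005, countries)
log_y = y0[:, None] + np.outer(growth, np.arange(years))
log_y += rng.normal(0, 0.02, (countries, years))

# beta-convergence: average growth regressed on initial income
g = (log_y[:, -1] - log_y[:, 0]) / (years - 1)
X = np.column_stack([np.ones(countries), log_y[:, 0]])
alpha, beta = np.linalg.lstsq(X, g, rcond=None)[0]

# sigma-convergence: trend of the cross-section standard deviation over time
disp = log_y.std(axis=0)
slope = np.polyfit(np.arange(years), disp, 1)[0]
print("beta coefficient:", round(beta, 4), "(negative => beta-convergence)")
print("dispersion trend:", round(slope, 4), "(negative => sigma-convergence)")
```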
In this paper I propose a nonstandard t-test statistic for detecting n ≥ 1 level and trend breaks of I(0) series. Theoretical and limit-distribution critical values obtained from Monte Carlo experimentation are supplied. The null hypothesis of anthropogenic versus natural causes of global warming is then tested for the period 1850-2006 by means of a dynamic GMM model which incorporates the null of n ≥ 1 breaks of anthropogenic origin. World average temperatures are found to have been tapering off for a few decades by now, and to exhibit no significant breaks attributable to human activities. While these play a minor causative role in climate change, most natural forcings, and in particular solar sunspots, are major warmers. Finally, in contrast to widely held opinions, greenhouse gases are in general temperature dimmers.
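Critical values for break-test statistics of this kind are typically tabulated by simulation. The sketch below shows the generic recipe: simulate an I(0) series under the no-break null, scan candidate break dates, keep the largest absolute t-statistic of a level-shift dummy in each replication, and read off empirical quantiles. It is a generic sup-|t| illustration, not the paper's exact statistic or its trimming and sample-size choices.

```python
# Monte Carlo critical values for a sup-|t| level-break test on an I(0) series:
# simulate white noise under the no-break null, scan candidate break dates, and
# keep the largest absolute t-statistic of the break dummy in each replication.
import numpy as np

rng = np.random.default_rng(5)
T, reps, trim = 150, 1000, 15
max_t = np.empty(reps)
for rep in range(reps):
    y = rng.normal(size=T)
    best = 0.0
    for tb in range(trim, T - trim):
        d = (np.arange(T) >= tb).astype(float)       # level-shift dummy
        X = np.column_stack([np.ones(T), d])
        coef = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ coef
        s2 = resid @ resid / (T - 2)
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        best = max(best, abs(coef[1] / se))
    max_t[rep] = best
print("5% critical value:", round(np.quantile(max_t, 0.95), 2))
```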
ABSTRACT A portfolio-balance model is applied to the nominal demand for money (M1, M2 and M3) and empirically tested for West Germany, 1971.I - 1981.IV. Some major hypotheses about the different time-space domain covered by the money aggregates are considered, such as the asset versus transactions purpose, the impact of foreign financial markets, and the structural stability issue. Empirical findings widely support the contention that the larger the money aggregate, the wider the time-space domain: hence M2 and M3, as opposed to M1, are significantly related to permanent income, are direct substitutes for durable goods, and are somewhat affected by changes in international financial markets. Also, the average adjustment period tends to infinity when passing from M1 to M2. Since no foreign-bond substitutability and no structural instability are found for any aggregate, it is concluded that domestic monetary policy is fairly effective in pursuing stabilization objectives through either stock or interest rate instruments.
ABSTRACT A dynamic GDP growth-optimizing program with a predator-prey and discrete logistic growth constraint is introduced and empirically tested for a few major countries. Its Euler equation representation is derived in order to examine the degrees of international dependence, convergence, stability, and time drift. It is shown that dependence is crucial to each country's growth and that convergence relative to the US, albeit different across countries, takes place owing both to a negative time drift in the US and a positive one elsewhere. This is probably due to the discount rate of some countries that value the future more than the present. Certainly, most countries have attained GDP growth rates and levels higher than predicted by their own optimizing programs.
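To make the flavor of the growth constraint concrete, the recursion below iterates a minimal two-country version: a follower's GDP grows logistically toward a ceiling and is also pulled along by the leader's level through a predator-prey style cross term. The functional form and all parameter values are illustrative only, not the paper's estimated program.

```python
# Minimal discrete logistic growth with a predator-prey style interaction term:
# the follower's GDP grows logistically toward a ceiling and is also pulled by
# the leader's level. Parameters are illustrative, not estimated values.
import numpy as np

T = 100
r, K, gamma = 0.08, 100.0, 0.02    # logistic rate, carrying capacity, interaction
leader = 50 * 1.02 ** np.arange(T) # leader grows at a steady 2% per period
follower = np.empty(T)
follower[0] = 10.0
for t in range(1, T):
    logistic = r * follower[t - 1] * (1 - follower[t - 1] / K)
    interaction = gamma * leader[t - 1] * follower[t - 1] / K
    follower[t] = follower[t - 1] + logistic + interaction
print("follower GDP after", T, "periods:", round(follower[-1], 1))
```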
The goal of this paper is to empirically test for structural breaks of world mean temperatures that may have triggered, at some date, the phenomenon known as "Climate Change" or "Global Warming". Estimation by means of the dynamic Generalized Method of Moments is conducted on a large dataset spanning the recorded period from 1850 to the present, and different tests and selection procedures among competing model specifications are utilized, such as Principal Component and Principal Factor Analysis, instrument validity, and changes over time in parameters and in the shares of both natural and anthropogenic forcings. The estimation results unmistakably show no involvement of anthropogenic forcings and no occurrence of significant breaks in world mean temperatures. Hence the hypothesis of a climate change in the last 150 years, suggested by the advocates of Global Warming, is rejected. Pacific Decadal Oscillations, sunspots, and the major volcanic eruptions account for the lion's share in determining world temperatures, the first being a dimmer and the others substantial warmers.
This paper combines two major strands of literature: structural breaks and Taylor rules. First, I propose a nonstandard t-test statistic for detecting multiple level and trend breaks of I(0) series, supplying theoretical and limit-distribution critical values obtained from Monte Carlo experimentation. Thereafter, I introduce a forward-looking Taylor rule expressed as a dynamic model which allows for multiple breaks and for reaction-function coefficients on the leads of inflation, the output gap, and an equity market index. Sequential GMM estimation of the model, applied to the Effective Federal Funds Rate for the period 1984:01-2001:06, produces three main results: the existence of significant structural breaks, the substantial role played by inflation in the FOMC decisions, and a marked equity-targeting policy approach. These results reveal departures from rationality, determined by structured and unstructured uncertainty, which the Fed systematically attempts to reduce by administering inflation scares and misinformation about the actual Phillips curve in order to keep the output and equity markets under control.
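For readers unfamiliar with the specification, the sketch below writes out a smoothed forward-looking Taylor rule with expected inflation, an output gap, an equity-return term, and a level-break dummy, and fits it by OLS on simulated data. The paper itself uses sequential GMM on the Effective Federal Funds Rate; the coefficients, break date, and series here are invented for illustration.

```python
# Illustrative forward-looking Taylor rule with a level break, estimated by OLS
# on simulated data (the paper uses sequential GMM on the Effective Federal
# Funds Rate; this only sketches the specification).
import numpy as np

rng = np.random.default_rng(6)
T = 210
infl_lead = 2 + rng.normal(0, 0.5, T)        # expected (lead) inflation
gap = rng.normal(0, 1.0, T)                  # output gap
equity = rng.normal(0, 2.0, T)               # equity-market return
brk = (np.arange(T) >= 120).astype(float)    # structural break dummy

rate = np.empty(T)
rate[0] = 5.0
for t in range(1, T):                        # partial-adjustment (smoothed) rule
    target = 2.0 + 1.5 * infl_lead[t] + 0.5 * gap[t] + 0.1 * equity[t] - 1.0 * brk[t]
    rate[t] = 0.8 * rate[t - 1] + 0.2 * target + rng.normal(0, 0.1)

X = np.column_stack([np.ones(T - 1), rate[:-1], infl_lead[1:], gap[1:], equity[1:], brk[1:]])
coef = np.linalg.lstsq(X, rate[1:], rcond=None)[0]
rho = coef[1]
print("smoothing:", round(rho, 2), "long-run inflation response:", round(coef[2] / (1 - rho), 2))
```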
ABSTRACT Supervised Principal Component Analysis (SPCA) and Factor Instrumental Variables (FIV) are competing methods aimed at estimating models affected by regressor collinearity and at extracting a reduced-size instrument set from a large database possibly dominated by non-exogeneity and weakness. While the first method stresses the role of the regressors by taking account of their data-induced tie with the endogenous variable, the second places absolute relevance on the data-induced structure of the covariance matrix and selects the true common factors as instruments by means of formal statistical procedures. Theoretical analysis and Monte Carlo simulations demonstrate that FIV is more efficient than SPCA and standard Generalized Method of Moments (GMM), even when the instruments are few and possibly weak. The preferred FIV estimation is then applied to a large dataset to test the more recent theories on the determinants of total violent crime and homicide trends in the United States for the period 1982-2005. Demographic variables, especially abortion, together with law enforcement and unchecked gun availability, are found to be the most significant determinants.
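A compressed illustration of the factor-instrument idea behind FIV: extract the leading principal components of a large candidate-instrument panel and use them as instruments in a two-stage least-squares regression. The data are simulated, the number of factors is fixed by hand rather than chosen by the paper's formal selection tests, and plain 2SLS stands in for the GMM estimator.

```python
# Factor-style instruments: take the leading principal components of a large
# candidate-instrument panel and use them in two-stage least squares. Data are
# simulated; the number of factors is fixed by hand, not formally selected.
import numpy as np

rng = np.random.default_rng(7)
n, p, k = 300, 60, 2
F = rng.normal(size=(n, k))                          # latent common factors
Z = F @ rng.normal(size=(k, p)) + rng.normal(0, 1, (n, p))       # large instrument panel
u = rng.normal(size=n)
x = F @ np.array([1.0, -0.5]) + 0.8 * u + rng.normal(0, 0.5, n)  # endogenous regressor
y = 1.0 + 2.0 * x + u                                # structural equation, true slope = 2

Zc = Z - Z.mean(axis=0)
U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
factors = U[:, :k] * s[:k]                           # estimated factor scores

# 2SLS: first stage projects x on the factor instruments, second stage uses x_hat
W = np.column_stack([np.ones(n), factors])
x_hat = W @ np.linalg.lstsq(W, x, rcond=None)[0]
X2 = np.column_stack([np.ones(n), x_hat])
beta_iv = np.linalg.lstsq(X2, y, rcond=None)[0]
X_ols = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X_ols, y, rcond=None)[0]
print("OLS slope:", round(beta_ols[1], 2), "factor-IV slope:", round(beta_iv[1], 2))
```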
A comparison between Principal Component Analysis (PCA) and Factor Analysis (FA) is performed both theoretically and empirically for a random matrix X (n × p), where n is the number of observations and both dimensions may be very large. The comparison surveys the asymptotic properties of the factor scores, of the singular values, and of all other elements involved, as well as the characteristics of the methods utilized for detecting the true dimension of X. In particular, the norms of the FA scores, whatever their number, and the norms of their covariance matrix are shown to be always smaller and to decay faster as n → ∞. This causes the FA scores, when utilized as regressors and/or instruments, to produce more efficient slope estimators in instrumental variable estimation. Moreover, as compared to PCA, the FA scores and factors exhibit a higher degree of consistency, because the difference between the estimated quantities and their true counterparts is smaller, and so is the corresponding variance. Finally, FA usually selects a much smaller number of scores than PCA, greatly facilitating the search for and identification of the common components of X.
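A quick numerical check of the score-norm comparison above can be run by extracting PCA and FA scores from the same simulated data matrix and comparing their norms. Here sklearn's implementations stand in for the paper's estimators, and the simulated matrix is not the X studied in the paper.

```python
# Side-by-side PCA and FA scores on the same simulated data matrix, comparing
# the norms of the extracted scores (sklearn stands in for the paper's own
# estimators; the data are not the matrix X studied in the paper).
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(8)
n, p, k = 500, 40, 3
F = rng.normal(size=(n, k))
loadings = rng.normal(size=(k, p))
X = F @ loadings + rng.normal(0, 1.0, (n, p))        # common factors plus noise

pca_scores = PCA(n_components=k).fit_transform(X)
fa_scores = FactorAnalysis(n_components=k).fit_transform(X)

print("PCA score norm:", round(np.linalg.norm(pca_scores), 1))
print("FA  score norm:", round(np.linalg.norm(fa_scores), 1))
```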
In this paper a Cobb-Douglas utility function is introduced and solved for a dynamic equation of property crime supply and its determinants, namely deterrents and income. All variables are then empirically tested, by means of a simultaneous equations model, for the sign and magnitude of their mutual relationships in a panel covering Italy and its two economically and culturally different areas, the North and the South.
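Since a Cobb-Douglas utility maximization delivers a log-linear (constant-elasticity) supply equation, a minimal single-equation sketch of that specification is given below: it simulates and fits a dynamic crime-supply equation with deterrence and income elasticities. The data and parameter values are invented, and the single OLS equation only hints at the paper's simultaneous-equations panel for Italy.

```python
# A Cobb-Douglas utility maximization yields a log-linear (constant-elasticity)
# crime supply equation; this sketch simulates and fits such an equation
# (illustrative single-equation version, not the paper's simultaneous system).
import numpy as np

rng = np.random.default_rng(9)
T = 200
ln_det = rng.normal(0, 0.3, T)                # log deterrence (e.g. clear-up rate)
ln_inc = rng.normal(0, 0.2, T)                # log income
ln_crime = np.empty(T)
ln_crime[0] = 0.0
for t in range(1, T):                         # dynamic constant-elasticity supply
    ln_crime[t] = (0.5 + 0.6 * ln_crime[t - 1] - 0.8 * ln_det[t]
                   + 0.4 * ln_inc[t] + rng.normal(0, 0.1))

X = np.column_stack([np.ones(T - 1), ln_crime[:-1], ln_det[1:], ln_inc[1:]])
coef = np.linalg.lstsq(X, ln_crime[1:], rcond=None)[0]
print("deterrence elasticity:", round(coef[2], 2), "income elasticity:", round(coef[3], 2))
```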