Theses and Dissertations
Browsing Theses and Dissertations by Title
Now showing 1 - 20 of 55
Item: A comprehensive approach to spatial and spatiotemporal dependence modeling (2000)
Baggett, Larry Scott; Ensor, Katherine B.
One of the most difficult tasks of modeling spatial and spatiotemporal random fields is that of deriving an accurate representation of the dependence structure. In practice, the researcher is faced with selecting the best empirical representation of the data, the proper family of parametric models, and the most efficient method of parameter estimation once the model is selected. Each of these decisions has direct consequence on the prediction accuracy of the modeled random field. In order to facilitate the process of spatial dependence modeling, a general class of covariogram estimators is introduced. They are derived by direct application of Bochner's theorem on the Fourier-Bessel series representation of the covariogram. Extensions are derived for one, two and three dimensions and spatiotemporal extensions for one, two and three spatial dimensions as well. A spatial application is demonstrated for prediction of the distribution of sediment contaminants in Galveston Bay estuary, Texas. Also included is a spatiotemporal application to generate predictions for sea surface temperatures adjusted for periodic climatic effects from a long-term study region off southern California.

Item: A stochastic approach to prepayment modeling (1996)
Overley, Mark S.; Thompson, James R.
A new type of prepayment model for use in the valuation of mortgage-backed securities is presented. The model is based on a simple axiomatic characterization of the prepayment decision by the individual in terms of a continuous time, discrete state stochastic process. One advantage of the stochastic approach compared to a traditional regression model is that information on the variability of prepayments is retained. This information is shown to have a significant effect on the value of mortgage-backed derivative securities. Furthermore, the model explains important path dependent properties of prepayments such as seasoning and burnout in a natural way, which improves fit accuracy for mean prepayment rates. This is demonstrated by comparing the stochastic mean to a nonlinear regression model based on time and mortgage rate information for generic Ginnie Mae collateral.

Item: A time series approach to quality control (1991)
Dittrich, Gayle Lynn; Ensor, Katherine B.
One way that a process may be said to be "out-of-control" is when a cyclical pattern exists in the observations over time. It is necessary that an accurate control chart be developed to signal when a cycle is present in the process. Two control charts have recently been developed to deal with this problem. One, based on the periodogram, provides a test based on a finite number of frequencies. The other method uses a test which estimates a statistic which covers all frequency values. However, both methods fail to estimate the frequency value of the cycle and are computationally difficult. A new control chart is proposed which not only covers a continuous range of frequency values, but also estimates the frequency of the cycle.
In addition, it is easier to understand and compute than the other two methods.

Item: An Approach for the Adaptive Solution of Optimization Problems Governed by Partial Differential Equations with Uncertain Coefficients (2012-09-05)
Kouri, Drew; Heinkenschloss, Matthias; Sorensen, Danny C.; Riviere, Beatrice M.; Cox, Dennis D.
Using derivative-based numerical optimization routines to solve optimization problems governed by partial differential equations (PDEs) with uncertain coefficients is computationally expensive due to the large number of PDE solves required at each iteration. In this thesis, I present an adaptive stochastic collocation framework for the discretization and numerical solution of these PDE-constrained optimization problems. This adaptive approach is based on dimension-adaptive sparse grid interpolation and employs trust regions to manage the adapted stochastic collocation models. Furthermore, I prove the convergence of sparse grid collocation methods applied to these optimization problems as well as the global convergence of the retrospective trust region algorithm under weakened assumptions on gradient inexactness. In fact, if one can bound the error between actual and modeled gradients using reliable and efficient a posteriori error estimators, then the global convergence of the proposed algorithm follows. Moreover, I describe a high-performance implementation of my adaptive collocation and trust region framework using the C++ programming language with the Message Passing Interface (MPI). Many PDE solves are required to accurately quantify the uncertainty in such optimization problems; therefore, it is essential to appropriately choose inexpensive approximate models and large-scale nonlinear programming techniques throughout the optimization routine. Numerical results for the adaptive solution of these optimization problems are presented.

Item: An approach to modeling a multivariate spatial-temporal process (2000)
Calizzi, Mary Anne; Ensor, Katherine B.
Although modeling of spatial-temporal stochastic processes is a growing area of research, one underdeveloped area in this field is the multivariate space-time setting. The motivation for this research originates from air quality studies. By treating each air pollutant as a separate variable, the multivariate approach will enable modeling of not only the behavior of the individual pollutants but also the interaction between pollutants over space and time. Studying both the spatial and the temporal aspects of the process gives a more accurate picture of the behavior of the process. A bivariate state-space model is developed and includes a covariance function which can account for the different cross-covariances across space and time. The Kalman filter is used for parameter estimation and prediction. The model is evaluated through its predictive performance in an air-quality application.

Item: An examination of some open problems in time series analysis (2005)
Davis, Ginger Michelle; Ensor, Katherine B.
We investigate two open problems in the area of time series analysis. The first is developing a methodology for multivariate time series analysis when our time series has components that are both continuous and categorical. Our specific contribution is a logistic smooth transition regression (LSTR) model whose transition variable is related to a categorical variable. This methodology is necessary for series that exhibit nonlinear behavior dependent on a categorical variable.
The estimation procedure is investigated both with simulation and an economic example. The second contribution to time series analysis is examining the evolving structure in multivariate time series. The application area we concentrate on is financial time series. Many models exist for the joint analysis of several financial instruments, such as securities, because they are not independent. These models often assume some type of constant behavior between the instruments over the time period of analysis. Instead of imposing this assumption, we are interested in understanding the dynamic covariance structure in our multivariate financial time series, which will provide us with an understanding of changing market conditions. In order to achieve this understanding, we first develop a multivariate model for the conditional covariance and then examine that estimate for changing structure using multivariate techniques. Specifically, we simultaneously model individual stock data that belong to one of three market sectors and examine the behavior of the market as a whole as well as the behavior of the sectors. Our aims are detecting and forecasting unusual changes in the system, such as market collapses and outliers, and understanding the issue of portfolio diversification in multivariate financial series from different industry sectors. The motivation for this research concerns portfolio diversification. We do not make the false assumption that investments in different industry sectors are uncorrelated; instead, we assume that the comovement of stocks within and between sectors changes with market conditions. Some of these market conditions include market crashes or collapses and common external influences.

Item: An Old Dog Learns New Tricks: Novel Applications of Kernel Density Estimators on Two Financial Datasets (2017-12-01)
Ginley, Matthew Cline; Ensor, Katherine B.; Scott, David W.
In our first application, we contribute two nonparametric simulation methods for analyzing Leveraged Exchange Traded Fund (LETF) return volatility and how this dynamic is related to the underlying index. LETFs are constructed to provide the indicated leverage multiple of the daily total return on an underlying index. LETFs may perform as expected on a daily basis; however, fund issuers state there is no guarantee of achieving the multiple of the index return over longer time horizons. Most, if not all, LETF returns data are difficult to model because of the extreme volatility present and the limited availability of data. First, to isolate the effects of daily, leveraged compounding on LETF volatility, we propose an innovative method for simulating daily index returns with a chosen constraint on the multi-day period return. By controlling for the performance of the underlying index, the range of volatilities observed in a simulated sample can be attributed to compounding with leverage and the presence of tracking errors. Second, to overcome the limited history of LETF returns data, we propose a method for simulating implied LETF tracking errors while still accounting for their dependence on underlying index returns. This allows for the incorporation of the complete history of index returns in an LETF returns model. Our nonparametric methods are flexible, easily incorporating any chosen number of days, leverage ratios, or period return constraints, and can be used in combination or separately to model any quantity of interest derived from daily LETF returns.
For our second application, we tackle binary classification problems with extremely low class 1 proportions. These "rare events" problems are a considerable challenge, which is magnified when dealing with large datasets. Having a minuscule count of class 1 observations motivates the implementation of more sophisticated methods to minimize forecasting bias towards the majority class. We propose an alternative to established up-sampling or down-sampling algorithms, driven by kernel density estimators, that transforms the class labels into continuous targets. Having effectively transformed the problem from classification to regression, we argue that, under the assumption of a monotonic relationship between predictors and the target, approximations of the majority class are possible in a rare events setting with the use of simple heuristics. By significantly reducing the burden posed by the majority class, the complexities of minority class membership can be modeled more effectively using monotonically constrained nonparametric regression methods. Our approach is demonstrated on a large financial dataset with an extremely low class 1 proportion. Additionally, novel feature engineering is introduced to assist in the application of the density estimator used for class label transformation.

Item: Approximate dynamic factor models for mixed frequency data (2015-10-15)
Zhao, Xin; Ensor, Katherine; Kimmel, Marek; Sizova, Natalia
Time series observed at different temporal scales cannot be simultaneously analyzed by traditional multivariate time series methods. Adjustments must be made to address issues of asynchronous observations. For example, many macroeconomic time series are published quarterly and other price series are published monthly or daily. Common solutions to the analysis of asynchronous time series include data aggregation, mixed frequency vector autoregressive models, and factor models. In this research, I set up a systematic approach to the analysis of asynchronous multivariate time series based on an approximate dynamic factor model. The methodology treats observations of various temporal frequencies as contemporaneous series. A two-step model estimation and identification scheme is proposed. This method allows explicit structural restrictions that account for appropriate temporal ordering of the mixed frequency data. The methodology consistently estimates the dynamic factors; no prior knowledge of the factors is required. To ensure a computationally efficient and robust algorithm and model specification, I make use of modern penalized likelihood methodologies. The fitted model captures the effects of temporal relationships across the asynchronous time series in an interpretable manner. The methodology is studied through simulation and applied to several examples. The simulations and examples demonstrate good performance in model specification, estimation and out-of-sample forecasting.

Item: Autocorrelated data in quality control charts (1994)
Hood, Terri Frantom; Ensor, Katherine B.
Control charts are regularly developed with the assumption that the process observations are independent. However, a common occurrence in certain industries is the collection of autocorrelated data. Two approaches are investigated that deal with this issue. The time series approach is based on modeling the data with an appropriate time series model to remove the autocorrelative structure. The EWMA approach is based on modeling the observations as a weighted average of previous data.
The residuals from the two approaches are plotted on control charts and the average run lengths are compared. Both methods are applied to simulations that generate in-control data and data that have strategically located nonstandard conditions. The nonstandard conditions simulated are process change, linear drift, mean shift, and variance shift. It is proposed that the time series approach tends to perform better in these situations.

Item: Characterizing Production in the Barnett Shale Resource: Essays on Efficiency, Operator Effects and Well Decline (2016-04-21)
Seitlheko, Likeleli; Hartley, Peter R.
This dissertation is composed of three papers in the field of energy economics. The first paper estimates revenue and technical efficiency for more than 11,000 wells that were drilled in the Barnett between 2000 and 2010, and also examines how the efficiency estimates differ among operators. To achieve this objective, we use stochastic frontier analysis and a two-stage semi-parametric approach that consists of data envelopment analysis in the first stage and a truncated linear regression in the second stage. The stochastic frontier analysis (SFA) and data envelopment analysis (DEA) commonly identify only two operators as more revenue and technically efficient than Devon, the largest operator in the Barnett. We further find that operators have generally been effective at responding to market incentives and producing the revenue-maximizing mix of gas and oil given the prevailing prices. Furthermore, coupled with this last result is the insight that most of the revenue inefficiency is derived from technical inefficiency and not allocative inefficiency. The second paper uses multilevel modeling to examine relative operator effects on revenue generation and natural gas output during the 2000-2010 period. The estimated operator effects are used to determine which operators were more effective at producing natural gas or generating revenue from oil and gas. The operators clump together into three groups – average, below average, and above average – and the effects of individual operators within each group are largely indistinguishable from one another. Among the operators that are estimated to have above average effects in both the gas model and the revenue model are Chesapeake, Devon, EOG and XTO, the top four largest operators in the Barnett. The results also reveal that between-operator differences account for a non-trivial portion of the residual variation in gas or revenue output that remains after controlling for well-level characteristics, and prices in the case of the revenue model. In the third paper, we estimate an econometric model describing the decline of a “typical” well in the Barnett shale. The data cover more than 15,000 wells drilled in the Barnett between 1990 and mid-2011. The analysis is directed at testing the hypothesis proposed by Patzek, Male and Marder (2014) that linear flow rather than radial flow – the latter of which is consistent with the Arps (1945) system of equations – governs natural gas production within hydraulically fractured wells in extremely low permeability shale formations. To test the hypothesis, we use a fixed effects linear model with Driscoll-Kraay standard errors, which are robust to autocorrelation and cross-sectional correlation, and estimate the model separately for horizontal and vertical wells. For both horizontal and vertical shale gas wells in the Barnett, we cannot reject the hypothesis of a linear flow regime.
This implies that the production profile of a Barnett well can be projected – within some reasonable margin of error – using the decline curve equation of Patzek, Male and Marder (2014) once initial production is known. We then estimate productivity tiers by sampling from the distribution of the length-normalized initial production of horizontal wells and generate type curves using the decline curve equation of Patzek, Male and Marder (2014). Finally, we calculate the drilling cost per EUR (expected ultimate recovery) and the breakeven price of natural gas for all the tiers.

Item: Computational and Statistical Methodology for Highly Structured Data (2020-09-15)
Weylandt, Michael; Ensor, Katherine B.
Modern data-intensive research is typically characterized by large scale data and the impressive computational and modeling tools necessary to analyze it. Equally important, though less remarked upon, is the structure present in large data sets. Statistical approaches that incorporate knowledge of this structure, whether spatio-temporal dependence or sparsity in a suitable basis, are essential to accurately capture the richness of modern large scale data sets. This thesis presents four novel methodologies for dealing with various types of highly structured data in a statistically rich and computationally efficient manner. The first project considers sparse regression and sparse covariance selection for complex valued data. While complex valued data is ubiquitous in spectral analysis and neuroimaging, typical machine learning techniques discard the rich structure of complex numbers, losing valuable phase information in the process. A major contribution of this project is the development of convex analysis for a class of non-smooth "Wirtinger" functions, which allows high-dimensional statistical theory to be applied in the complex domain. The second project considers clustering of large scale multi-way array ("tensor") data. Efficient clustering algorithms for convex bi-clustering and co-clustering are derived and shown to achieve an order-of-magnitude speed improvement over previous approaches. The third project considers principal component analysis for data with smooth and/or sparse structure. An efficient manifold optimization technique is proposed which can flexibly adapt to a wide variety of regularization schemes, while efficiently estimating multiple principal components. Despite the non-convexity of the manifold constraints used, it is possible to establish convergence to a stationary point. Additionally, a new family of "deflation" schemes is proposed to allow iterative estimation of nested principal components while maintaining weaker forms of orthogonality. The fourth and final project develops a multivariate volatility model for US natural gas markets. This model flexibly incorporates differing market dynamics across time scales and different spatial locations. A rigorous evaluation shows significantly improved forecasting performance both in- and out-of-sample. All four methodologies are able to flexibly incorporate prior knowledge in a statistically rigorous fashion while maintaining a high degree of computational performance.

Item: Denoising by wavelet thresholding using multivariate minimum distance partial density estimation (2006)
Scott, Alena I.; Scott, David W.
In this thesis, we consider wavelet-based denoising of signals and images contaminated with white Gaussian noise.
Existing wavelet-based denoising methods are limited because they make at least one of the following three unrealistic assumptions: (1) the wavelet coefficients are independent, (2) the signal component of the wavelet coefficient distribution follows a specified parametric model, and (3) the wavelet representations of all signals of interest have the same level of sparsity. We develop an adaptive wavelet thresholding algorithm that addresses each of these issues. We model the wavelet coefficients with a two-component mixture in which the noise component is Gaussian but the signal component need not be specified. We use a new technique in density estimation that minimizes an L2 distance criterion (L2E) to estimate the parameters of the partial density that represents the noise component. The L2E estimate for the weight of the noise component, ŵ_L2E, determines the fraction of wavelet coefficients that the algorithm considers noise; we show that ŵ_L2E corresponds to the level of complexity of the signal. We also incorporate information on inter-scale dependencies by modeling across-scale (parent/child) groups of adjacent coefficients with multivariate densities estimated by L2E. To assess the performance of our method, we compare it to several standard wavelet-based denoising algorithms on a number of benchmark signals and images. We find that our method incorporating inter-scale dependencies gives results that are an improvement over most of the standard methods and are comparable to the rest. The L2E thresholding algorithm performed very well for 1-D signals, especially those with a considerable amount of high frequency content. Our method worked reasonably well for images, with some apparent advantage in denoising smaller images. In addition to providing a standalone denoising method, L2E can be used to estimate the variance of the noise in the signal for use in other thresholding methods. We also find that the L2E estimate for the noise variance is always comparable to, and sometimes better than, the conventional median absolute deviation estimator.

Item: Design and Validation of Ranking Statistical Families for Momentum-Based Portfolio Selection (2013-07-24)
Tooth, Sarah; Thompson, James R.; Dobelman, John A.; Williams, Edward E.
In this thesis we will evaluate the effectiveness of using daily return percentiles and power means as momentum indicators for quantitative portfolio selection. The statistical significance of momentum strategies has been well-established, but in this thesis we will select the portfolio size and holding period based on current (2012) trading costs and capital gains tax laws for an individual in the United States to ensure the viability of using these strategies. We conclude that the harmonic mean of daily returns is a superior momentum indicator for portfolio construction over the 1970-2011 backtest period.

Item: Dynamic Characterization of Multivariate Time Series (2017-12-01)
Melnikov, Oleg; Ensor, Katherine B.
The standard non-negative matrix factorization focuses on batch learning, assuming that fixed global latent parameters completely describe the observations. Many online extensions assume rigid constraints and smooth continuity in observations. However, more complex time series processes can have multivariate distributions that switch between a finite number of states or regimes. In this paper we propose a regime-switching model for non-negative matrix factorization and present a method of forecasting in this lower-dimensional regime-dependent space.
The time-dependent observations are partitioned into regimes to enhance the interpretability of the factors inherent in non-negative matrix factorization. We use weighted non-negative matrix factorization to handle missing values and to avoid needless contamination of the observed structure. Finally, we propose a method of forecasting from the regime components via a threshold autoregressive model and projecting the forecasts back to the original target space. The computation speed is improved by parallelizing weighted non-negative matrix factorization over multiple CPUs. We apply our model to hourly air quality measurements by building regimes from deterministically identified day and night observations. Air pollutants are then partitioned, factorized and forecasted, mostly outperforming standard non-negative matrix factorization with respect to the Frobenius norm of the error. We also discuss the shortcomings of the new model.

Item: Dynamic Multivariate Wavelet Signal Extraction and Forecasting with Applications to Finance (2020-04-16)
Raath, Kim C.; Ensor, Katherine B.
Over the past few years, we have seen an increased need for analyzing the dynamically changing behaviors of economic and financial time series. These needs have led to significant demand for methods that denoise non-stationary time series across time and for specific investment horizons (scales) and localized windows (blocks) of time. This thesis consists of a three-part series of papers. The first paper develops a wavelet framework for the finance and economics community to quantify dynamic, interconnected relationships between non-stationary time series. The second paper introduces a novel continuous wavelet transform, dynamically-optimized, multivariate thresholding method to extract the optimal signal from multivariate time series. Finally, the third paper presents an augmented stochastic volatility wavelet-based forecasting method building on the partial mixture distribution modeling framework introduced in the second paper. Promising results in economics and finance have come from implementing wavelet analysis; however, more advanced wavelet techniques are needed, as well as more robust statistical analysis tools. In support of this expansion effort, we developed a comprehensive and user-friendly R package, CoFESWave, containing our newly developed thresholding and forecasting methods.

Item: Essays in financial risk management (2010)
Ergen, Ibrahim; El-Gamal, Mahmoud A.
In Chapter 1, the usefulness of Extreme Value Theory (EVT) methods, GARCH models, and skewed distributions in market risk measurement is shown by predicting and backtesting the one-day-ahead VaR for emerging stock markets and the S&P 500 index. It has been found that the conventional risk measurement methods, which rely on a normal distribution assumption, grossly underestimate the downside risk. In Chapter 2, the dependence of the extreme losses of the emerging stock market indices is analyzed. It is shown that the dependence in the tails of their loss distributions is much stronger than that implied by a correlation analysis. Economically speaking, the benefits of portfolio diversification are lost when investors need them most. The standard methodology for bivariate extremal dependence analysis is slightly generalized into a multi-asset setting. The concept of hidden extremal dependence for a multi-asset portfolio is introduced to the literature and it is shown that the existence of such hidden dependence reduces the diversification benefits.
In Chapter 3, the mechanisms that drive international financial contagion are discussed. Trade competition and macroeconomic similarity channels are identified as significant drivers of financial contagion as measured by extremal dependence. In Chapter 4, the determinants of short-term volatility for natural gas futures are investigated within a GARCH framework augmented with market fundamentals. New findings include the asymmetric effect of storage levels and the maturity effect across seasons. More importantly, I show that augmenting GARCH models with market fundamentals improves the accuracy of out-of-sample volatility forecasts.

Item: Essays in semiparametric and nonparametric estimation with application to growth accounting (2001)
Jeon, Byung Mok; Brown, Bryan W.
This dissertation develops efficient semiparametric estimation of parameters and expectations in dynamic nonlinear systems and analyzes the role of environmental factors in productivity growth accounting. The first essay considers the estimation of a general class of dynamic nonlinear systems. The semiparametric efficiency bound and efficient score are established for the problems. Using an M-estimator based on the efficient score, the feasible form of the semiparametric efficient estimators is worked out for several explicit assumptions regarding the degree of dependence between the predetermined variables and the disturbances of the model. Using this result, the second essay develops semiparametric estimation of the expectation of known functions of observable variables and unknown parameters in the class of dynamic nonlinear models. The semiparametric efficiency bound for this problem is established and an estimator that achieves the bound is worked out for two explicit assumptions. For the assumption of independence, the residual-based predictors proposed by Brown and Mariano (1989) are shown to be semiparametric efficient. Under an unconditional mean-zero assumption, I propose an improved heteroskedasticity and autocorrelation consistent estimator. The third essay explores the directional distance function method to analyze productivity growth. The method explicitly evaluates the role that undesirable outputs of the economy, such as carbon dioxide and other greenhouse gases, have on the frontier production process, which we specify as a piecewise linear and convex boundary function. We decompose productivity growth into efficiency change (catching up) and technology change (innovation). We test the statistical significance of the estimates using a recently developed bootstrap method. We also explore implications for growth of total factor productivity in the OECD and Asian economies.

Item: Essays in Structural Econometrics of Auctions (2012-09-05)
Bulbul Toklu, Seda; Sickles, Robin C.; Medlock, Kenneth B., III; Cox, Dennis D.
The first chapter of this thesis gives a detailed picture of commonly used structural estimation techniques for several types of auction models. Subsequent chapters consist of essays in which these techniques are utilized for the empirical analysis of auction environments. In the second chapter we discuss the identification and estimation of the distribution of private signals in a common value auction model with an asymmetric information environment. We argue that the private information of the informed bidders is identifiable due to the asymmetric information structure. Then, we propose a two-stage estimation method, which follows the identification strategy.
We show, with Monte-Carlo experiments, that the estimator performs well. The third chapter studies Outer Continental Shelf (OCS) drainage auctions, where oil and gas extraction leases are sold. Informational asymmetry across bidders and the collusive behavior of informed firms make this environment unique. We apply the technique proposed in the second chapter to data from the OCS drainage auctions. We estimate the parameters of a structural model and then run counterfactual simulations to see the effects of the informational asymmetry on the government's auction revenue. We find that the probability that information symmetry brings higher revenue to the government increases with the value of the auctioned tract. In the fourth chapter, we make use of the results in the multi-unit auction literature to study the Balancing Energy Services (BES) auctions (electricity spot market auctions) in Texas. We estimate the marginal costs of bidders implied by the Bayesian-Nash equilibrium of the multi-unit auction model of the market. We then compare the estimates to the actual marginal cost data. We find that, for the BES auction we study, the three largest bidders, Luminant, NRG and Calpine, marked down their bids by more than the optimal amount implied by the model for quantities where they were short of their contractual obligations, while they applied a mark-up larger than the optimal level implied by the model for quantities in excess of their contract obligations. Among the three bidders we studied, Calpine came closest to its optimal bid implied by the Bayesian-Nash equilibrium of the multi-unit auction model of the BES market.

Item: Essays on Crude Oil Markets and Electricity Access (2019-05-13)
Volkmar, Peter; Hartley, Peter R.
In the first chapter I discuss how OPEC's internal costs restrict its ability to collude. Where membership in 2007 was anchored by three large, low-cost producers in Iran, Venezuela and Saudi Arabia, by 2015 Venezuela and Iran were no longer large producers due to lack of investment and sanctions, respectively. This left Saudi Arabia and Iraq as the only large producers. Using a game theory model, I show that together they did not have the power to enforce quotas among themselves and other OPEC members without Russia's participation. More generally, my model implies that at present, OPEC is unable to enforce quotas without the full participation of either Iran or Russia. This situation has been exacerbated, from their perspective, by the improved medium- and long-term price responsiveness of non-OPEC crude oil supply, which erodes OPEC market power. However, the model implies that this change in non-OPEC supply is not necessary for destroying OPEC's ability to cartelize. OPEC's current composition has reduced its ability to enforce production quotas among its membership. Many analysts have suggested that OPEC's role as swing producer will be supplanted by tight oil production from the United States. After exploring this possibility in my second chapter, I find that tight oil production has not increased non-OPEC supply's short-term price responsiveness, while demand has simultaneously grown more brittle. This implies OPEC's role as swing producer is more important for stabilizing price now than at any point in the past decade. Yet my first chapter shows it is currently ill-prepared for the task. The final chapter constructs a measure of the relative success of different countries in providing access to electrical power, which is in turn a critical determinant of energy poverty.
An energy poverty index is only helpful if it allows us to discern how lagging countries might attain outcomes like those of their peers. More specifically, I utilize frontier analysis to develop efficiency ratings for meeting electricity demand that highlight the inputs countries are not using efficiently to that end. Charts in the chapter compare 71 countries' existing infrastructure relative to their electrification rates. The Data Envelopment Analysis employed allows comparison between an inefficient country and a group of its economic peers. Additionally, it shows how far from the frontier an underperforming country is in each type of input, tracks progress over the years, and puts bootstrapped confidence intervals on all point estimates.

Item: Essays on Productivity Analysis (2012)
Hao, Jiaqi; Sickles, Robin C.
In Chapter One, to measure the efficiency changes in the U.S. banking industry after the structural changes that began in the late 1970s, we utilize a set of panel data stochastic frontier models with varying parametric assumptions and functional specifications. Our estimates support the view that efficiency in the banking industry improved over the period from 1984 to the early 1990s. The first chapter raises two research questions. First, the comparison of different estimates shows that the choice of methodology has significant impacts on the levels and dynamics of the estimation results. How should we construct a more general approach that incorporates modeling uncertainty? Second, to fit into a broader picture, how can we extend our tools for estimating industry-level efficiencies to measure efficiency changes of countries and regions? These two questions motivated the research in the second and third chapters. In Chapter Two, we propose the construction of a consensus estimate to extract information from all involved studies. Insights from different fields of economics supporting aggregating estimators are provided. We discuss three methodologies in detail: model averaging, forecast combination, and rule-based methods using Meta-Regression Analysis. Two Monte Carlo experiments are conducted to examine the finite-sample performance of the combined estimators. In Chapter Three, we adapt the models discussed in Chapter One to measure Total Factor Productivity (TFP) changes. Discussions of various theories explaining economic growth and productivity measurement are provided. We decompose the change in TFP into technical efficiency change and innovational change. Estimates are also combined according to the principles in Chapter Two. Two studies utilizing the World Productivity Database from UNIDO are conducted. In the first study, we find that from 1972 to 2000 the Asian region had the highest Total Factor Productivity growth, which was mainly attributable to innovation progress rather than efficiency catch-up. In the second study, we find that between 1970 and 2000 the Asian Four Tigers and the new tiger countries (China, India, Indonesia, Malaysia, and Thailand) had substantial TFP advancements, mainly due to innovation. The other four groups of countries, including developed and developing countries, had downward trends in TFP growth.
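
Several abstracts above (for example, Calizzi and Ensor's bivariate state-space model and Zhao's approximate dynamic factor model) rely on Kalman filtering for estimation and prediction. The following is only a minimal, generic sketch of the standard Kalman filter recursions in Python, with hypothetical transition and observation matrices; it is not any of the authors' actual models.

```python
import numpy as np

def kalman_filter(y, F, H, Q, R, x0, P0):
    """Standard Kalman filter for x_t = F x_{t-1} + w_t, y_t = H x_t + v_t."""
    x, P = x0, P0
    filtered = []
    for yt in y:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (yt - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        filtered.append(x.copy())
    return np.array(filtered)

# Hypothetical bivariate example: two latent pollutant levels observed with noise.
rng = np.random.default_rng(0)
F = np.array([[0.9, 0.1], [0.0, 0.8]])     # assumed state transition
H = np.eye(2)                               # states observed directly
Q, R = 0.1 * np.eye(2), 0.5 * np.eye(2)
states = np.zeros((100, 2))
for t in range(1, 100):
    states[t] = F @ states[t - 1] + rng.multivariate_normal([0, 0], Q)
obs = states + rng.multivariate_normal([0, 0], R, size=100)
est = kalman_filter(obs, F, H, Q, R, x0=np.zeros(2), P0=np.eye(2))
print(est[-1])  # filtered estimate of the latent state at the last time point
```

In practice F, H, Q, and R would themselves be estimated, for example by maximum likelihood using the prediction-error decomposition the filter provides.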
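
Ginley, Ensor, and Scott describe kernel-density-based simulation of daily index returns to isolate the effect of daily leveraged compounding on LETF period returns. As a rough illustration of that general idea only (not the thesis's constrained simulation method), the sketch below resamples hypothetical daily returns from a Gaussian KDE and compares the compounded, daily-rebalanced LETF return with the naive leverage multiple of the index period return.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical historical daily index returns (stand-in for real data).
hist_returns = rng.normal(0.0003, 0.012, size=2500)

kde = gaussian_kde(hist_returns)           # nonparametric density of daily returns
leverage, horizon, n_paths = 2.0, 21, 10000

sim = kde.resample(horizon * n_paths)[0].reshape(n_paths, horizon)
index_period = np.prod(1 + sim, axis=1) - 1              # compounded index return
letf_period = np.prod(1 + leverage * sim, axis=1) - 1    # daily-rebalanced LETF return
naive_target = leverage * index_period                   # naive multiple of period return

gap = letf_period - naive_target
print(f"mean compounding gap: {gap.mean():.4f}, std: {gap.std():.4f}")
```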
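
Hood and Ensor compare a time series approach and an EWMA approach for monitoring autocorrelated process data. The sketch below shows only a textbook EWMA control chart with the usual time-varying control limits on simulated data; the smoothing constant, limit width, and in-control estimates are illustrative assumptions, not the thesis's charting scheme.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """EWMA chart: z_t = lam*x_t + (1-lam)*z_{t-1}, signal when |z_t - mu| > L*sd(z_t)."""
    mu, sigma = x.mean(), x.std(ddof=1)    # in practice, estimated from in-control data
    z = np.empty_like(x, dtype=float)
    prev = mu
    flags = []
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev
        z[t] = prev
        # Exact time-varying standard deviation of the EWMA statistic.
        sd_z = sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        if abs(prev - mu) > L * sd_z:
            flags.append(t)
    return z, flags

rng = np.random.default_rng(3)
data = rng.normal(10, 1, 200)
data[150:] += 1.5                           # simulated mean shift
_, out_of_control = ewma_chart(data)
print("first signal at index:", out_of_control[0] if out_of_control else None)
```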
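
Seitlheko and Hartley's third paper contrasts linear flow with the radial flow underlying the Arps decline equations. Purely to illustrate the two functional forms, the sketch below evaluates a square-root-of-time (linear-flow) rate profile and an Arps hyperbolic decline with made-up parameters; none of the numbers are estimates from the dissertation.

```python
import numpy as np

def arps_hyperbolic(t, qi, Di, b):
    """Arps (1945) hyperbolic decline: q(t) = qi / (1 + b*Di*t)**(1/b)."""
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

def linear_flow(t, qi):
    """Transient linear flow: rate proportional to 1/sqrt(t), scaled to qi at t = 1."""
    return qi / np.sqrt(t)

t = np.arange(1, 121)                       # months on production (10 years)
qi, Di, b = 100.0, 0.10, 1.2                # hypothetical initial rate and decline parameters

q_arps = arps_hyperbolic(t, qi, Di, b)
q_lin = linear_flow(t, qi)

# Crude cumulative production, treating each monthly rate as that month's volume.
print(f"10-year cumulative, Arps: {q_arps.sum():,.0f}; linear flow: {q_lin.sum():,.0f}")
```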
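
Scott and Scott's thesis replaces standard wavelet thresholding rules with an L2E-based mixture estimate; Raath and Ensor likewise build on wavelet thresholding. For orientation only, the sketch below applies the classical universal soft-thresholding baseline with the median-absolute-deviation noise estimate mentioned in the abstract, using the PyWavelets package; it is not the L2E method developed in the thesis.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 2 * t))
noisy = signal + 0.3 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
# Estimate the noise standard deviation from the finest-scale coefficients (MAD).
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thresh = sigma * np.sqrt(2 * np.log(noisy.size))    # universal threshold
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]
]
denoised = pywt.waverec(denoised_coeffs, "db4")[: noisy.size]

print("RMSE noisy:   ", np.sqrt(np.mean((noisy - signal) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((denoised - signal) ** 2)))
```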
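
Tooth concludes that the harmonic mean of daily returns is a useful momentum indicator. Because the harmonic mean is undefined for non-positive values, the sketch below applies it to gross daily returns (1 + r) over a formation window and ranks simulated assets by the result; this is one plausible reading for illustration and may differ from the thesis's exact construction.

```python
import numpy as np
from scipy.stats import hmean

rng = np.random.default_rng(5)
n_days, n_assets, top_k = 126, 50, 10       # ~6-month formation window, pick 10 names
daily_returns = rng.normal(0.0005, 0.02, size=(n_days, n_assets))

# Harmonic mean of gross returns per asset, converted back to a net-return scale.
momentum = hmean(1.0 + daily_returns, axis=0) - 1.0
portfolio = np.argsort(momentum)[-top_k:]   # indices of the top-ranked assets
print("selected asset indices:", sorted(portfolio.tolist()))
```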
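
Ergen's first chapter uses extreme value theory to estimate one-day-ahead VaR. As a generic peaks-over-threshold illustration (without the GARCH filtering and skewed distributions used in the thesis), the sketch below fits a generalized Pareto distribution to simulated losses above a high threshold and backs out a 99% VaR; the data, threshold, and confidence level are arbitrary choices.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(6)
# Hypothetical heavy-tailed daily returns; losses are the negated returns.
returns = 0.01 * rng.standard_t(4, size=5000)
losses = -returns

u = np.quantile(losses, 0.95)               # threshold: 95th percentile of losses
exceedances = losses[losses > u] - u
xi, _, beta = genpareto.fit(exceedances, floc=0)

p, n, n_u = 0.99, losses.size, exceedances.size
# Peaks-over-threshold VaR: VaR_p = u + (beta/xi) * (((n/n_u)*(1-p))**(-xi) - 1)
var_99 = u + (beta / xi) * (((n / n_u) * (1 - p)) ** (-xi) - 1)
print(f"empirical 99% VaR: {np.quantile(losses, 0.99):.4f}, EVT 99% VaR: {var_99:.4f}")
```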
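
Both Seitlheko and Hartley's first paper and Volkmar and Hartley's final chapter use data envelopment analysis (DEA) to compute efficiency scores. The sketch below solves the standard input-oriented CCR linear program for each decision-making unit with scipy's linprog on made-up data; it is a generic DEA illustration, not the models estimated in either dissertation.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o. X: (m, n) inputs, Y: (s, n) outputs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    A_in = np.hstack([-X[:, [o]], X])           # sum_j lam_j x_ij <= theta * x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # sum_j lam_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n   # theta free, lambdas nonnegative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Hypothetical data: 3 inputs, 2 outputs, 8 decision-making units (e.g., countries).
rng = np.random.default_rng(7)
X = rng.uniform(1, 10, size=(3, 8))
Y = rng.uniform(1, 10, size=(2, 8))
scores = [dea_input_efficiency(X, Y, o) for o in range(8)]
print(np.round(scores, 3))
```

A score of 1 indicates a unit on the efficient frontier; scores below 1 indicate the proportional input reduction that its peers suggest is achievable.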