Browsing by Author "Sickles, Robin C."
Item: Airline Travel Demand, the Derived Demand for Aircraft Fuel, and Fuel Utilization Forecasts Using Structural and Atheoretical Approaches (2012)
Fang, Ying; Sickles, Robin C.
In the first chapter, we develop a dynamic model of collusion in city-pair routes for selected US airlines and specify the first-order conditions using a state-space representation that is estimated with Kalman-filtering techniques, using the Databank 1A (DB1A) Department of Transportation (DOT) data for the period 1979:I-1988:IV. We consider two airlines, American (AA) and United (UA), and four city pairs. Our measure of market power is based on the shadow value of long-run profits in a two-person strategic dynamic game, and we find evidence of relative market power of UA in three of the four city pairs we analyze. The second chapter explores three models for forecasting airline energy demand, a trend line, an ARIMA model, and a structural model based on the results of Chapter 1, and finds that none of them dominates for AA and UA between Chicago and Salt Lake City. In the third chapter, we use model averaging and forecast combination techniques to reach a more decisive conclusion, focusing on equal-weighted averaging, mean-square-error-weighted averaging, and optimized weighted averaging for UA and AA on the Chicago-Seattle and Chicago-San Diego city pairs.
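The three combination schemes named in the third chapter are easy to illustrate. The sketch below is illustrative only: the model forecasts and holdout sample are simulated placeholders rather than the DB1A series, and only the three weighting rules correspond to the schemes described above.

```python
# Illustrative sketch: three forecast-combination rules (equal weights,
# inverse-MSE weights, optimized weights). The "model forecasts" F and the
# holdout target y are simulated placeholders, not the DB1A data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(size=40)                                  # holdout realizations
F = y[:, None] + rng.normal(scale=[0.5, 1.0, 2.0], size=(40, 3))  # 3 forecasters

# 1. Equal-weighted averaging
eq = F.mean(axis=1)

# 2. Mean-square-error-weighted averaging: weight proportional to 1/MSE
mse = ((F - y[:, None]) ** 2).mean(axis=0)
w = (1 / mse) / (1 / mse).sum()
inv_mse = F @ w

# 3. Optimized weights on the unit simplex, minimizing holdout squared error
res = minimize(lambda w: ((F @ w - y) ** 2).mean(), np.ones(3) / 3,
               bounds=[(0, 1)] * 3,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
opt = F @ res.x

for name, f in [("equal", eq), ("inverse-MSE", inv_mse), ("optimized", opt)]:
    print(name, ((f - y) ** 2).mean())
```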
Item: Bayesian Treatments for Panel Data Stochastic Frontier Models with Time Varying Heterogeneity (MDPI, 2017)
Liu, Junrong; Sickles, Robin C.; Tsionas, E.G.
This paper considers a linear panel data model with time-varying heterogeneity. Bayesian inference techniques organized around Markov chain Monte Carlo (MCMC) are applied to implement new estimators that combine smoothness priors on unobserved heterogeneity and priors on the factor structure of unobserved effects. The latter have been addressed in a non-Bayesian framework by Bai (2009) and Kneip et al. (2012), among others. Monte Carlo experiments are used to examine the finite-sample performance of our estimators. An empirical study of efficiency trends in the largest banks operating in the U.S. from 1990 to 2009 illustrates our new estimators. The study concludes that scale economies in intermediation services have been largely exploited by these large U.S. banks.

Item: Convergence, Regulatory Distortions, Deregulatory Dynamics and Growth Experiences of the Latin American and Brazilian Economies
Sickles, Robin C.; Hultberg, Patrik T.; Ruiz, Fernando Orozco; Mukerjie, Joya; James A. Baker III Institute for Public Policy

Item: Convergent Economies: Implications for World Energy Use
Sickles, Robin C.; Hultberg, Patrik T.; James A. Baker III Institute for Public Policy

Item: Dynamic Treatments of Heterogeneity (2014-04-23)
Dinh, Trang; Sickles, Robin C.; Diamond, John W.; Wilson, Rick K.
In this thesis we are interested in how unobserved heterogeneity of agents affects the predictions from several classical dynamic models that are widely used in economics. First, we capture heterogeneity in users' preferences in order to obtain a better prediction of their movie ratings, as our solution for the Netflix Prize competition. Our method combines user-based and item-based (movie-based) methods in a spatial regression framework. Next, we introduce heterogeneous income profiles in a model of housing choices where households have the options of renting, buying a house, and/or keeping their old house (if they already have one). While most lifecycle models of consumption and saving assume that individuals are ex ante identical and face the same income process, we allow for the more realistic setting where each individual faces a different income process. We next investigate lifetime saving and investing behaviors of US households using the Panel Study of Income Dynamics (PSID) to detect changes in those behaviors due to retirement. Addressing heterogeneity in households' saving and investing decisions is essential in order to separate the aging effect from the household and cohort effects.
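A rough sketch of the first chapter's idea of blending user-based and item-based predictions follows. The tiny ratings matrix and the fixed blend weight alpha are invented for illustration; the thesis embeds the combination in a spatial regression framework rather than this simple convex average.

```python
# Rough sketch: blend user-based and item-based rating predictions.
# The ratings matrix R (rows = users, cols = movies, 0 = unrated) and the
# blend weight alpha are invented; the thesis uses a spatial regression
# framework rather than this simple convex combination.
import numpy as np

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4.0]])
rated = R > 0

def cosine_sim(M):
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    U = M / norms
    return U @ U.T

def predict(u, i, alpha=0.5):
    """Similarity-weighted average over users who rated i and items u rated."""
    su = cosine_sim(R)[u]            # user u's similarity to every user
    si = cosine_sim(R.T)[i]          # movie i's similarity to every movie
    users, items = rated[:, i], rated[u, :]
    user_based = su[users] @ R[users, i] / max(su[users].sum(), 1e-9)
    item_based = si[items] @ R[u, items] / max(si[items].sum(), 1e-9)
    return alpha * user_based + (1 - alpha) * item_based

print(predict(0, 2))                 # predicted rating of movie 2 by user 0
```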
Item: The effects of efficiency and TFP growth on pollution in Europe: a multistage spatial analysis (Springer, 2014)
Adetutu, Morakinyo; Glass, Anthony J.; Kenjegalieva, Karligash; Sickles, Robin C.
It is common in efficiency studies which analyse the environment for pollution to form part of the production technology. Pollution therefore affects efficiency and the TFP growth decomposition. As an alternative approach, we draw on theoretical studies from the environmental economics literature, which demonstrate that TFP affects environmental quality. Along these lines we adopt a two-stage empirical methodology. First, we obtain two estimates of productive performance (efficiency and TFP growth) using a stochastic production frontier framework in Stage 1 for European countries (1995-2008), from which we omit emissions. Second, in Stage 2 these measures of productive performance are used as regressors in spatial models of per capita nitrogen and sulphur emissions for European countries. From our preferred Stage 2 spatial models we find that a country's TFP growth must fall to reduce its per capita nitrogen and sulphur emissions. This is likely because nitrogen and sulphur emissions in the EU have been tightly regulated for a long period via air quality standards; consequently, substantial reductions in emissions from cleaner and more productive technology were achieved some time ago.

Item: Efficiency and Productivity Analysis of Multidivisional Firms (2016-04-14)
Gong, Binlei; Sickles, Robin C.
Multidivisional firms have footprints in multiple segments and hence use multiple technologies to convert inputs into outputs, which makes it difficult to estimate the resource allocations, aggregated production functions, and technical efficiencies of this type of company. This dissertation aims to explore and reveal such unobserved information through several parametric and semiparametric stochastic frontier analyses and other structural models. In the empirical study, this dissertation analyzes the productivity and efficiency of firms in the global oilfield market.

Item: Essays in Efficiency Analysis (2013-09-16)
Demchuk, Pavlo; Sickles, Robin C.; Hartley, Peter R.; Scott, David W.
Today a standard procedure for analyzing the impact of environmental factors on the productive efficiency of a decision-making unit is a two-stage approach: first one estimates efficiency, and then one uses regression techniques to explain the variation of efficiency across units. It is argued that this method may produce misleading results that distort what the data represent. In order to introduce economic intuition and mitigate the problem of omitted variables, we introduce a matching procedure to be used before the efficiency analysis. We believe that by having comparable decision-making units we implicitly control for environmental factors while at the same time cleaning the sample of outliers. The main goal of the first part of the thesis is to compare a procedure that includes matching prior to efficiency analysis with the straightforward two-stage procedure without matching, as well as with the alternative of a conditional efficiency frontier. We conduct a Monte Carlo study with different model specifications, and despite the reduced sample, which may create some complications in the computational stage, we find the newly obtained results economically meaningful. We also compare the results obtained by the new method with those previously produced by Demchuk and Zelenyuk (2009), who compare efficiencies of Ukrainian regions, and find some differences between the two approaches. The second part deals with an empirical study of electricity generating power plants before and after the market reform in Texas. We compare private, public, and municipal power generators using the method introduced in part one. We find that municipal power plants operate mostly inefficiently, while private and public generators are very close in their production patterns. The new method allows us to compare decision-making units from different groups, which may have different objective schemes and productive incentives. Although at a certain point after the reform private generators opted not to provide their data to the regulator, we were able to construct three different data samples comprising two and three groups of generators and analyze their production and efficiency patterns. In the third chapter we propose a semiparametric approach with shape constraints, consistent with monotonicity and concavity. Penalized splines are used to maintain the shape constraints via nonlinear transformations of spline basis expansions. The large-sample properties, an effective algorithm, and a method of smoothing parameter selection are presented in the paper. Monte Carlo simulations and empirical examples demonstrate the finite-sample performance and usefulness of the proposed method.
Item: Essays in Structural Econometrics of Auctions (2012-09-05)
Bulbul Toklu, Seda; Sickles, Robin C.; Medlock, Kenneth B., III; Cox, Dennis D.
The first chapter of this thesis gives a detailed picture of commonly used structural estimation techniques for several types of auction models. The following chapters consist of essays in which these techniques are utilized for empirical analysis of auction environments. In the second chapter we discuss the identification and estimation of the distribution of private signals in a common value auction model with an asymmetric information environment. We argue that the private information of the informed bidders is identifiable due to the asymmetric information structure. We then propose a two-stage estimation method which follows the identification strategy, and we show with Monte Carlo experiments that the estimator performs well. The third chapter studies Outer Continental Shelf drainage auctions, in which oil and gas extraction leases are sold. Informational asymmetry across bidders and collusive behavior of informed firms make this environment unique. We apply the technique proposed in the second chapter to data from the OCS drainage auctions. We estimate the parameters of a structural model and then run counterfactual simulations to see the effects of the informational asymmetry on the government's auction revenue. We find that the probability that informational symmetry brings higher revenue to the government increases with the value of the auctioned tract. In the fourth chapter, we make use of results from the multi-unit auction literature to study the Balancing Energy Services (BES) auctions (electricity spot market auctions) in Texas. We estimate the marginal costs of bidders implied by the Bayesian-Nash equilibrium of the multi-unit auction model of the market, and then compare the estimates to actual marginal cost data. We find that, for the BES auction we study, the three largest bidders, Luminant, NRG and Calpine, marked down their bids more than the optimal amount implied by the model for quantities where they were short of their contractual obligations, while they put a markup larger than the optimal level implied by the model on quantities in excess of their contractual obligations. Among the three bidders we studied, Calpine came closest to the optimal bidding implied by the Bayesian-Nash equilibrium of the multi-unit auction model of the BES market.
Item: Essays Investigating Extreme Events in Financial Markets (2015-04-21)
Gualtieri, James N.; Sickles, Robin C.; Sizova, Natalia M.; Weston, James P.
This thesis, through three empirical applications, provides an analysis of extreme events in financial markets. Robust growth in financial markets has greatly increased the ability of economic agents to share risk according to their preferences. Despite this, many markets have at times demonstrated extreme instability. These events have the potential to shake the confidence of investors, and this fear can lead to inefficient outcomes with respect to risk sharing and resource allocation. By investigating the dynamics of securities during extreme events, one can gain intuition about their root causes and a better understanding of the inherent risk. The first chapter analyzes how international equity markets interact during extreme events. Using a novel set of high-frequency data on exchange traded funds (ETFs) designed to track international equity markets, I examine the dynamics of intra-day returns across 11 countries. Using non-parametric tests designed to identify jumps in the price process, I examine the dynamics across markets during jumps as well as during continuous movements. Contrary to other literature that uses coarser data, I find a high degree of commonality in the jump components. Specifically, there are many instances when different markets co-jump, and returns are significantly more correlated on jump days. I also find substantial evidence of self- and cross-excitation across markets, and that international markets respond to US macroeconomic news announcements. These findings suggest that international financial markets are heavily intertwined and that shocks propagate across markets. This information is valuable from a modeling perspective, as it provides evidence of channels through which economies are linked that must be accounted for. Further, it provides investors with valuable information on the benefits and risks associated with international diversification, allowing them to take a proactive, rather than reactionary, approach to risk management. The second chapter, based on Gualtieri and Sizova (2015), investigates the joint dynamics of portfolios considered to represent priced risk in asset markets. Specifically, it considers the joint modeling of the market return and two zero-net-cost portfolios that are used as proxies for systematic risk factors: Value and Momentum. As in the first chapter, we allow for a separation between continuous and jump dynamics. We find a number of interesting relationships between factor dynamics that have implications for risk-based explanations of factor risk premia as well as for factor investing. Specifically, we find that although volatilities are highly correlated, the component of Momentum volatility orthogonal to Market volatility contains information about the Market's dynamics. With respect to extreme events, we find that volatility co-jumps are present in both return pairs (Market-Momentum and Market-Value). We find that Value does not jump independently of the Market, whereas Momentum does. We also find that a number of the Momentum jumps occur in bear markets, which is consistent with documented Momentum crashes (see, for example, Daniel and Moskowitz (2013)). We also use the model output to investigate the merits of factor investing. We estimate a variety of metrics on jump days to analyze the benefits of diversifying away from the market and into additional stylized portfolios. We find that a combined position in the Market and Value significantly improves performance during extreme events in terms of average loss, volatility, and value-at-risk. Aside from the empirical analysis, we also provide a generalization of the univariate stochastic volatility conditional jump (SVCJ) model of Eraker et al. (2003) to the multivariate case, with a detailed appendix documenting the sampling scheme that can be used to investigate joint dynamics in extreme events. The third chapter, based on Bada et al. (2015), examines whether algorithmic trading (AT) has a time-varying effect on measures of liquidity such as bid-ask spreads and volatility. Specifically, in the context of a panel model with individual and time fixed effects, we allow for structural breaks in the slope parameters at an unknown number of dates and detect the break points automatically. The model is free from any ad hoc identification of break points or restrictions on the number of breaks imposed by the econometrician a priori. The study is the first to use this estimator (in any context), and the results show clear evidence of breaks in the relationship between AT and liquidity during the financial crisis. These results contrast with prior literature that demonstrates a clear positive relationship between AT and market liquidity. The timing of the breaks is important, as the merits of added liquidity during relatively stable periods versus its withdrawal during periods of high demand are somewhat ambiguous and may present a net welfare loss to society. The results indicate the presence of a state-contingent relationship between AT and liquidity.
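The jump tests referenced in the first chapter can be illustrated with one common variant: comparing realized variance against jump-robust bipower variation, in the ratio form of Huang and Tauchen (2005) building on Barndorff-Nielsen and Shephard (2004). The simulated one-minute returns below stand in for the ETF data, and the exact statistics used in the thesis may differ.

```python
# Sketch of a nonparametric jump test: realized variance (RV) vs. bipower
# variation (BV), which is robust to jumps. The z-statistic follows the
# Huang-Tauchen (2005) ratio form; the returns are simulated placeholders.
import numpy as np
from scipy.special import gamma
from scipy.stats import norm

def jump_test(r):
    """z-statistic for the null of no jumps in one day of intraday returns."""
    n = len(r)
    rv = np.sum(r ** 2)
    bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
    mu43 = 2 ** (2 / 3) * gamma(7 / 6) / gamma(1 / 2)
    tq = n * mu43 ** -3 * (n / (n - 2)) * np.sum(
        (np.abs(r[2:]) * np.abs(r[1:-1]) * np.abs(r[:-2])) ** (4 / 3))
    rj = (rv - bv) / rv                   # relative jump measure
    return rj / np.sqrt(((np.pi / 2) ** 2 + np.pi - 5) / n
                        * max(1, tq / bv ** 2))

rng = np.random.default_rng(1)
r = rng.normal(scale=0.001, size=390)     # one-minute returns, no jump
r[200] += 0.01                            # inject a single jump
z = jump_test(r)
print(z, z > norm.ppf(0.99))              # reject "no jumps" at the 1% level?
```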
Item: Essays on Causal Inference and Treatment Effects in Productivity and Finance: Double Robust Machine Learning with Deep Neural Networks and Random Forests (2021-04-28)
Varaku, Kerda; Sickles, Robin C.; Tang, Xun
In this dissertation, I use novel methodologies that incorporate machine learning into causal policy evaluation, such as double robust machine learning, to study key issues in productivity and finance. In the first chapter, I evaluate the impacts of European public subsidies on innovation. I use double machine learning with deep neural networks to explore the effects of public subsidies on firms' R&D input and output. I find that public subsidies increase both R&D intensity and R&D output, and these results remain economically and statistically significant even after accounting for treatment endogeneity. In the second chapter, I evaluate the effects of public subsidies and collaboration agreements on innovation output. Many public schemes related to R&D have pushed toward collaborative agreements between firms and organizations, and this chapter studies whether subsidies that do not promote collaboration perform as well in stimulating R&D output. Results show that subsidized noncollaborative firms would have gained in terms of R&D output had they collaborated. I also find that collaboration alone seems to generate significantly higher (double) R&D output compared to subsidies alone. In the third chapter, I analyze the impacts of offering non-core and non-financial ("plus") services in addition to core financial services on Microfinance Institutions' (MFIs) performance, using a double machine learning model with random forests. The results indicate no differences in the performance of MFIs offering core financial and microfinance-plus services; however, MFIs that offer non-core financial services together with non-financial services serve less poor clients, suggesting a rather surprising "mission drift". In the fourth chapter, I analyze the impacts of regulation on MFIs' performance. I provide evidence of the impact of regulation on the double bottom line of the microfinance industry using double machine learning with neural networks. Results show that regulation does not affect financial results but does affect the outreach of savings-and-loan MFIs: regulation increases the depth of outreach of this group, indicating fewer poor clients and suggesting a mission drift. In the fifth chapter, I investigate the link between the term structure of sovereign credit default swaps and the market efficiency of carry trades. I use the Kneip et al. (2012) factor model to deal with large dimensions and unknown forms of unobservable heterogeneous effects, and I document a divergent pattern of carry trade risk for developed and developing countries. In the sixth chapter, I use recurrent neural networks and feed-forward deep networks to predict NYSE, NASDAQ and AMEX stock prices from historical data. I experiment with different architectures and compare data normalization techniques. I then leverage those findings to question the efficient-market hypothesis through a formal statistical test, and I find evidence of an inefficient stock market. Each of these studies requires the implementation of new methods of estimation and inference that have not previously been utilized to examine these important economic policy issues. My research points to many advantages of the approaches introduced in this dissertation. Robustness of inference is a crucial dimension of acceptable policy recommendations, and my development of semi/nonparametric estimators and their application to evaluations of public policy and regulatory oversight provides evidence that they are well motivated theoretically, that they can be feasibly implemented in empirical applications, and that they are in many cases a dominant strategy with regard to model specification and estimation.
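The double/debiased machine learning estimator that runs through these chapters can be sketched in its cross-fitted, partialling-out form (Chernozhukov et al. 2018), here with random forests. The simulated data, sample size, and forest settings are placeholders for the firm and MFI datasets analyzed above.

```python
# Sketch of double/debiased machine learning with random forests and
# cross-fitting: partial X out of both treatment d and outcome y, then run
# a residual-on-residual regression. The data are simulated placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 10))                 # confounders
d = X[:, 0] + rng.normal(size=n)             # treatment depends on X
y = 0.5 * d + X[:, 1] + rng.normal(size=n)   # true effect = 0.5

# Cross-fitted nuisance predictions of E[y|X] and E[d|X]
m_y = cross_val_predict(RandomForestRegressor(n_estimators=200), X, y, cv=5)
m_d = cross_val_predict(RandomForestRegressor(n_estimators=200), X, d, cv=5)

# Residual-on-residual regression gives the debiased effect estimate
v, u = d - m_d, y - m_y
theta = (v @ u) / (v @ v)
se = np.sqrt(np.mean((u - theta * v) ** 2 * v ** 2)) / (v @ v) * np.sqrt(n)
print(theta, se)                             # estimate should be near 0.5
```

Cross-fitting matters here: each nuisance prediction is made out of fold, so the forests' in-sample overfitting does not contaminate the final residual regression.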
Item: Essays on Labor Supply Dynamics, Home Production, and Case-based Preferences (2013-07-24)
Naaman, Michael; Sickles, Robin C.; Boylan, Richard T.; Weston, James
In this thesis we examine models that incorporate case-based decision theory (CBDT). In the first chapter, we examine CBDT in detail, including a reinterpretation of the standard labor supply problem under a wage tax in a partial equilibrium model where preferences exhibit characteristics of CBDT. In the second chapter, we extend the labor supply decision under a wage tax by incorporating a household production function. Utility maximization by repeated substitution is applied as a novel approach to solving dynamic optimization problems, which allows us to find labor supply elasticities that evolve over the life cycle. In the third chapter, CBDT is explored in more depth, focusing on its applicability to representing people's preferences over movie rentals in the Netflix competition. This chapter builds on the theoretical model introduced in Chapter 1, among other things expressing the rating of any customer-movie pair using the ratings of similar movies that the customer rated and the ratings of the movie in question by similar customers. We also explore in detail the econometric model used in the Netflix competition, which utilizes machine learning and spatial regression to estimate customers' preferences.
Item: Essays on Productivity Analysis (2012)
Hao, Jiaqi; Sickles, Robin C.
In Chapter One, to measure efficiency changes in the U.S. banking industry after the structural changes that began in the late 1970s, we utilize a set of panel data stochastic frontier models with varying parametric assumptions and functional specifications. Our estimates support the view that efficiency in the banking industry improved in the period from 1984 to the early 1990s. The first chapter raises two research questions. First, the comparison of different estimates shows that the choice of methodology has significant impacts on the levels and dynamics of the estimation results; how should we construct a more general approach that incorporates modeling uncertainty? Second, to fit into a broader picture, how can we extend our tools for estimating industry-level efficiencies to measure efficiency changes of countries and regions? These two questions motivated the research in the second and third chapters. In Chapter Two, we propose the construction of a consensus estimate that extracts information from all the studies involved. Insights from different fields of economics supporting aggregated estimators are provided. We discuss three methodologies in detail: model averaging, forecast combination, and rule-based methods using meta-regression analysis. Two Monte Carlo experiments are conducted to examine the finite-sample performance of the combined estimators. In Chapter Three, we adapt the models discussed in Chapter One to measure Total Factor Productivity (TFP) changes. Discussions of various theories explaining economic growth and productivity measurement are provided. We decompose the change in TFP into technical efficiency change and innovational change. Estimates are also combined according to the principles in Chapter Two. Two studies utilizing the World Productivity Database from UNIDO are conducted. In the first study, we find that from 1972 to 2000 the Asian region had the highest TFP growth, attributable mainly to innovation progress rather than efficiency catch-up. In the second study, we find that between 1970 and 2000 the four Asian Tiger economies and the new tiger countries (China, India, Indonesia, Malaysia, and Thailand) had substantial TFP advancements, mainly due to innovation. The other four groups of countries, including developed and developing countries, had downward trends in TFP growth.
Item: Essays on Productivity and Panel Data Econometrics (2014-03-24)
Liu, Junrong; Sickles, Robin C.; Sizova, Natalia M.; Scott, David W.
This dissertation contains four essays on productivity and panel data econometrics, with the first two essays empirical and the last two more focused on theoretical development. The first chapter is a study of productivity and efficiency in the Mexican energy industry. The second chapter analyzes the productivity and efficiency of the largest U.S. banks. The third applies a Bayesian treatment to two different panel data models. The last chapter introduces a semi-nonparametric method for panel data models. These four chapters have been developed into four working papers: Liu et al. (2011), Inanoglu et al. (2012), Liu et al. (2013) and Liu et al. (2014). The first chapter studies the optimizing behavior of Pemex by estimating a cost model of Pemex's production of energy. The estimation exploits the duality between the cost and production functions, which facilitates the specification and makes it convenient to find the cost shares under different levels of returns to scale. The results indicate the presence of substantial distortions in cost shares, which would be brought back to equilibrium were the Mexican government willing to allow more foreign investment in its energy extraction industry and thus increase capital use and decrease labor use. The second chapter utilizes a suite of panel data models in order to examine the extent to which scale economies and efficiencies exist in the largest U.S. banks. The empirical results are assessed based on the consensus among the findings from the various econometric treatments and models. This empirical study is based on a newly developed dataset built from FDIC Call Reports for the period 1994-2013. The analyses point to a number of conclusions. First, despite rapid growth over the last 20 years, the largest surviving banks in the U.S. have decreased their level of efficiency as they took on increasing levels of risk (credit, market and liquidity). Second, no measurable scale economies and scope economies are found across our host of models and econometric treatments. In addition to the broad policy implications, this essay also provides an array of econometric techniques whose findings can be combined into a set of robust consensus-based conclusions, a valuable analytical tool for supervisors and others involved in the regulatory oversight of financial institutions. The third chapter considers two models for uncovering information about technical change in large heterogeneous panels: a panel data model with nonparametric time effects, and a panel data model with common factors whose number is unknown and whose effects are firm-specific. The chapter proposes a Bayesian approach to estimating the two models, with inference techniques organized around MCMC. Monte Carlo experiments examining the finite-sample performance of this approach show that the proposed method is comparable to the recently proposed estimator of Kneip et al. (2012) (KSS) and dominates a variety of estimators that rely on parametric assumptions. To illustrate the new method, the Bayesian approach is applied to the analysis of efficiency trends in the largest U.S. banks, using a dataset based on FDIC Call Report data over the period from 1990 to 2009. The fourth chapter introduces a new estimation method within the framework of the stochastic frontier production model. The noise term is assumed to have the traditional normal density, but the inefficiency term is spanned by Laguerre polynomials. This semi-nonparametric method follows the spirit of Gallant and Nychka (1987). The finite-sample performance of this estimator is shown to dominate that of nonparametric estimators via Monte Carlo simulations.
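The fourth chapter's semi-nonparametric construction can be sketched as an inefficiency density proportional to a squared Laguerre-polynomial expansion times the exponential weight; because standard Laguerre polynomials are orthonormal under that weight, the normalizing constant is simply the sum of squared coefficients. The coefficients below are illustrative, not estimates from the chapter.

```python
# Sketch of a semi-nonparametric inefficiency density: a squared Laguerre
# expansion times exp(-u) on u >= 0. Standard Laguerre polynomials are
# orthonormal under the weight exp(-u), so the normalizing constant is
# sum(a_k^2). The coefficients a are illustrative, not estimates.
import numpy as np
from numpy.polynomial.laguerre import lagval

a = np.array([1.0, 0.4, -0.2])        # illustrative series coefficients

def snp_density(u, a):
    """f(u) = (sum_k a_k L_k(u))^2 exp(-u) / sum_k a_k^2."""
    return lagval(u, a) ** 2 * np.exp(-u) / np.sum(a ** 2)

u = np.linspace(0.0, 20.0, 4001)
f = snp_density(u, a)
print(f.sum() * (u[1] - u[0]))        # Riemann sum: integrates to ~1

# In estimation, a (and the frontier coefficients) would be chosen by maximum
# likelihood after convolving this density with the normal noise term.
```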
Item: Essays on the Use of Duality, Robust Empirical Methods, Panel Treatments, and Model Averaging with Applications to Housing Price Index Construction and World Productivity Growth (2015-04-16)
Shang, Chenjun; Sickles, Robin C.; El-Gamal, Mahmoud; Ensor, Katherine B.
This dissertation focuses on analyzing the production side of the economy and aims to provide robust estimates of the parameters of interest. In a production process, the output level is mainly determined by two parts: inputs and productivity. Compared with the inputs, which are concrete and measurable, productivity is an unobservable factor that relies on economic models for estimation; an appropriate and robust modeling method is essential if we want to capture the productivity term accurately. Chapter 1 reviews the research on productivity with a focus on stochastic frontier analysis, a classic framework in the productivity literature. The chapter starts with the definition and decomposition of productivity. Measured as a ratio of outputs to inputs, productivity can be divided into two main parts: innovation and technical efficiency. The growth of technologies and innovations depends heavily on education and research, while the technical efficiencies of firms vary with their administration, management skills, allocation of inputs, and so on. In studies analyzing these two components, stochastic frontier models have gradually become the standard method. The chapter briefly introduces the development of stochastic frontier models, with an emphasis on the panel data setting. Twelve specifications, as well as their implementation methods, are then discussed in detail. These representative models make different assumptions about the efficiency term, aiming to provide better approximations of the underlying data-generating process without adding too many constraints. Comparing all these models, we expect different estimates of productivity from different specifications, so the evaluation and selection of a suitable model for empirical analysis become a problem: standard information criteria provide measures of the performance of each candidate model, but multiple criteria can lead to contradictory conclusions about which model is best. In addition, the model selection approach itself ignores the risk of model uncertainty. This issue of dealing with multiple competing models is addressed in Chapter 3. While Chapter 1 concentrates on methods of estimating productivity, Chapter 2 focuses on the role of proper specification of the inputs used in generating the output. Though the inputs of a production process are usually observable, their effects on the outputs are often not clear and straightforward, and the allocation of different inputs is affected by both the production technology and market prices. Chapter 2 utilizes the duality between the production maximization problem and the cost minimization problem to uncover the shadow prices of inputs, and constructs corresponding price indexes for further analysis. The chapter is motivated by recent housing bubbles and considers the housing market for the empirical application. The housing market is an important component of the economy and constantly attracts the interest of researchers; Diewert (2010), for example, has provided a comparison of various methods of constructing property price indexes using index number and hedonic regression methods, which he illustrates using data over a number of quarters from a small Dutch town. Chapter 2 provides an alternative approach based on Shephard's dual lemma, which I apply to the same data used by Diewert. This method avoids the multicollinearity problem associated with traditional hedonic regression, and the resulting prices of property characteristics show smoother trends than Diewert's results. The chapter also revisits the Diewert and Shimizu (2013) study, which employed hedonic regressions to decompose the price of residential property in Tokyo into land and structure components and constructed constant-quality indexes for land and structure prices respectively. I use three models from Diewert and Shimizu (2013) to fit our real estate data from town 'A' in the Netherlands, and also construct the price indices for land and structure, which are compared with the results derived using duality theory. Again, we have multiple models in the study of the housing market. As in the case of productivity, the shadow prices of property characteristics are unobservable (due to the nature of the input or intermediate good, there may not exist an explicit market), so we rely on particular methods for estimation, and there is a set of candidate models. Chapters 1 and 2 thus leave us in a dilemma. Which model is correct? Which model do we choose? Is any model actually the correct one, or are we choosing among misspecified models? Do we simply choose one model and ignore results from the others? These issues are addressed in Chapter 3, where a model averaging approach is explored to provide estimates that are robust to various model specifications. Model averaging methods can provide robust estimates by combining a set of competing models through certain optimization mechanisms. Chapter 3 pursues robust estimates of world productivity levels as well as their growth rates. Various structural and reduced-form models of productivity growth have been proposed in the literature; in either class of models, reduced-form measurements of productivity and efficiency are obtained. As the true data-generating process of productivity cannot be observed, the chapter examines model averaging approaches that provide a vehicle for weighting predictions (in the form of productivity and efficiency measurements) from different reduced-form methods, typically the stochastic frontier specifications discussed in Chapter 1. The chapter considers the jackknife model averaging estimator proposed by Hansen and Racine (2012) and illustrates how to apply the technique to a set of competing stochastic frontier estimators. The derived method is employed to analyze productivity and efficiency developments in three country groups worldwide. The results of the empirical application show that the model averaging method provides more stable estimates, while the model selection method tends to select a model with superficially high goodness of fit resulting from the match between a specific model setting and the dataset. A brief discussion of alternative structural models from which a reduced-form forecast can be derived is provided to illustrate a different perspective on productivity analysis.
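The jackknife model averaging estimator of Hansen and Racine (2012) used in Chapter 3 can be sketched for the simple case of nested OLS candidate models, where leave-one-out residuals come cheaply from the hat matrix. The simulated regressors and the nested-model design below are assumptions for illustration; the dissertation applies the weights to competing stochastic frontier estimators instead.

```python
# Sketch of the jackknife (leave-one-out) model averaging criterion of
# Hansen and Racine (2012) for nested OLS models. For OLS the leave-one-out
# residual is e_i / (1 - h_ii), so no refitting is needed. Simulated data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, K = 200, 5
X = rng.normal(size=(n, K))
y = X[:, :3] @ np.array([1.0, 0.5, 0.25]) + rng.normal(size=n)

def loo_residuals(Xm, y):
    H = Xm @ np.linalg.solve(Xm.T @ Xm, Xm.T)   # hat matrix
    e = y - H @ y
    return e / (1 - np.diag(H))

# Candidate models: the first k regressors, k = 1..K
E = np.column_stack([loo_residuals(X[:, :k], y) for k in range(1, K + 1)])

def cv(w):                                       # jackknife CV criterion
    return np.mean((E @ w) ** 2)

res = minimize(cv, np.ones(K) / K,
               bounds=[(0, 1)] * K,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print(np.round(res.x, 3))                        # averaging weights on models
```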
Item: Essays on Treatment Effects Evaluation (2012-09-05)
Guo, Ronghua; Sickles, Robin C.; Sizova, Natalia M.; Scott, David W.
The first chapter uses the propensity score matching method to measure the average impact of insurance on health service utilization in terms of office-based physician visits, total reported visits to hospital outpatient departments, and emergency room visits. Four matching algorithms are employed to match propensity scores. The results show that insurance significantly increases office-based physician visits, while its impacts on reported visits to hospital outpatient departments and emergency room visits are positive but not significant. This implies that physician offices would receive a substantial increase in demand if universal insurance were imposed, so government would need to allocate more resources to physician offices, relative to outpatient or emergency room services, to accommodate the increased demand. The second chapter studies the sensitivity of propensity score matching methods to different estimation methods. Traditionally, parametric models such as logit and probit are used to estimate the propensity score; current technology allows us to use computationally intensive methods, either semiparametric or nonparametric, to estimate it. We use Monte Carlo experiments to investigate the sensitivity of the treatment effect to different propensity score estimation models under the unconfoundedness assumption. The results show that estimates of the average treatment effect on the treated (ATT) are insensitive to the estimation method when the index function for treatment is linear, but the logit and probit models do a better job when the index function is nonlinear. The third chapter proposes a Cross-Sectionally Varying Coefficient (CVC) method to approximate individual treatment effects with nonexperimental data, along with the distribution of treatment effects, the average treatment effect on the treated, and the average treatment effect. The CVC method reparameterizes the outcome of no treatment and the treatment effect in terms of observable variables, and uses these observables together with a Bayesian estimator of their coefficients to approximate individual treatment effects. Monte Carlo simulations demonstrate the efficacy and applicability of the proposed estimator. The method is applied to two datasets: data from the U.S. Job Training Partnership Act (JTPA) program and a dataset that contains firms' seasoned equity offerings and operating performances.
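Propensity score matching for the ATT, the workhorse of the first two chapters, reduces to a few lines in its simplest single-nearest-neighbor form. The simulated covariates, treatment, and outcome below are placeholders for the insurance data, and the chapter's four matching algorithms are richer than the one shown here.

```python
# Sketch of propensity score matching for the ATT: logit propensity score,
# then single nearest-neighbor matching with replacement. Simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 3000
X = rng.normal(size=(n, 4))                            # observed covariates
d = (X[:, 0] + rng.logistic(size=n) > 0).astype(int)   # treatment (insurance)
y = 2.0 * d + X[:, 0] + rng.normal(size=n)             # outcome; true ATT = 2

ps = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]
treated = np.where(d == 1)[0]
control = np.where(d == 0)[0]

# For each treated unit, find the control with the closest propensity score
matches = control[np.abs(ps[control][None, :]
                         - ps[treated][:, None]).argmin(axis=1)]
att = np.mean(y[treated] - y[matches])
print(att)                                             # should be close to 2
```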
Item: Frontier efficiency, capital structure, and portfolio risk: An empirical analysis of U.S. banks (Elsevier, 2018)
Ding, Dong; Sickles, Robin C.
Firms' ability to effectively allocate capital and manage risks is the essence of their production and performance. This study investigates the relationship between capital structure, portfolio risk levels, and firm performance using a large sample of U.S. banks from 2001 to 2016. Stochastic frontier analysis (SFA) is used to construct a frontier that measures each firm's cost efficiency as a proxy for firm performance. We further examine the relationship by dividing the sample into different size and ownership classes, as well as into the most and least efficient banks. The empirical evidence suggests that more efficient banks increase capital holdings and take on greater credit risk while reducing risk-weighted assets. Moreover, it appears that the effect of increasing the capital buffer on banks' risk-taking depends on their level of cost efficiency, a proxy for how productively their intermediation services are performed. An additional finding is that the direction of the relationship between risk-taking and capital buffers differs depending on the measure of risk used.
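The SFA cost frontier used here can be sketched with the classic normal/half-normal maximum likelihood estimator in its cost form (composed error v + u with inefficiency u >= 0). The simulated data below stand in for the bank panel, and the likelihood shown is one standard specification rather than necessarily the exact one estimated in the paper.

```python
# Sketch of a normal/half-normal stochastic cost frontier by maximum
# likelihood (Aigner, Lovell and Schmidt 1977, cost form). For a cost
# frontier eps = v + u, u >= 0, the composed-error density is
# f(eps) = (2/sigma) * phi(eps/sigma) * Phi(lambda * eps / sigma).
# Simulated data stand in for the bank panel.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
eps = rng.normal(scale=0.3, size=n) + np.abs(rng.normal(scale=0.5, size=n))
y = X @ np.array([1.0, 0.6, 0.3]) + eps          # log cost

def negll(theta):
    beta, ls_v, ls_u = theta[:3], theta[3], theta[4]
    sv, su = np.exp(ls_v), np.exp(ls_u)          # sigma_v, sigma_u > 0
    sigma, lam = np.hypot(sv, su), su / sv
    e = y - X @ beta
    return -np.sum(np.log(2 / sigma) + norm.logpdf(e / sigma)
                   + norm.logcdf(lam * e / sigma))

res = minimize(negll, np.zeros(5), method="BFGS")
print(res.x[:3], np.exp(res.x[3:]))              # frontier; sigma_v, sigma_u
```

Cost efficiency scores would then follow from the conditional mean of u given the composed residual, in the manner of Jondrow et al. (1982).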
Item: Optimal Dynamic Production Policy: The Case of a Large Oil Field in Saudi Arabia (James A. Baker III Institute for Public Policy)
Gao, Weiyu; Hartley, Peter R.; Sickles, Robin C.; James A. Baker III Institute for Public Policy
We model the optimal dynamic oil production decisions for a stylized oilfield resembling Ghawar, the largest developed light oil field in Saudi Arabia. We use data from a number of sources to estimate the cost and revenue functions used in the dynamic programming model, and we pay particular attention to the dynamic aspects of oil production. We use a nonparametric smoothing technique, tensor splines, to approximate the value function. The optimal solution depends on assumptions about various exogenous variables, such as the discount rate and the timing of breakthroughs in the use of alternative energy, which we take to be solar energy; we account for uncertainty about the forecasts by examining the solutions under a number of scenarios. Our model is based on the hypothesis that oil production is chosen to maximize the discounted value of profits. Saudi oil policy reflects many political and strategic motives, and our analysis enables one to quantify the cost of pursuing these non-economic objectives.

Item: Probabilistic Seismic Hazard Perspectives on Japan's Nuclear Energy Policy: Implications for Energy Demand and Economic Growth (James A. Baker III Institute for Public Policy)
Sofiolea, Eleftheria; Sickles, Robin C.; Spanos, Pol; James A. Baker III Institute for Public Policy
This report reflects an effort to assess the seismic risk implications for the nuclear plants providing energy in Japan. Existing and projected plants, along with their power capacity, have been identified and cataloged, and historical data on seismic events deemed significant for the functionality and safety of the plants are included in terms of Richter magnitude. Documents describing standard procedures for the aseismic design of power plants have also been reviewed, and the Japanese industry's handling of the major Kobe seismic event (January 17, 1995) has been considered. It is believed that the procedures followed in designing and operating nuclear power plants reflect sound engineering practices. Barring an extraordinary seismic event, it is expected that the nuclear-based energy supply in Japan can be maintained with manageable disruptions. Nevertheless, it is recommended that more focused studies regarding individual plants, especially the older ones, be undertaken in the future regarding the probability of 'incapacitating' seismic events. In this manner, a reasonable, reliable model can be calibrated providing the expected percentage of nuclear power loss in Japan in any given time period. In view of Japan's stated policy of heavy reliance on nuclear energy, it is nonetheless prudent to plan for seismic events that could significantly reduce its electricity generating capacity. Such a shortfall would have substantial impacts on world energy markets, on Japan's ability to provide clean energy in line with its commitments under the Kyoto Protocol, and on Japan's economic growth. Under standard growth scenarios, we estimate that seismic events that prevent planned new capacity from being brought on line would reduce growth in total factor productivity by about one-half percent per year. This would dampen Japanese energy demand to a level of 2400 (10^13 Btu) instead of the 2488.5 (10^13 Btu) that we forecast for 2010. The impact on economic growth is due to the increase in CO2 emissions caused by substitute energy sources, particularly imported oil; such increases would need to be moderated by modifying the aggregate production process, and such a change has implications for technical and efficiency change and thus for growth in total factor productivity.