Browsing by Author "Felleisen, Matthias"

Item: A correspondence between Scheme and the $\lambda_v$-CS-calculus (1989)
Arbilla, Laura; Felleisen, Matthias
We study the relationship between the programming language Scheme and the $\lambda_v$-CS-calculus. To this end, we define a correspondence between Scheme and $\Lambda_{CS}$-terms, the language of the calculus, where every Scheme term is an abbreviation of a $\Lambda_{CS}$-term. Although Scheme and $\Lambda_{CS}$ have constructs that roughly correspond to each other, the relationship is rather complex: the inclusion of domain predicates in Scheme enforces the uniform treatment of all values in Scheme as procedural abstractions. Therefore, we must conclude that $\Lambda_{CS}$ cannot express Scheme as a notational definition, but only simulate it through a translation. On the other hand, the embedding reveals Scheme's object-oriented nature relative to $\Lambda_{CS}$, and provides a formal basis for the development and formal investigation of an object-oriented extension of Scheme.

Item: Behavioral Interface Contracts for Java (2005-08-25)
Felleisen, Matthias; Findler, Robert Bruce
Programs should consist of off-the-shelf, interchangeable, black-box components that are produced by a network of independent software companies. These components should not only come with type signatures but also with contracts that describe other aspects of their behavior. One way to express contracts is to state pre- and post-conditions for externally visible functions. These pre- and post-conditions should then be validated during evaluation or possibly even during compilation. If a function call fails to satisfy its contract, the run-time system should blame the faulty program component. Behavioral contracts in the form of assertions are well understood in the world of procedural languages. Their addition to class and interface hierarchies in object-oriented programming languages, however, raises many new and interesting questions. The most complicating factor is that objects can pass between components and trigger call-backs. Another problem is that object-oriented languages allow objects to satisfy several interfaces at once. In this paper, we analyze existing approaches to adding contracts to class-based languages and show how they blame the wrong component in certain situations for breach of contract. We then present a conservative extension of Java that allows programmers to specify method contracts in interfaces. The extension is a compromise between a consistent enforcement of contracts and language design concerns. In the future, we plan to clarify the relationship between contracts and contract violations with a rigorous analysis.

Item: Behavioral software contracts (2002)
Findler, Robert Bruce; Felleisen, Matthias
To sustain a market for software components, component producers and consumers must agree on contracts. These contracts must specify each party's obligations. To ensure that both sides meet their obligations, they must also agree on standards for monitoring contracts and assigning blame for contract violations. This dissertation explores these issues for contracts that specify the sequential behavior of methods and procedures as pre- and post-conditions. In the process, it makes three main contributions. First, it shows how existing contract checking systems for object-oriented languages incorrectly enforce contracts in the presence of subtyping, and how to check such contracts properly. Second, it shows how to enforce pre- and post-condition style contracts on higher-order procedures and correctly assign blame for contract violations in that context. Finally, it lays the groundwork for a theory of contract checking, in the spirit of the theory of type checking; in particular, it states and proves the first soundness result for contracts, guaranteeing that the contract checker properly enforces contracts and properly assigns blame for contract violations.
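
The blame mechanism described in the two contract entries above later became the basis of Racket's contract system. As a rough illustration of the higher-order case those works address, here is a minimal sketch in Racket (function names invented for this example): a contract on a higher-order procedure must also monitor the function-valued argument, and when that argument misbehaves, blame must fall on the component that supplied it.

```racket
#lang racket
;; A minimal sketch (not from the papers) of a higher-order contract in
;; Racket, whose contract system descends from this line of work.
;; `apply-twice' promises to call its argument `f' only on integers and
;; to return an integer; the contract boundary records whom to blame.
(define/contract (apply-twice f x)
  (-> (-> integer? integer?) integer? integer?)
  (f (f x)))

;; OK: (apply-twice add1 3) => 5
;; Contract violation blaming the *caller*, who supplied a function that
;; breaks its promise to return an integer:
;;   (apply-twice (lambda (n) "oops") 3)
```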

Item: Classes and Mixins (1999)
Felleisen, Matthias; Flatt, Matthew; Krishnamurthi, Shriram
While class-based object-oriented programming languages provide a flexible mechanism for reusing and managing related pieces of code, they typically lack linguistic facilities for specifying a uniform extension of many classes with one set of fields and methods. As a result, programmers are unable to express certain abstractions over classes. In this paper we develop a model of class-to-class functions that we refer to as mixins. A mixin function maps a class to an extended class by adding or overriding fields and methods. Programming with mixins is similar to programming with single-inheritance classes, but mixins more directly encourage programming to interfaces. The paper develops these ideas within the context of Java. The results are an intuitive rewriting model of an essential Java subset; an extension that explains and models mixins; and type soundness theorems for these languages.

Item: Fully Abstract Semantics for Observably Sequential Languages (1994-01)
Cartwright, Robert; Curien, Pierre-Louis; Felleisen, Matthias
One of the major challenges in denotational semantics is the construction of a fully abstract semantics for a higher-order sequential programming language. For the past fifteen years, research on this problem has focused on developing a semantics for PCF, an idealized functional programming language based on the typed lambda calculus. Unlike most practical languages, PCF has no facilities for observing and exploiting the evaluation order of arguments to procedures. Since we believe that these facilities play a crucial role in sequential computation, this paper focuses on a sequential extension of PCF, called SPCF, that includes two classes of control operators: a possibly empty set of error generators and a collection of catch and throw constructs. For each set of error generators, the paper presents a fully abstract semantics for SPCF. If the set of error generators is empty, the semantics interprets all procedures, including catch and throw, as Berry-Curien sequential algorithms. If the language contains error generators, procedures denote manifestly sequential functions. The manifestly sequential functions form a Scott domain that is isomorphic to a domain of decision trees, which is the natural extension of the Berry-Curien domain of sequential algorithms in the presence of errors.

Item: Implementing a Static Debugger for a First-Order Functional Programming Language (2001-04)
Felleisen, Matthias; Steckler, Paul A.
A static debugger assists a programmer in finding potential errors in programs. The key to a static debugger is set-based analysis (SBA). Many authors have described formulations of SBA, but leave open gaps among that theory, its implementation, and its use for a particular purpose. An implementation needs to confront these practical issues. While some of the implementation proceeds directly from the formal description of the analysis, there is much fine detail in the code. With a series of reports, we intend to bridge the gap between theory and implementation. In this first report, we implement an analyzer for a simple, first-order functional language and show how to use the analysis in a static debugger.
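
The class-to-class functions of "Classes and Mixins" above survive directly in Racket, where classes are first-class values, so a mixin is literally a function from a class to a derived class. A minimal sketch (class and method names invented):

```racket
#lang racket
(require racket/class)

;; A mixin as a plain function from a class to an extended class, in the
;; spirit of "Classes and Mixins": it adds one field and two methods to
;; whatever base class it is applied to.
(define (counting-mixin base%)
  (class base%
    (super-new)
    (define count 0)                        ; added (private) field
    (define/public (bump!) (set! count (add1 count)))
    (define/public (get-count) count)))

;; The same extension applies uniformly to unrelated classes:
(define counted% (counting-mixin object%))
(define c (new counted%))
(send c bump!)
(send c get-count) ; => 1
```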

Item: Intro to Logic (Rice University, 2010-06-11)
Barland, Ian; Kolaitis, Phokion; Vardi, Moshe; Felleisen, Matthias; Greiner, John
An introduction to reasoning with propositional and first-order logic, with applications to computer science. Part of the TeachLogic Project (www.teachlogic.org).

Item: Linguistic reuse (2001)
Krishnamurthi, Shriram; Felleisen, Matthias
Programmers employ a multitude of languages to build systems. Some are general-purpose languages; others are specific to individual domains. These assist programmers with at least three different tasks: domain modeling, system validation, and representing the structure of their general-purpose programs. As a result, programming languages have become key factors in the software engineering process. They are, however, rarely codified into the process and treated systematically. My dissertation develops a framework that treats programming languages as software engineering artifacts. In this framework, languages are identifiable, reusable entities that programmers can compose and link to produce larger languages; furthermore, languages themselves meet the properties of software components. Programmers can augment this lateral growth of languages with vertical growth, by producing languages that synthesize languages. Thus, software construction becomes a multi-phase process: in later phases, programmers use languages to build programs; in earlier phases, they employ languages to construct languages. This treatment of languages as artifacts addresses several open questions.

Item: Modeling an algebraic stepper (2001)
Clements, John Brinckerhoff; Felleisen, Matthias
Programmers rely on the correctness of their tools. Semanticists have long studied the correctness of compilers, but we make the case that other tools deserve semantic models, too, and that using these models can help in developing these tools. We examine these ideas in the context of DrScheme's stepper. The stepper operates within the existing evaluator, placing breakpoints and reconstructing source expressions from information placed on the stack. We must ask whether we can prove the correspondence between the source expressions emitted by the stepper and the steps in the formal reduction semantics. To answer this question, we develop a high-level semantic model of the extended compiler and run-time machinery. Rather than modeling the evaluation as a low-level machine, we model the relevant low-level features of the stepper's implementation in a high-level reduction semantics. The higher-level model greatly simplifies the correctness proof. We expect the approach to apply to other semantics-based tools.

Item: Object-oriented Programming Languages Need Well-founded Contracts (2001-01-01)
Felleisen, Matthias; Findler, Robert Bruce; Latendresse, Mario
Over the past few years, the notion of building software from components has become popular again. The goal is to produce systems by adapting and linking off-the-shelf modules from a pool of interchangeable components. To turn this idea into reality, the formal descriptions of software components need to specify more than the type signatures of their exported services. At a minimum, they should contain assertions about critical properties of a component's behavior. By monitoring such behavioral contracts at run time, language implementations can pinpoint faulty components, and programmers can replace them with different ones. In this paper, we study the notion of behavioral contracts in an object-oriented setting. While the use of behavioral contracts is well understood in the world of procedural languages, their addition to object-oriented programming languages poses remarkably subtle problems. All existing contract enforcement tools for Java fail to catch flaws in contracts or blame the wrong component for contractual violations. The failures point to a lack of foundational research on behavioral contracts in the OOP world.
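
As a hedged sketch of the pre- and post-condition style these contract papers discuss, Racket's dependent contract combinator ->i can state both conditions and monitor them at run time with blame assignment; the withdraw example is invented, not from the papers.

```racket
#lang racket
;; Sketch of pre- and post-conditions as a run-time-monitored contract
;; (example invented). The precondition requires 0 <= amount <= balance;
;; the postcondition relates the result to both inputs.
(define/contract (withdraw balance amount)
  (->i ([balance (and/c real? (>=/c 0))]
        [amount (balance) (and/c (>=/c 0) (<=/c balance))])
       [result (balance amount) (=/c (- balance amount))])
  (- balance amount))

;; (withdraw 100 30)  => 70
;; (withdraw 100 200) signals a contract violation blaming the caller.
```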

Item: Parameter-passing and the lambda calculus (1991)
Crank, Erik T.; Felleisen, Matthias
The choice of a parameter-passing technique is an important decision in the design of a high-level programming language. To clarify some of the semantic aspects of the decision, we develop, analyze, and compare modifications of the $\lambda$-calculus for the most common parameter-passing techniques. More specifically, for each parameter-passing technique we provide (1) a program rewriting semantics for a language with side-effects and first-class procedures based on the respective parameter-passing technique; (2) an equational theory derived from the rewriting semantics; (3) a formal analysis of the correspondence between the calculus and the semantics; and (4) a strong normalization theorem for the largest possible imperative fragment of the theory. A comparison of the various systems reveals that Algol's call-by-name indeed satisfies the well-known $\beta$ rule of the original $\lambda$-calculus, but at the cost of complicated axioms for the imperative part of the theory. The simplest and most appealing axiom system appears to be the one for a call-by-value language with reference cells as first-class values.

Item: Practical soft typing (1995)
Wright, Andrew Kevin; Cartwright, Robert S.; Felleisen, Matthias
Soft typing is an approach to type checking for dynamically typed languages. Like a static type checker, a soft type checker infers syntactic types for identifiers and expressions. But rather than reject programs containing untypable fragments, a soft type checker inserts explicit run-time checks to ensure safe execution. Soft typing was first introduced in an idealized form by Cartwright and Fagan. This thesis investigates the issues involved in designing a practical soft type system. A soft type system for a purely functional, call-by-value language is developed by extending the Hindley-Milner polymorphic type system with recursive types and limited forms of union types. The extension adapts Rémy's encoding of record types with subtyping to union types. The encoding yields more compact types and permits more efficient type inference than Cartwright and Fagan's early technique. Correctness proofs are developed by employing a new syntactic approach to type soundness. As the type inference algorithm yields complex internal types that are difficult for programmers to understand, a more familiar language of presentation types is developed, along with translations between internal and presentation types. To address realistic programming languages like Scheme, the soft type system is extended to incorporate assignment, continuations, pattern matching, data definition, records, modules, explicit type annotations, and macros. Imperative features like assignment and continuations are typed by a new, simple method of combining imperative features with Hindley-Milner polymorphism. The thesis shows soft typing to be practical by presenting a prototype soft type system for Scheme. Type information determined by the prototype is sufficiently precise to provide useful diagnostic aid to programmers and to effectively minimize run-time checking: the type checker typically eliminates 90% of the run-time checks that are necessary for safe execution with dynamic typing. This reduction in run-time checking leads to significant speedups for some benchmarks. Through several examples, the thesis shows how prototypes, developed using a purely semantic understanding of types as sets of values, can be transformed into robust, maintainable, and efficient programs by rewriting them to accommodate better syntactic type assignment.
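
A soft type checker, as described in "Practical soft typing" above, leaves provably safe operations unchecked and wraps only the questionable ones in run-time checks. A hand-written Racket analogue of such an inserted check (names invented) might look like this:

```racket
#lang racket
;; Sketch of the kind of check a soft typer inserts (names invented).
;; Where the analysis proves the argument is a pair, `car' runs
;; unchecked; where it cannot, the call goes through a checking wrapper.
(define (check-pair v who)
  (if (pair? v)
      v
      (error who "run-time check failed: expected a pair, got ~e" v)))

;; unchecked: the analysis proved xs is a pair at this call site
(define (first-of-known xs) (car xs))

;; checked: the analysis could not rule out non-pairs
(define (first-of-unknown v) (car (check-pair v 'first-of-unknown)))
```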

Item: Programming languages for reusable software components (2000)
Flatt, Matthew Raymond; Felleisen, Matthias
Programming languages offer a variety of constructs to support code reuse. For example, functional languages provide function constructs for encapsulating expressions to be used in multiple contexts. Similarly, object-oriented languages provide class (or class-like) constructs for encapsulating sets of definitions that are easily adapted for new programs. Despite the variety and abundance of such programming constructs, however, existing languages are ill-equipped to support component programming with reusable software components. Component programming differs from other forms of reuse in its emphasis on the independent development and deployment of software components. In its ideal form, component programming means building programs from off-the-shelf components that are supplied by a software-components industry. This model suggests a strict separation between the producer and consumer of a component. The separation, in turn, implies separate compilation for components, allowing a producer to test and distribute compiled components rather than proprietary source code. Since the consumer cannot modify a compiled software component, each component must be defined and compiled in a way that gives the consumer flexibility in linking components together. This dissertation shows how a language for component programming can support both separate compilation and flexible linking. To that end, it expounds the principle of external connections: a language should separate component definitions from component connections. Neither conventional module constructs nor conventional object-oriented constructs follow the principle of external connections, which explains why neither provides an effective language for component programming. We describe new language constructs for modules and classes, called units and mixins, respectively, that enable component programming in each domain. The unit and mixin constructs modeled in this dissertation are based on constructs that we implemented for the MzScheme programming language, a dialect of the dynamically typed language Scheme. To demonstrate that units and mixins work equally well for statically typed languages, such as ML or Java, we provide typed models of the constructs as well as untyped models, and we formally prove the soundness of the typed models.
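
The unit construct from this dissertation lives on as Racket's racket/unit library, which keeps the principle of external connections: a signature names a component's ports, and linking is specified outside any particular implementation. A minimal sketch with invented names:

```racket
#lang racket
(require racket/unit)

;; The component's external connections, named separately from any
;; implementation (the principle of external connections):
(define-signature store^ (put get))

;; One implementation; a different unit could export the same signature
;; and be linked in its place without touching client code.
(define-unit hash-store@
  (import)
  (export store^)
  (define table (make-hash))
  (define (put k v) (hash-set! table k v))
  (define (get k) (hash-ref table k #f)))

;; Linking happens outside the component definition:
(define-values/invoke-unit hash-store@ (import) (export store^))
(put 'x 42)
(get 'x) ; => 42
```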

Item: Programming the Web with high-level programming languages (2001)
Graunke, Paul Thorsen; Felleisen, Matthias; Cartwright, Robert S.
Concepts from high-level languages can greatly simplify the design and implementation of CGI programs. This dissertation develops two systems for implementing these programs. The first technique* relies on a custom Web server that dynamically loads CGI programs using the operating-system-style services of MrEd, an extension of Scheme. The server implements programming mechanisms using continuations that allow the CGI program to interact with the user in a natural manner. The second technique† relies on program transformations from functional language compilation. It allows the use of standard servers and alleviates most of the memory consumption on the server. In my thesis I discuss the advantages and disadvantages of each approach, and I conclude with suggestions for further investigations into this topic.
*The first technique previously appeared at the European Symposium on Programming, 2001, in a paper with the same title as this dissertation, coauthored with Shriram Krishnamurthi, Steve van der Hoeven, and Matthias Felleisen.
†The compilation-based technique was submitted to the International Conference on Functional Programming in a paper titled "How to Design and Generate CGI Programs: Applying Functional Compiler Techniques to the Real World," coauthored with Robert Bruce Findler, Shriram Krishnamurthi, and Matthias Felleisen.

Item: Selector-based versus conditional-constraint-based value-flow analysis of programs (2002)
Meunier, Philippe Bernard; Felleisen, Matthias
MrSpidey, a program debugger for PLT Scheme, infers the flow of values in a program. It uses Flanagan's selector-based analysis framework. Unfortunately, due to limitations of that framework, the debugger often flags potential errors where none exist. In particular, it is too conservative when analyzing n-ary functions, functions with rest arguments, and arity-overloaded functions (case-lambda). Flanagan's analysis can be extended to give more precise results, but at the cost of a high running time. We therefore conclude that this framework is not well suited to analyzing functions in real-world programming languages. To overcome the limitations of Flanagan's framework, we develop an alternative based on Palsberg and Schwartzbach's conditional constraint rules. After scaling the analysis to the full R5RS Scheme language (adding primitives using types, multiple values, imperative features, and generative structures), experiments show that it infers value sets as precisely as the extended selector-based analysis and runs significantly faster.

Item: Semantic program dependence graphs (1992)
Parsons, Rebecca Jane; Felleisen, Matthias; Cartwright, Robert S.
Semantic program dependence graphs, or semantic pdgs, are an attractive intermediate program representation for use in advanced optimizing and parallelizing compilers. The semantic pdg, which is based on the program dependence graph, has a compositional semantics that provides an elegant characterization of the types of dependences that arise in imperative programming languages. In addition, the semantic pdg has a simple operational semantics that serves as the basis of an equational calculus for reasoning about semantic pdgs. Finally, the algorithms for creating the semantic pdg are efficient enough to allow the use of this program representation in actual compilers. The semantic pdg is the result of a study, using denotational semantics, of the notions of data and control dependence in imperative programming languages. Semantic pdgs include a new component, the valve node, which ensures the data-flow character of the semantic pdg, even in the presence of conditional assignments, and provides the control information necessary to perform many important program optimizations. The valve node is a natural result of the derivation step that addresses the data dependence relation. The semantic pdg utilizes the new concept of a partial array to allow for optimizations of array accesses while maintaining the mathematical elegance of the data-flow semantics of the pdg. The semantic pdg is not only an elegant representation from a mathematical perspective but also a useful one from a practical perspective: the structure is particularly well suited for use in optimizing and parallelizing compilers since it explicates the important relationships among the different statements in a program. We have developed a program representation that is powerful enough to represent the behavior of a program, that provides the information needed to optimize the program, and that has a precise mathematical description. The development of the semantic pdg reconciles the often contradictory requirements of mathematical elegance and practicality.
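
The first technique in "Programming the Web with high-level programming languages" above, interaction via captured continuations, survives in Racket's web server. A hedged sketch in the web-server/insta teaching language (page contents invented): send/suspend captures the rest of the computation, embeds it in the form's action URL, and resumes the program when the user submits.

```racket
#lang web-server/insta
;; Sketch of continuation-based Web interaction (page content invented).
;; send/suspend turns the current continuation into a URL; submitting
;; the form resumes the program right here with the user's request.
(define (start req)
  (define answer
    (send/suspend
     (lambda (k-url)
       (response/xexpr
        `(html (body (form ([action ,k-url])
                           "Your name: "
                           (input ([name "name"]))
                           (input ([type "submit"])))))))))
  (response/xexpr
   `(html (body "Hello, "
                ,(extract-binding/single
                  'name (request-bindings answer))))))
```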

Item: Set-Based Analysis for Full Scheme and Its Use in Soft-Typing (1995-10)
Felleisen, Matthias; Flanagan, Cormac
Set-based analysis is an efficient and accurate program analysis for higher-order languages. It exploits an intuitive notion of approximation that treats program variables as sets of values. We present a new derivation of set-based analysis, based on a reduction semantics, that substantially simplifies previous formulations. Most importantly, the derivation easily extends from a functional core language to include imperative features such as assignments and first-class continuations, and supports the first correctness proof of set-based analysis for these imperative features. The paper includes an implementation of the derived analysis for a Scheme-like language, and describes a soft-typing algorithm that eliminates type checks based on the information produced by the analysis.

Item: The formal relationship between direct and continuation-passing style optimizing compilers: A synthesis of two paradigms (1995)
Sabry, Amr Afaf; Felleisen, Matthias
Compilers for higher-order programming languages like Scheme, ML, and Lisp can be broadly characterized as either "direct compilers" or "continuation-passing style (CPS) compilers", depending on their main intermediate representation. Our central result is a precise correspondence between the two compilation strategies. Starting from the theoretical foundations of direct and CPS compilers, we develop relationships between the main components of each compilation strategy: generation of the intermediate representation, simplification of the intermediate representation, code generation, and data-flow analysis. For each component, our results pinpoint the superior compilation strategy, the reason for which it dominates the other strategy, and ways to improve the inferior strategy. Furthermore, our work suggests a synthesis of the direct and CPS compilation strategies that combines the best aspects of each. The contributions of this thesis include a comprehensive analysis of the properties of the CPS intermediate representation, a new optimal CPS transformation and its inverse, a new intermediate representation for direct compilers, an equivalence between the canonical equational theories for reasoning about continuations and general computational effects, a sound and complete equational axiomatization of the semantics of call-by-value control operators, a methodology for deriving equational logics for imperative languages, and formal relationships between code generators and data flow analyzers for direct and CPS compilers. These contributions unify concepts in two distinct compilation strategies, and can be used to compare specific compilers.
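
To make the direct-versus-CPS comparison in the preceding entry concrete, here is a hedged sketch of the textbook (Fischer/Plotkin-style) call-by-value CPS transformation; the administrative redexes it produces are exactly what an "optimal", compacting transformation like the one in this thesis avoids. The code is an invented illustration, not from the thesis.

```racket
#lang racket
(require racket/match)

;; A naive call-by-value CPS transform for the pure lambda calculus
;; (invented illustration). `k' is the syntactic continuation to apply.
(define (cps e k)
  (match e
    [(? symbol? x) `(,k ,x)]
    [`(lambda (,x) ,body)
     (define k2 (gensym 'k))
     `(,k (lambda (,x ,k2) ,(cps body k2)))]
    [`(,fun ,arg)
     (define f (gensym 'f))
     (define a (gensym 'a))
     (cps fun `(lambda (,f) ,(cps arg `(lambda (,a) (,f ,a ,k)))))]))

;; (cps '(f x) 'halt) yields nested administrative redexes, something
;; like ((lambda (f1) ((lambda (a2) (f1 a2 halt)) x)) f), whereas a
;; compacting transformation produces (f x halt) directly.
```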

Item: The Semantics of Futures (1994-10)
Felleisen, Matthias; Flanagan, Cormac
The future annotation introduced by MultiLisp provides a simple method for taming the implicit parallelism of functional programs. Prior research on futures has concentrated on implementation and design issues, and has largely ignored the development of a semantic characterization of futures. This paper presents four operational semantics for an idealized functional language with futures, with varying degrees of intensionality. The first semantics defines future to be a semantically transparent annotation. The second semantics interprets a future expression as a potentially parallel task. The third semantics explicates the coordination of parallel tasks and the need for touch operations on placeholder-strict arguments to certain primitive operations by introducing placeholder objects. The fourth and last semantics is a low-level refinement of the third, which explicates just enough information to permit the smooth derivation of program analyses. The paper includes proofs showing the equivalence of these semantics.

Item: Well-Founded Touch Optimization for Futures (1994-10-01)
Felleisen, Matthias; Flanagan, Cormac
The future annotations of MultiLisp provide a simple method for taming the implicit parallelism of functional programs, but require touch operations at all placeholder-strict positions of program operations to ensure proper synchronization between threads. These touch operations contribute substantially to a program's execution time. We use an operational semantics of future, developed in a previous paper, to derive a program analysis algorithm, and an optimization algorithm based on the analysis that removes provably redundant touch operations. Experiments with the Gambit compiler indicate that this optimization significantly reduces the overhead imposed by touch operations.
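
Racket's future library keeps both the construct and the vocabulary of these two papers: future starts a potentially parallel task, and touch synchronizes on its placeholder. A minimal sketch (workload invented):

```racket
#lang racket
(require racket/future)

;; Sketch of future/touch (workload invented). `future' starts a
;; potentially parallel task; `touch' blocks until its value is ready,
;; which is exactly the synchronization the touch-optimization paper
;; aims to prove redundant wherever the value is already resolved.
(define (slow-sum n)
  (for/fold ([acc 0]) ([i (in-range n)])
    (+ acc i)))

(define f (future (lambda () (slow-sum 10000000))))
(define other (slow-sum 5000000))  ; overlaps with the future's work
(+ other (touch f))
```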