Browsing by Author "Vardi, Moshe Y."
Now showing 1 - 20 of 46
Item: Algorithmic Improvements in Approximate Counting for Probabilistic Inference: From Linear to Logarithmic SAT Calls (2016-11-28)
Authors: Chakraborty, Supratik; Meel, Kuldeep S.; Vardi, Moshe Y.
Probabilistic inference via model counting has emerged as a scalable technique with strong formal guarantees, thanks to recent advances in hashing-based approximate counting. State-of-the-art hashing-based counting algorithms use an NP oracle (a SAT solver in practice), such that the number of oracle invocations grows linearly in the number of variables n in the input constraint. We present a new approach to hashing-based approximate model counting in which the number of oracle invocations grows logarithmically in n, while still providing strong theoretical guarantees. We use this technique to design an algorithm for #CNF with probably approximately correct (PAC) guarantees. Our experiments show that this algorithm outperforms state-of-the-art techniques for approximate counting by 1-2 orders of magnitude in running time. We also show that our algorithm can be easily adapted to give a new fully polynomial randomized approximation scheme (FPRAS) for #DNF.
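The shift from linearly to logarithmically many oracle calls comes from searching over the number of hash constraints rather than sweeping it one at a time. Below is a minimal, self-contained sketch of that idea, assuming a brute-force stand-in for the SAT oracle and a fixed prefix family of random XOR hash constraints; the toy formula, threshold, and helper names are illustrative, not the paper's implementation.

```python
import itertools, random

N = 12  # number of Boolean variables

def satisfies(bits):
    # Toy "formula": hypothetical stand-in for a CNF + SAT oracle.
    return bits[0] and bits[1]          # 2**10 = 1024 solutions

def bounded_count(xors, cap):
    """Toy NP-oracle call: count solutions that also satisfy every
    XOR constraint, giving up once the count exceeds `cap`."""
    c = 0
    for bits in itertools.product((False, True), repeat=N):
        if satisfies(bits) and all(sum(bits[i] for i in xs) % 2 == p
                                   for xs, p in xors):
            c += 1
            if c > cap:
                return c
    return c

def approx_count(thresh=8):
    # One fixed sequence of random XOR constraints; using prefixes of
    # it makes the cell count monotone in m, so binary search applies.
    xors = [([i for i in range(N) if random.random() < 0.5],
             random.randint(0, 1)) for _ in range(N)]
    lo, hi = 0, N
    while lo < hi:                       # O(log N) oracle calls
        m = (lo + hi) // 2
        if bounded_count(xors[:m], thresh) > thresh:
            lo = m + 1                   # cell still too big: hash more
        else:
            hi = m
    return bounded_count(xors[:lo], thresh) * 2 ** lo

random.seed(0)
print(approx_count())                    # rough estimate of 1024
```

Because the m constraints are always a prefix of one fixed sequence, the cell count shrinks monotonically as m grows, which is what makes the binary search sound.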
Item: Assertion-Based Flow Monitoring of SystemC Models (2014-04-22)
Authors: Dutta, Sonali; Vardi, Moshe Y.; Chaudhuri, Swarat; Nakhleh, Luay K.
SystemC is the de facto system modeling language, and verification of SystemC models is a major research direction. Assertion-based monitoring is a dynamic verification technique that allows the user to verify formal properties of the system at runtime by automatically generating monitors from them. A typical hardware-software system is concurrent and reactive; examples include a computer or an ATM server. Such systems perform multiple jobs of different types during their execution. For example, jobs in a computer include "launching a web browser" and "searching the file system". A job can be submitted by an external user or generated by an internal component of the system. A job can begin at any point in time during the execution of the system, the beginning time being completely unknown beforehand. A job begins with a set of inputs, travels from one system component to another to generate a set of outputs, and ends after a finite amount of time. Since a job "flows" among the system components, we call it a flow. In a concurrent system, multiple flows can begin and travel through the system at the same time. This work focuses on verifying formal properties about these dynamic and concurrent flows (called flow properties) in a concurrent reactive system modeled in SystemC. The contribution of this thesis is threefold: first, a light-weight C++ library, called Flow Library, that enables modeling of flows in SystemC in a structured manner; second, an algorithm, implemented in the FlowMonGen tool, to generate a C++ monitor class from a flow property, which is an LTL formula interpreted over the finite trace of a flow; third, a dynamic and decentralized algorithm to monitor the concurrent flows in a SystemC model. Our completely automated and efficient Flow Monitoring Framework implements this algorithm.

Item: Automata Linear Dynamic Logic on Finite Traces (2021-08-27)
Authors: Smith, Kevin Wayne; Vardi, Moshe Y.
Temporal logics are widely used by the Formal Methods and AI communities. Linear Temporal Logic (LTL) is a popular temporal logic and is valued for its ease of use as well as its balance between expressiveness and complexity. LTL is equivalent in expressiveness to Monadic First-Order Logic, and satisfiability for LTL is PSPACE-complete. Linear Dynamic Logic (LDL), another temporal logic, is equivalent to Monadic Second-Order Logic, but its method of satisfiability checking cannot be applied to a nontrivial subset of LDL formulas. In this thesis I introduce Automata Linear Dynamic Logic on Finite Traces (ALDLf) and show that satisfiability for ALDLf formulas is in PSPACE. A variant of Linear Dynamic Logic on Finite Traces (LDLf), ALDLf combines propositional logic with nondeterministic finite automata (NFA) to express temporal constraints. ALDLf is equivalent in expressiveness to Monadic Second-Order Logic. This is a gain in expressiveness over LTL at no cost.
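Since ALDLf expresses temporal constraints directly with NFAs, the basic evaluation step on a finite trace is ordinary NFA membership. A minimal sketch, with the trace and transition representation chosen here as an assumption:

```python
def nfa_accepts(trace, starts, delta, finals):
    """Run an NFA over a finite trace; delta maps (state, letter)
    to a set of successor states."""
    current = set(starts)
    for letter in trace:
        current = {t for q in current
                     for t in delta.get((q, letter), ())}
    return bool(current & finals)

# NFA for the temporal constraint "grant is eventually raised".
delta = {(0, "idle"): {0}, (0, "grant"): {0, 1},
         (1, "idle"): {1}, (1, "grant"): {1}}
print(nfa_accepts(["idle", "idle", "grant"], {0}, delta, {1}))  # True
print(nfa_accepts(["idle", "idle"], {0}, delta, {1}))           # False
```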
Item: Automated Abstraction of Manipulation Domains for Cost-Based Reactive Synthesis (IEEE, 2019)
Authors: He, Keliang; Lahijanian, Morteza; Kavraki, Lydia E.; Vardi, Moshe Y.
When robotic manipulators perform high-level tasks in the presence of another agent, e.g., a human, they must have a strategy that considers possible interferences in order to guarantee task completion and efficient resource usage. One approach to generating such strategies is called reactive synthesis. Reactive synthesis requires an abstraction, which is a discrete structure that captures the domain in which the robot and other agents operate. Existing works discuss the construction of abstractions for mobile robots through space decomposition; however, they cannot be applied to manipulation domains due to the curse of dimensionality caused by the manipulator and the objects. In this work, we present the first algorithm for automatic abstraction construction for reactive synthesis of manipulation tasks. We focus on tasks that involve picking and placing objects, with possible extensions to other types of actions. The abstraction also provides an upper bound on path-based costs for robot actions. We combine this abstraction algorithm with our reactive synthesis planner to construct correct-by-construction plans. We demonstrate the power of the framework on a UR5 robot, completing complex tasks in the face of interference by a human.

Item: BDD-Based Boolean Synthesis (2018-04-18)
Authors: Martinelli Tabajara, Lucas; Vardi, Moshe Y.
Synthesizing a Boolean function satisfying a given relation between inputs and outputs is a problem with many applications in the verification and design of hardware and software systems. In digital logic, Boolean synthesis can be used to automatically design circuits that produce the desired behavior. In program synthesis, Boolean functions can represent programs manipulating bit vectors and other data over finite domains. Additionally, Boolean synthesis is an essential component of reactive synthesis from temporal specifications, a problem that can be applied to automate the design of safety-critical systems. Binary Decision Diagrams (BDDs) have historically been popular data structures for representing Boolean functions, and they are especially useful for reactive synthesis, where they are particularly well-suited for fixpoint computations over sets of states. However, recent works in Boolean synthesis have raised concerns about the scalability of BDDs and chosen alternative approaches, such as SAT solvers. In this thesis, we show that BDDs remain viable structures for Boolean synthesis by developing a BDD-based synthesis framework that can in many cases outperform alternative approaches. For cases where efficient BDD representations are hard to construct, we demonstrate that techniques for decomposing a Boolean relation into multiple smaller BDDs can be used to make BDD-based approaches competitive.

Item: BDD-based decision procedures for modal logic K (2003)
Authors: Pan, Guoqiang; Vardi, Moshe Y.
We describe BDD-based decision procedures for the modal logic K. Our approach is inspired by the automata-theoretic approach, but we avoid explicit automata construction. Our algorithms compute the fixpoint of a set of types, which are sets of formulas satisfying certain consistency conditions. We use BDDs to represent and manipulate such sets. By viewing sets of types as a symbolic encoding of all possible models of a formula, we developed particle-based and lean-vector-based representation techniques that give more compact representations. By taking advantage of the finite-tree-model property of K, we introduced a level-based evaluation scheme to speed up construction and reduce memory consumption. We also studied the effect of formula simplification on the decision procedures. As part of the benchmarking procedure, we compared the BDD-based approach with a representative selection of current approaches, and we developed an algorithm to translate K to QBF based on our decision procedure. Experimental results show that the BDD-based approach dominates for modally heavy formulas, while search-based approaches dominate for propositionally heavy formulas.

Item: Bisimulation Minimization in an Automata-Theoretic Verification Framework (1998-10-27)
Authors: Fisler, Kathi; Vardi, Moshe Y.
Bisimulation is a seemingly attractive state-space minimization technique because it can be computed automatically and yields the smallest model preserving all mu-calculus formulas. It is considered impractical for symbolic model checking, however, because the required BDDs are prohibitively large for most designs. We revisit bisimulation minimization, this time in an automata-theoretic framework. Bisimulation has potential in this framework because, after intersecting the design with the negation of the property, minimization can ignore most of the atomic propositions. We compute bisimulation using an algorithm due to Lee and Yannakakis that represents bisimulation relations by their equivalence classes and only explores reachable classes. This greatly improves on the time and memory usage of naive algorithms. We demonstrate that bisimulation is practical for many designs within the automata-theoretic framework. In most cases, however, the cost of performing this reduction still outweighs that of conventional model checking.
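For intuition about bisimulation minimization itself, here is a naive partition-refinement sketch (not the Lee-Yannakakis algorithm cited above, which represents the relation by its equivalence classes and explores only reachable ones): states are grouped by observation, and blocks are split until every block's members step into the same blocks.

```python
def bisimulation_classes(states, succ, label):
    """Partition refinement: split blocks until all states in a block
    agree on their label and on the blocks their successors hit."""
    blocks = {}
    for q in states:                      # initial partition: by label
        blocks.setdefault(label(q), set()).add(q)
    partition = list(blocks.values())
    while True:
        index = {q: i for i, B in enumerate(partition) for q in B}
        refined = []
        for B in partition:
            groups = {}
            for q in B:                   # signature: set of blocks reached
                sig = frozenset(index[t] for t in succ(q))
                groups.setdefault(sig, set()).add(q)
            refined.extend(groups.values())
        if len(refined) == len(partition):  # no block split: fixpoint
            return partition
        partition = refined

# Two branches with identical behavior collapse into one class.
succ = {0: {1, 2}, 1: {3}, 2: {3}, 3: set()}
print(bisimulation_classes({0, 1, 2, 3}, succ.__getitem__,
                           lambda q: "final" if q == 3 else "other"))
```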
Item: Büchi Automata as Specifications for Reactive Systems (2013-06-05)
Authors: Fogarty, Seth; Vardi, Moshe Y.; Cooper, Keith D.; Nakhleh, Luay K.; Simar, Ray
Computation is employed to incredible success in a massive variety of applications, and yet it is difficult to formally state what our computations are. Finding a way to model computations is not only valuable to understanding them, but central to automatic manipulations and formal verification. Often the most interesting computations are not functions with inputs and outputs, but ongoing systems that continuously react to user input. In the automata-theoretic approach, computations are modeled as words, each a sequence of letters representing a trace of a computation. Each automaton accepts a set of words, called its language. To model reactive computation, we use Büchi automata: automata that operate over infinite words. Although the computations we are modeling are not infinite, they are unbounded, and we are interested in their ongoing properties. For thirty years, Büchi automata have been recognized as the right model for reactive computations. In order to formally verify computations, however, we must also be able to create specifications that embody the properties we want to prove these systems possess. To date, challenging algorithmic problems have prevented Büchi automata from being used as specifications. I address two challenges to the use of Büchi automata as specifications in formal verification. The first, complementation, is required to check program adherence to a specification. The second, determinization, is used in domains such as synthesis, probabilistic verification, and module checking. I present both an empirical analysis of existing complementation constructions and a new theoretical contribution that provides more deterministic complementation and a full determinization construction.

Item: Büchi containment and size-change termination (2009)
Authors: Fogarty, Seth; Vardi, Moshe Y.
We compare tools for complementing nondeterministic Büchi automata with a recent termination-analysis algorithm. Complementation of Büchi automata is a well-explored problem in program verification. Early solutions using a Ramsey-based combinatorial argument have been supplanted by rank-based constructions with exponentially better bounds. In 2001 Lee et al. presented the size-change termination (SCT) problem, along with both a reduction to Büchi automata and a Ramsey-based algorithm. This algorithm strongly resembles the initial complementation constructions for Büchi automata, which leads us to wonder whether the theoretical gains in efficiency are mirrored in empirical performance. We prove the SCT algorithm is a specialized realization of the Ramsey-based complementation construction. Doing so allows us to generalize SCT solvers to handle Büchi automata. We experimentally demonstrate that, surprisingly, Ramsey-based approaches are superior on the domain of SCT problems, while rank-based approaches dominate automata universality tests. This reveals several interesting properties of the problem spaces and both approaches.
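The Ramsey-based SCT algorithm mentioned above can be stated in a few lines: close the set of size-change graphs under composition, then check that every idempotent graph in the closure has a strictly decreasing self-edge. A small illustrative encoding, assuming graphs are sets of (source, kind, target) edges with kind ">" or ">=":

```python
from itertools import product

def compose(g1, g2):
    """Compose two size-change graphs; a composed edge is strict (">")
    if either of the edges it goes through is strict."""
    return frozenset((a, ">" if ">" in (k1, k2) else ">=", c)
                     for (a, k1, b), (b2, k2, c) in product(g1, g2)
                     if b == b2)

def size_change_terminates(graphs):
    closure = {frozenset(g) for g in graphs}
    while True:                 # composition closure (Ramsey-style)
        new = {compose(g, h) for g in closure for h in closure} - closure
        if not new:
            break
        closure |= new
    # SCT criterion: every idempotent graph has a strict self-edge.
    return all(any(a == c and k == ">" for (a, k, c) in g)
               for g in closure if compose(g, g) == g)

# Size-change graph of the recursive call f(x, y) -> f(y, x - 1).
g = {("y", ">=", "x"), ("x", ">", "y")}
print(size_change_terminates([g]))   # True: the recursion terminates
```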
Item: Comparator automata in quantitative verification (EPI Sciences, 2022)
Authors: Vardi, Moshe Y.; Chaudhuri, Swarat; Bansal, Suguman
The notion of comparison between system runs is fundamental in formal verification. This concept is implicitly present in the verification of qualitative systems, and is more pronounced in the verification of quantitative systems. In this work, we identify a novel mode of comparison in quantitative systems: the online comparison of the aggregate values of two sequences of quantitative weights. This notion is embodied by comparator automata (comparators, in short), a new class of automata that read two infinite sequences of weights synchronously and relate their aggregate values. We show that aggregate functions that can be represented with a Büchi automaton result in comparators that are finite-state and accept by the Büchi condition as well. Such ω-regular comparators further lead to generic algorithms for a number of well-studied problems, including the quantitative inclusion and winning strategies in quantitative graph games with incomplete information, as well as related non-decision problems, such as obtaining a finite representation of all counterexamples in the quantitative inclusion problem. We study comparators for two aggregate functions: discounted-sum and limit-average. We prove that the discounted-sum comparator is ω-regular iff the discount factor is an integer. Not every aggregate function, however, has an ω-regular comparator. Specifically, we show that the language of sequence-pairs for which limit-average aggregates exist is neither ω-regular nor ω-context-free. Given this result, we introduce the notion of prefix-average as a relaxation of limit-average aggregation, and show that it admits ω-context-free comparators, i.e., comparators expressed by Büchi pushdown automata.
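A worked intuition for why an integer discount factor gives a finite-state comparator: the scaled running difference s_k = d^k (DS_k(A) − DS_k(B)) obeys s_k = d·s_{k−1} + (a_k − b_k), stays an integer, and once |s_k| exceeds μ/(d−1) (μ bounding the weight difference) the eventual comparison can never flip, so only finitely many values of s matter. A sketch of the resulting online comparator, illustrative rather than the paper's construction:

```python
class DiscountedSumComparator:
    """Online comparator for discounted sums with integer discount
    factor d and integer weight differences bounded by mu. Tracks
    s_k = d**k * (DS_k(A) - DS_k(B)); once |s| > mu/(d-1) the verdict
    is fixed, so only finitely many states are ever needed."""

    def __init__(self, d, mu):
        self.d, self.s = d, 0
        self.bound = mu / (d - 1)

    def step(self, a, b):
        self.s = self.d * self.s + (a - b)   # integer-valued update
        if self.s > self.bound:
            return "DS(A) > DS(B)"
        if self.s < -self.bound:
            return "DS(A) < DS(B)"
        return "undecided"

comp = DiscountedSumComparator(d=2, mu=3)
for a, b in [(1, 0), (0, 1), (3, 0)]:
    verdict = comp.step(a, b)
print(verdict)   # decided on a finite prefix: "DS(A) > DS(B)"
```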
Item: Complexity and structural heuristics for propositional and quantified satisfiability (2007)
Authors: Pan, Guoqiang; Vardi, Moshe Y.
Decision procedures for various logics are used as general-purpose solvers in computer science. A particularly popular choice is propositional logic, which is simultaneously powerful enough to model problems in many application domains, including formal verification and planning, while at the same time simple enough to be efficiently solved in many practical cases. There is also recent interest in using QBF, an extension of propositional logic, as a modeling language in a similar fashion. The hope is that QBF, being a more powerful language, can compactly encode, and in turn be used to solve, a larger range of applications. Still, propositional logic and QBF are respectively complete for the complexity classes NP and PSPACE; thus, both can be theoretically considered intractable. A popular hypothesis is that real-world problems contain underlying structure that can be exploited by the decision procedures. In this dissertation, we study the impact of structural constraints (in the form of bounded width) and heuristics on the performance of propositional and QBF decision procedures. The results presented in this dissertation can be seen as a contrast showing how bounded width impacts propositional and quantified problems differently. Starting with a size bound on BDDs under bounded width, we proceed to compare symbolic decision procedures against the standard DPLL search-based approach for propositional logic, and to compare different width-based heuristics for the symbolic approaches. In general, symbolic approaches for propositional satisfiability are only competitive for a small range of problems, and the theoretical tractability of the bounded-width case rarely applies in practice. The picture is very different for quantified satisfiability, however. We start with a series of "intractability in tractability" results which show that although QBF with constant width and alternation depth is tractable, there is an inherent non-elementary blowup in the width and alternation depth, such that a width bound slightly above constant leads to intractability. To contrast this theoretical intractability, we apply structural heuristics to a symbolic decision procedure for QBF and show that symbolic approaches complement search-based approaches quite well for QBF.

Item: Constrained Counting and Sampling: Bridging the Gap Between Theory and Practice (2017-09-29)
Authors: Meel, Kuldeep Singh; Chakraborty, Supratik; Chaudhuri, Swarat; Duenas-Osorio, Leonardo; Seshia, Sanjit A.; Vardi, Moshe Y.
Constrained counting and sampling are two fundamental problems in computer science with numerous applications, including network reliability, privacy, probabilistic reasoning, and constrained-random verification. In constrained counting, the task is to compute the total weight, subject to a given weighting function, of the set of solutions of the given constraints. In constrained sampling, the task is to sample randomly, subject to a given weighting function, from the set of solutions to a set of given constraints. Consequently, constrained counting and sampling have been subject to intense theoretical and empirical investigation over the years. Prior work, however, offered either heuristic techniques with poor guarantees of accuracy or approaches with proven guarantees but poor performance in practice. In this thesis, we introduce a novel hashing-based algorithmic framework for constrained sampling and counting that combines the classical algorithmic technique of universal hashing with the dramatic progress made in Boolean reasoning solvers, in particular SAT and SMT, over the past two decades. By exploiting the connection between definability of formulas and the variance of the distribution of solutions in a cell defined by 3-universal hash functions, we introduced an algorithmic technique, MIS, that reduced the size of the XOR constraints employed in the underlying universal hash functions by as much as two orders of magnitude. The resulting frameworks for counting (ScalApproxMC) and sampling (UniGen) can handle formulas with up to a million variables, a significant boost from prior state-of-the-art tools' capability of handling a few hundred variables. If the initial set of constraints is expressed in Disjunctive Normal Form (DNF), ScalApproxMC is the only known Fully Polynomial Randomized Approximation Scheme (FPRAS) that does not involve Monte Carlo steps. We demonstrate the utility of the above techniques on various real applications, including probabilistic inference, design verification, and estimating the reliability of critical infrastructure networks during natural disasters. The high parallelizability of our approach opens up new directions for the development of artificial intelligence tools that can effectively leverage high-performance computing resources.
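To see the hashing idea behind UniGen-style sampling, here is a toy near-uniform sampler: random XOR constraints split the solution space into roughly equal cells, one cell is enumerated, and a solution is drawn uniformly from it. The formula, parameters, and helper names are hypothetical stand-ins for the SAT-solver-backed machinery:

```python
import itertools, random

N = 10
def satisfies(bits):                    # hypothetical SAT-oracle stand-in
    return sum(bits) in (3, 4)          # 330 solutions over 10 variables

def in_cell(bits, xors):
    return all(sum(bits[i] for i in xs) % 2 == p for xs, p in xors)

def sample(m=5, retries=20):
    """Pick a random cell via m XOR constraints (~2**m cells),
    enumerate it, and return one of its members uniformly."""
    for _ in range(retries):
        xors = [([i for i in range(N) if random.random() < 0.5],
                 random.randint(0, 1)) for _ in range(m)]
        cell = [b for b in itertools.product((False, True), repeat=N)
                if satisfies(b) and in_cell(b, xors)]
        if cell:                        # small cell: uniform choice inside
            return random.choice(cell)
    return None

random.seed(1)
print(sample())
```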
Item: Dynamic Assertion-Based Verification for SystemC (2011)
Authors: Tabakov, Deian; Vardi, Moshe Y.
SystemC has emerged as a de facto standard modeling language for hardware and embedded systems. However, the current standard does not provide support for temporal specifications. Specifically, SystemC lacks a mechanism for sampling the state of the model at different temporal resolutions, for observing the internal state of modules, and for integrating monitors efficiently into the model's execution. This work presents a novel framework for specifying and efficiently monitoring temporal assertions of SystemC models that removes these restrictions. It introduces new specification-language primitives that (1) expose the inner state of the SystemC kernel in a principled way, (2) allow very fine control over the temporal resolution, and (3) allow sampling at arbitrary locations in the user code. An efficient modular monitoring framework presented here allows the integration of monitors into the execution of the model, while at the same time incurring low overhead and allowing easy adoption. Instrumentation of the user code is automated using aspect-oriented programming techniques, thereby allowing the integration of user-code-level sample points into the monitoring framework. While most related approaches optimize the size of the monitors, this work focuses on minimizing the runtime overhead of the monitors. Different encoding configurations are identified and evaluated empirically using monitors synthesized from a large benchmark of random and pattern temporal specifications. The framework and approaches described in this dissertation allow the adoption of assertion-based verification for SystemC models written at various levels of abstraction, from system level to register-transfer level. An advantage of this work is that many existing specification languages can be adapted to use the specification primitives described here, and the framework can easily be integrated into existing implementations of SystemC.

Item: Eliminating incoherence from subjective estimates of chance (2003)
Authors: Tsavachidis, Spiridon; Vardi, Moshe Y.
Human expertise is a significant source of information about environments with inherent uncertainty. However, it is well documented that subjective estimates of chance tend to violate the mathematical axioms of probability; that is, they are incoherent. This fact makes the use of such estimates problematic for statistical inference, decision analysis, economic modeling, and the aggregation of expert opinions. In order for subjective probability estimates to be used in a correct and meaningful way, they must be reconstructed so that they are coherent. The proposed algorithms for coherent reconstruction are based on heuristic search methods, namely Genetic Algorithms and Simulated Annealing. These algorithms are combined with efficient data structures that compactly represent probability distributions. The reconstructed estimates are coherent and close to the initial judgments with respect to some distance measure, maintaining the insight of the expert. Empirical studies show that the coherent approximations are more stochastically accurate than the original subjective estimates.

Item: Experimental evaluation of explicit and symbolic automata-theoretic algorithms (2006)
Authors: Tabakov, Deian; Vardi, Moshe Y.
The automata-theoretic approach to the problem of program verification requires efficient minimization and complementation of nondeterministic finite automata. This work presents a direct empirical comparison of well-known automata minimization algorithms, and also of a symbolic and an explicit approach to complementing automata. I propose a probabilistic framework for testing the performance of automata-theoretic algorithms, and use it to compare empirically Brzozowski's and Hopcroft's minimization algorithms. While Hopcroft's algorithm has better overall performance, the experimental results show that Brzozowski's algorithm performs better for "high-density" automata. In this work I also analyze complementation by considering automaton universality as a model-checking problem. A novel encoding presented here allows this problem to be solved symbolically via a model checker. I compare the performance of this approach to that of the standard explicit algorithm, which is based on the subset construction, and show that the explicit approach unexpectedly performs an order of magnitude better.
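Brzozowski's minimization algorithm, one side of the comparison above, is strikingly short: reverse the automaton, determinize, reverse, and determinize again. A compact sketch, with the automaton representation chosen here as an assumption:

```python
from itertools import chain

ALPHABET = "ab"   # assumed alphabet; an NFA is (starts, delta, finals),
                  # with delta[(state, letter)] = set of successor states

def reverse(nfa):
    starts, delta, finals = nfa
    rdelta = {}
    for (q, a), targets in delta.items():
        for t in targets:
            rdelta.setdefault((t, a), set()).add(q)
    return (set(finals), rdelta, set(starts))

def determinize(nfa):
    """Subset construction, producing a DFA in the same format."""
    starts, delta, finals = nfa
    start, seen, todo = frozenset(starts), set(), [frozenset(starts)]
    ddelta, dfinals = {}, set()
    while todo:
        S = todo.pop()
        if S in seen:
            continue
        seen.add(S)
        if S & finals:
            dfinals.add(S)
        for a in ALPHABET:
            T = frozenset(chain.from_iterable(delta.get((q, a), ())
                                              for q in S))
            ddelta[(S, a)] = {T}
            todo.append(T)
    return ({start}, ddelta, dfinals)

def brzozowski(nfa):
    return determinize(reverse(determinize(reverse(nfa))))

# NFA for words over {a, b} ending in "ab"; the minimal DFA has 3 states.
nfa = ({0}, {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}}, {2})
starts, delta, finals = brzozowski(nfa)
print(len({q for (q, _) in delta}))   # 3
```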
Item: Explicit or Symbolic Translation of Linear Temporal Logic to Automata (2013-07-24)
Authors: Rozier, Kristin Yvonne; Vardi, Moshe Y.; Kavraki, Lydia E.; Varman, Peter J.
Formal verification techniques are growing increasingly vital for the development of safety-critical software and hardware in practice. Techniques such as requirements-based design and model checking for system verification have been successfully used to verify systems for air traffic control, airplane separation assurance, autopilots, CPU logic designs, life support, medical equipment, and other functions that ensure human safety. Formal behavioral specifications written early in the system-design process and communicated across all design phases increase the efficiency, consistency, and quality of the system under development. We argue that to prevent the introduction of design or verification errors, it is crucial to test specifications for satisfiability. We advocate for the adoption of a new sanity check via satisfiability checking for property assurance. Our focus here is on specifications expressed in Linear Temporal Logic (LTL). We demonstrate that LTL satisfiability checking reduces to model checking, and that satisfiability checking of the specification, its complement, and the conjunction of all properties should be performed as a first step in LTL model checking. We report on an experimental investigation of LTL satisfiability checking. We introduce a large set of rigorous benchmarks to enable objective evaluation of LTL-to-automaton algorithms in terms of scalability, performance, correctness, and size of the automata produced. For explicit model checking, we use the Spin model checker; we tested all LTL-to-explicit-automaton translation tools that were publicly available when we conducted our study. For symbolic model checking, we use CadenceSMV, NuSMV, and SAL-SMC both for LTL-to-symbolic-automaton translation and to perform the satisfiability check. Our experiments result in two major findings. First, scalability, correctness, and other debilitating performance issues afflict most LTL translation tools. Second, for LTL satisfiability checking, the symbolic approach is clearly superior to the explicit approach. Ironically, the explicit approach to LTL-to-automata had been heavily studied, while only one algorithm existed for LTL-to-symbolic automata. Since 1994, there had been essentially no new progress in encoding symbolic automata for BDD-based analysis. Therefore, we introduce a set of 30 symbolic automata encodings. The set consists of novel combinations of existing constructs, such as different LTL formula normal forms, with a novel transition-labeled symbolic automaton form, a new way to encode transitions, and new BDD variable orders based on algorithms for tree decomposition of graphs. An extensive set of experiments demonstrates that these encodings translate to significant, sometimes exponential, improvement over the current standard encoding for symbolic LTL satisfiability checking. Building upon these ideas, we return to the explicit automata domain and focus on the most common type of specification used in industrial practice: safety properties. We show that we can exploit the inherent determinism of safety properties to create a set of 26 explicit automata encodings comprising novel aspects including: state numbers versus state labels versus a state look-up table, finite versus infinite acceptance conditions, forward-looking versus backward-looking transition encodings, assignment-based versus BDD-based alphabet representation, state and transition minimization, edge abbreviation, trap-state elimination, and determinization either on-the-fly or up-front using the subset construction. We conduct an extensive experimental evaluation and identify an encoding that offers the best performance in explicit LTL model checking time and is consistently faster than the previous best explicit automaton encoding algorithm.
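The proposed sanity check is easy to reproduce with modern tooling. As a present-day illustration only (using the Spot library, which is not one of the tools evaluated in the study, and assuming its Python bindings are installed): translate the specification and its negation to automata and test emptiness.

```python
import spot  # assumption: Spot's Python bindings are available

def sanity_check(formula: str) -> str:
    """Satisfiability sanity check: translate the LTL formula and its
    negation to automata and test language emptiness."""
    if spot.translate(formula).is_empty():
        return "unsatisfiable: every system violates it"
    if spot.translate(f"!({formula})").is_empty():
        return "valid: every system vacuously satisfies it"
    return "satisfiable and falsifiable: a meaningful property"

print(sanity_check("G(req -> F grant)"))   # meaningful
print(sanity_check("(G a) && F !a"))       # unsatisfiable
```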
Item: Exploring Finite-Word Automata for Reactive Synthesis (2021-08-13)
Authors: Martinelli Tabajara, Lucas; Vardi, Moshe Y.
Formal verification can provide confidence in the correctness of a system by checking that its implementation satisfies a formal specification of its desired behavior. Yet, a system might have to be implemented and reimplemented many times before passing verification. Program synthesis, on the other hand, presents an alternative workflow in which the implementation is directly and algorithmically generated from the formal specification. One widely studied example is reactive synthesis, which aims to synthesize a reactive system from a specification in some form of temporal logic. So far, reactive synthesis has largely resisted practical implementation, not only because of the problem's 2EXPTIME worst-case complexity, but also because algorithms often rely on the manipulation of automata over infinite words, for which there are no known efficient algorithms. The goal of this thesis is to take steps towards bringing reactive synthesis to the realm of practical application by exploring the potential of synthesis algorithms based on automata over finite words. Not only are finite-word automata sufficient for many use cases of reactive synthesis (for example in robotics, where systems are built to perform finite tasks), but they also support algorithms that are far more efficient and amenable to implementation in practice than those for automata over infinite words. The work presented in this thesis demonstrates how specialized synthesis algorithms making use of automata over finite words perform significantly better in practice than general algorithms based on infinite-word automata, despite having the same theoretical complexity. It also explores how to improve the construction of such automata in a way that benefits synthesis algorithms. Finally, it shows how the algorithmic simplicity of finite-word automata allows the implementation, for the first time, of useful extensions of reactive synthesis that in the past were limited purely to the realm of theory, such as synthesis under partial observability, allowing us to identify significant differences between the theoretical analysis and the practical performance of the algorithms.
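The practical appeal of finite-word automata shows in the core synthesis step: once the specification is a DFA, the system's winning region is a simple backward fixpoint rather than a parity- or Büchi-game solver. A sketch under the assumption that the environment picks the input letter first and the system responds; the alphabets and example DFA are illustrative.

```python
from itertools import product

INPUTS, OUTPUTS = ("i0", "i1"), ("o0", "o1")   # assumed alphabets

def winning_region(states, delta, accepting):
    """Backward fixpoint on a DFA: a state is winning if, whatever
    input the environment picks, some output moves into the region."""
    win = set(accepting)
    changed = True
    while changed:
        changed = False
        for q in states - win:
            if all(any(delta[(q, i, o)] in win for o in OUTPUTS)
                   for i in INPUTS):
                win.add(q)
                changed = True
    return win

# Tiny DFA: from 0, output o1 always reaches 1; from 1, o0 reaches 2.
states = {0, 1, 2}
delta = {(q, i, o): q for q, i, o in product(states, INPUTS, OUTPUTS)}
for i in INPUTS:
    delta[(0, i, "o1")] = 1
    delta[(1, i, "o0")] = 2
print(winning_region(states, delta, {2}))   # {0, 1, 2}: the system wins
```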
Item: Fixpoint Logics, Relational Machines, and Computational Complexity (1996-10-11)
Authors: Abiteboul, Serge; Vardi, Moshe Y.; Vianu, Victor
We establish a general connection between fixpoint logic and complexity. On one side, we have fixpoint logic, parameterized by the choice of first-order operators (inflationary or noninflationary) and iteration constructs (deterministic, nondeterministic, or alternating). On the other side, we have the complexity classes between P and EXPTIME. Our parameterized fixpoint logics capture the complexity classes P, NP, PSPACE, and EXPTIME, but equality is achieved only over ordered structures. There is, however, an inherent mismatch between complexity and logic: while computational devices work on encodings of problems, logic is applied directly to the underlying mathematical structures. To overcome this mismatch, we use a theory of relational complexity, which bridges the gap between standard complexity and fixpoint logic. On one hand, we show that questions about containments among standard complexity classes can be translated to questions about containments among relational complexity classes. On the other hand, the expressive power of fixpoint logic can be precisely characterized in terms of relational complexity classes. This tight, three-way relationship among fixpoint logics, relational complexity, and standard complexity yields, in a uniform way, logical analogs to all containments among the complexity classes P, NP, PSPACE, and EXPTIME. The logical formulation shows that some of the most tantalizing questions in complexity theory boil down to a single question: the relative power of inflationary vs. noninflationary first-order operators.

Item: Iterative Temporal Motion Planning for Hybrid Systems in Partially Unknown Environments (ACM, 2013)
Authors: Maly, Matthew R.; Lahijanian, Morteza; Kavraki, Lydia E.; Kress-Gazit, Hadas; Vardi, Moshe Y.
This paper considers the problem of motion planning for a hybrid robotic system with complex and nonlinear dynamics in a partially unknown environment, given a temporal logic specification. We employ a multi-layered synergistic framework that can deal with general robot dynamics and combine it with an iterative planning strategy. Our work allows us to deal with unknown environmental restrictions only when they are discovered, and without the need to repeat the computation related to the temporal logic specification. In addition, we define a metric for the satisfaction of a specification. We use this metric to plan a trajectory that satisfies the specification as closely as possible in cases in which a discovered constraint in the environment renders the specification unsatisfiable. We demonstrate the efficacy of our framework on a simulation of a hybrid second-order car-like robot moving in an office environment with unknown obstacles. The results show that our framework is successful in generating a trajectory whose satisfaction measure of the specification is optimal. They also show that, when new obstacles are discovered, the reinitialization of our framework is computationally inexpensive.

Item: Linear Temporal Logic and Linear Dynamic Logic on Finite Traces (Association for Computing Machinery, 2013)
Authors: De Giacomo, Giuseppe; Vardi, Moshe Y.
In this paper we look into the assumption of interpreting LTL over finite traces. In particular, we show that LTLf, i.e., LTL under this assumption, is less expressive than it might appear at first sight, and that at essentially no computational cost one can make a significant increase in expressiveness while maintaining the same intuitiveness of LTLf. Indeed, we propose a logic, LDLf (Linear Dynamic Logic over finite traces), which borrows its syntax from Propositional Dynamic Logic (PDL) but is interpreted over finite traces. Satisfiability, validity, and logical implication (as well as model checking) for LDLf are PSPACE-complete, as for LTLf (and LTL).
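As a small companion to the finite-trace semantics discussed in the last item, here is a minimal recursive LTLf evaluator; the operator set and tuple-based syntax are a simplified assumption, not the paper's definitions. Note how strong next fails at the last position, the key finite-trace subtlety.

```python
def holds(phi, trace, i=0):
    """Evaluate an LTLf formula at position i of a finite trace
    (a list of sets of atomic propositions)."""
    op = phi[0]
    if op == "true":
        return True
    if op == "ap":
        return phi[1] in trace[i]
    if op == "not":
        return not holds(phi[1], trace, i)
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "X":    # strong next: false at the last position
        return i + 1 < len(trace) and holds(phi[1], trace, i + 1)
    if op == "U":    # until, witnessed within the finite trace
        return any(holds(phi[2], trace, k) and
                   all(holds(phi[1], trace, j) for j in range(i, k))
                   for k in range(i, len(trace)))
    raise ValueError(op)

trace = [{"req"}, set(), {"grant"}]
eventually_grant = ("U", ("true",), ("ap", "grant"))   # F grant
print(holds(eventually_grant, trace))                  # True
print(holds(("X", ("true",)), trace, len(trace) - 1))  # False at the end
```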