Browsing by Author "Burke, Michael G."
Now showing 1 - 5 of 5
Item: Automatic Detection of Inter-application Permission Leaks in Android Applications (2013-01-23)
Burke, Michael G.; Guarnieri, Salvatore; Pistoia, Marco; Sarkar, Vivek; Sbîrlea, Dragoș

Due to their growing prevalence, smartphones can access an increasing amount of sensitive user information. To better protect this information, modern mobile operating systems provide permission-based security, which restricts applications to accessing only a clearly defined subset of system APIs and user data. The Android operating system builds upon already successful permission systems, but complements them by allowing application components to be reused within and across applications through a single communication mechanism, called the Intent mechanism. In this paper we identify three types of inter-application Intent-based attacks that rely on information flows in applications to obtain unauthorized access to permission-protected information. Two of these attacks are of previously known types: confused deputy and permission collusion. The third attack, private activity invocation, is new and relies on difficult-to-detect misconfigurations that arise because Intents can be used for both intra-application and inter-application communication. Such misconfigured applications allow protected information meant for intra-application communication to leak into unauthorized applications. This breaks a fundamental security guarantee of permission systems: that applications can access information only if they own the corresponding permission. We formulate the detection of the vulnerabilities on which these attacks rely as a rule-based static taint propagation problem. We show that the rules describing permission-protected information can be automatically generated through static analysis of the Android libraries, an improvement over previous work. To test our approach we built Permission Flow, a tool that can reliably and accurately identify the presence of vulnerable information flows in Android applications. Our automated analysis of popular applications found that 56% of the top 313 Android applications actively use inter-component information flows; by ensuring the absence of inter-application permission leaks, the proposed analysis would be highly beneficial to the Android ecosystem. Of the tested applications, Permission Flow found four exploitable vulnerabilities.

Item: Parallel Flow-Sensitive Points-to Analysis (2017-02-01)
Zhao, Jisheng; Burke, Michael G.; Sarkar, Vivek

Points-to analysis is a fundamental requirement for many program analyses, optimizations, and debugging/verification tools. However, finding an effective balance between performance, scalability, and precision in points-to analysis remains a major challenge. Many flow-sensitive algorithms achieve a desirable level of precision but are impractical for use on large software. Likewise, many flow-insensitive algorithms scale to large software, but do so with major limitations on precision. Further, given recent multicore hardware trends, more attention needs to be paid to the use of parallelism for improved performance. In this paper, we introduce a new pointer analysis based on Pointer SSA form (an extension of Array SSA form), which is flow-sensitive, memory efficient, and can readily be parallelized. It decomposes the points-to analysis into fine-grained units of work that can be easily implemented in an asynchronous task-parallel programming model. More specifically, our contributions are as follows: 1. a Pointer SSA (PSSA)-based scalable interprocedural flow-sensitive, context-insensitive pointer analysis (PSSAPT) that produces both points-to and heap def-use information, and supports the task-parallel programming model; 2. a preliminary evaluation, including scalability and precision, of the implementation of parallel PSSAPT using a lightweight task-parallel library. Our experimental results with 6 real-world applications (including the 2.2 MLOC Tizen OS framework) on a 12-core machine show an average speedup of 4.45 and a maximum speedup of 7.35. Our evaluation also includes precision results for an inlinable indirect call analysis.

Item: The Concurrent Collections Programming Model (2010-01-04)
Budimlić, Zoran; Burke, Michael G.; Cavé, Vincent; Knobe, Kathleen; Lowney, Geoff; Palsberg, Jens; Peixotto, David; Sarkar, Vivek; Schlimbach, Frank; Taşırlar, Sağnak

We introduce the Concurrent Collections (CnC) programming model. In this model, programs are written in terms of high-level operations. These operations are partially ordered according to only their semantic constraints. These partial orderings correspond to data dependences and control dependences. The role of the domain expert, whose interest and expertise is in the application domain, and the role of the tuning expert, whose interest and expertise is in performance on a specific architecture, can be viewed as separate concerns. The CnC programming model provides a high-level specification that can be used as a common language between the two experts, raising the level of their discourse. The model facilitates a significant degree of separation, which simplifies the task of the domain expert, who can focus on the application rather than on scheduling concerns and mapping to the target architecture. This separation also simplifies the work of the tuning expert, who is given the maximum possible freedom to map the computation onto the target architecture and is not required to understand the details of the domain. However, the domain and tuning expert may still be the same person. We formally describe the execution semantics of CnC and prove that this model guarantees deterministic computation. We evaluate the performance of CnC implementations on several applications and show that CnC can effectively exploit several different kinds of parallelism and offer performance and scalability equivalent to or better than that offered by current low-level parallel programming models. Further, with respect to ease of programming, we discuss the tradeoffs between CnC and other parallel programming models on these applications.

Item: The Concurrent Collections Programming Model (2010-12-16)
Burke, Michael G.; Knobe, Kathleen; Newton, Ryan; Sarkar, Vivek

Parallel computing has been firmly established since the 1980s as the primary means of achieving high performance from supercomputers. Concurrent Collections (CnC) was developed to address the need for making parallel programming accessible to non-professional programmers. One approach that has historically addressed this problem is the creation of domain-specific languages (DSLs) that hide the details of parallelism when programming for a specific application domain. In contrast, CnC is a model for adding parallelism to any host language (which is typically serial and may be a DSL). In this approach, the parallel implementation details of the application are hidden from the domain expert, and are instead addressed separately by users (and tools) that serve the role of tuning experts. The basic concepts of CnC are widely applicable. Its premise is that domain experts can identify the intrinsic data dependences and control dependences in an application, irrespective of lower-level implementation choices. The dependences are specified in a CnC graph for an application. Parallelism is implicit in a CnC graph. A CnC graph has a deterministic semantics, in that all executions are guaranteed to produce the same output state for the same input. This deterministic semantics and the separation of concerns between the domain and tuning experts are the primary characteristics that differentiate CnC from other parallel programming models.

Item: The Platform-Aware Compilation Environment: Preliminary Design Document (2010-09-15)
Cooper, Keith D.; Mellor-Crummey, John; Merényi, Erzsébet; Sadayappan, P.; Sarkar, Vivek; Torczon, Linda; Burke, Michael G.

The Platform-Aware Compilation Environment (PACE) is an ambitious attempt to construct a portable compiler that produces code capable of achieving high levels of performance on new architectures. The key strategies in PACE are the design and development of an optimizer and runtime system that are parameterized by system characteristics, the automatic measurement of those characteristics, the extensive use of measured performance data to help drive optimization, and the use of machine learning to improve the long-term effectiveness of the compiler and runtime system.
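The rule-based static taint propagation formulated in the first abstract above can be illustrated with a minimal sketch. This is not the authors' Permission Flow implementation; the graph, rule, and component names below are all hypothetical, invented only to show the shape of the analysis: permission-protected API results are taint sources, taint flows along Intent edges, and a leak is flagged when taint reaches a component that other applications can invoke.

```python
# Hypothetical sketch of rule-based taint propagation over a toy
# inter-component flow graph. All names are invented for illustration.

def propagate_taint(edges, sources):
    """Worklist propagation: mark every node reachable from a tainted source."""
    tainted = set(sources)
    worklist = list(sources)
    while worklist:
        node = worklist.pop()
        for src, dst in edges:
            if src == node and dst not in tainted:
                tainted.add(dst)
                worklist.append(dst)
    return tainted

# Toy flow graph: edges model data flowing between components via Intents.
edges = [
    ("getDeviceId", "MainActivity"),           # permission-protected source
    ("MainActivity", "InternalActivity"),      # intra-application Intent
    ("InternalActivity", "ExportedActivity"),  # misconfigured: exported
]

# Rule: values from permission-protected APIs are taint sources.
tainted = propagate_taint(edges, {"getDeviceId"})

# A leak is a tainted node that another application can invoke.
leaks = tainted & {"ExportedActivity"}
print(sorted(leaks))  # → ['ExportedActivity']
```

A real analysis would, as the abstract describes, derive the source rules automatically from the Android libraries rather than listing them by hand.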
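The CnC abstracts above turn on one idea: if the domain expert specifies only data and control dependences, the tuning expert (or runtime) may schedule steps in any order without changing the result. The following minimal sketch, with invented step names and a thread pool standing in for a CnC runtime, shows that determinism property: the schedule of the `square` steps is left entirely to the executor, yet the dependences alone fix the output.

```python
# Hypothetical sketch of the CnC separation of concerns: steps are related
# only by data dependences; scheduling is delegated to a runtime (here a
# thread pool). Step names and structure are invented for illustration.

from concurrent.futures import ThreadPoolExecutor

def square(x):        # step: consumes an input item, produces a square
    return x * x

def total(values):    # step: depends on all squares, produces the output
    return sum(values)

with ThreadPoolExecutor(max_workers=4) as pool:
    # The pool may run the square steps in any order and in parallel;
    # the result is deterministic because only the dependences constrain it.
    squares = list(pool.map(square, [1, 2, 3, 4]))
    result = total(squares)

print(result)  # → 30 on every execution, regardless of schedule
```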