LADC 2009 Program
8:30-9:20 Technical Session 1 - Distributed Algorithms / Mobile Ad Hoc Networks
Chair – Raul Ceretta Nunes
An Asynchronous Protocol for Group Membership Management in Mobile Ad Hoc Networks
Solving Consensus in Mobile Ad Hoc Networks Using a Connected Dominating Set
9:20-10:30 International Keynote
Assessment of Fault-Tolerant and Dependable Computing:
11:00-11:50 Technical Session 2 - Semantics and Fault/Failure Detection
Chair – Sérgio Gorender
A Proposal for an Autonomic Failure Detector Using Control Engineering
Integrating a Consistent Failure Semantics into the Asynchronous Communication of Distributed Objects
11:50-12:15 Technical Session 3 - Middleware and Web Services
Chair – Raul Ceretta Nunes
Client-Transparent Failover in Web Services: An Extension to WS-Addressing
14:00-14:50 Technical Session 4 - Testing Tools
Chair – Marinho Barcellos
A New Strategy for Comparison-Based Fault Diagnosis
Evaluating Different Strategies for Reducing the Cost of Mutation Testing
14:50-15:40 Technical Session 5 - Fault Injection
Chair – Fernando Dotti
Distributed Injection of Communication Faults with Support for Experiment Control and Coordination
Increasing the Expressiveness of Communication Faultload Descriptions for Testing with Fault Injectors
15:40-16:30 Technical Session 6 - Intrusion Tolerance
Chair – Taisy Silva Weber
Resisting Impersonation Attacks in Public-Key Management for Mobile Ad Hoc Networks: Virtual Public-Key Management System
An Investigation of Java Fault Operators Derived from a Field Data Study on Java Software Faults
17:00-17:25 Technical Session 7 - Real Time
Chair - Eliane Martins
Deriving a Fault Resilience Metric for Real-Time Systems
17:30 Meeting of the SBC Special Committee on Fault-Tolerant Systems.
T1. From Object Replication to Database Replication
Fernando Pedone (University of Lugano) and Rui Oliveira (University of Minho)
In this tutorial, we intend to review some of the work done in the distributed systems community on database replication, focusing on group communication-based protocols. Designing database replication systems and algorithms based on group communication leads to modular approaches, in which synchronization among servers, for example, is encapsulated in the group communication primitives. As a result, reasoning about the correctness of such systems is simpler. In addition to discussing group communication-based database replication, the tutorial is intended to illustrate the use of group communication in the design of fault-tolerant distributed systems.
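As a concrete illustration of that modularity, here is a minimal sketch (not taken from the tutorial) of state-machine replication on top of a total-order broadcast primitive; the `group` endpoint and its `atomic_broadcast`/delivery callback are assumptions standing in for whatever group communication toolkit is actually used.

```python
# Sketch only: the group communication API below is hypothetical.
class ReplicatedDatabase:
    def __init__(self, group):
        self.group = group   # assumed group communication endpoint
        self.state = {}      # trivial key-value "database"

    def submit(self, txn):
        # Synchronization is delegated entirely to the primitive:
        # every replica delivers transactions in the same total order.
        self.group.atomic_broadcast(txn)

    def on_deliver(self, txn):
        # Called by the group layer on every replica, in delivery order.
        # Deterministic execution keeps replicas consistent without any
        # replica-to-replica locking.
        for key, value in txn.items():
            self.state[key] = value
```

The point of the design is that replicas never coordinate directly: agreeing on a delivery order is entirely the primitive's job, which is what makes correctness arguments modular.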
Fernando Pedone received his Ph.D. degree in computer science from Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, in 1999. Before becoming Assistant and Associate Professor at the University of Lugano, he worked as a researcher at the Hewlett-Packard Laboratories in Palo Alto, California, and as a senior researcher at EPFL. His professional interests include the theory and practice of distributed systems and distributed data management systems. Fernando Pedone has authored more than 50 scientific papers. In November 2007 he co-chaired the Monte Verità seminar “A 30-year perspective on replication”.
Rui Oliveira graduated (1991) in Electrical and Computer Engineering from Universidade do Porto, holds a Master's degree (1994) in Computer Science from Universidade do Minho and a PhD (2000) from the Swiss Federal Institute of Technology in Lausanne. He is an Associate Professor in Computer Science at Universidade do Minho, teaching dependable distributed systems in Master's and Doctoral programmes. He currently leads the Computer Science and Technology Center of Universidade do Minho. Rui Oliveira was the Project Manager of the GORDA project, a European research project devoted to consistent database replication systems. His current research interests include fault-tolerant and large-scale distributed systems, distributed data management and peer-to-peer computing. He is a member of ACM and IEEE.
T2. Experimental Methods for Computer Science Research
Roy Maxion, Carnegie Mellon University
Experimental methods comprise the set of skills and techniques for minimizing error in acquiring and communicating measurements.
The first part of the tutorial will cover a range of methodological details that are critical to good experimentation. This will be done in the context of writing a conference or journal paper that includes such details. This part of the tutorial is geared particularly toward students, but professionals may benefit from the different perspective that the tutorial offers.
Roy Maxion is a Research Professor in the Computer Science and Machine Learning Departments at Carnegie Mellon University. He is also director of the CMU Dependable Systems Laboratory, whose range of activities includes computer security, biometric authentication, insider/masquerader detection, usability, and keystroke forensics, in addition to the more general issues of hardware/software system reliability and information assurance. A primary interest and concern is the integrity of experimental methodologies. Dr. Maxion teaches a course on Research Methods for Experimental Computer Science.
Dr. Maxion has been program chair of the International Conference on Dependable Systems and Networks, a member of the executive board of the IEEE Technical Committee on Fault Tolerance, the United States Defense Science Board, the European Commission AMBER advisory board, and other professional organizations. He has consulted for the US Department of State as well as for numerous industry and government bodies. He is on the editorial boards of the IEEE Transactions on Dependable and Secure Computing, the IEEE Transactions on Information Forensics and Security, and the International Journal of Security and Networks. Dr. Maxion is a Fellow of the IEEE.
IT1. Dependability in the Time of Forensics
Roy Maxion, Carnegie Mellon University
More and more, the artifacts of our trade -- computers and the software that drives them -- are ending up in the courtroom, not necessarily as plaintiffs or defendants, but as aids in decision making or claims of effectiveness for triers of fact. Some examples of artifacts are biometric systems whose data are acquired and analyzed by computers; intrusion-detection systems that decide whether or not an attack occurred, and what its origins might be; and fault-detection systems that decide what went wrong, why, and what to do about it. Examples of claims of effectiveness might include reliability of software, how many bugs are in it, when it will be ready for release, and that it will work in exceptional conditions.
When digital forensic evidence is used to incriminate or exonerate real people and the artifacts of their trade, that evidence must be credible and valid. Even the courts have weighed in on how to establish the suitability of evidence introduced into legal proceedings. The Daubert case (United States Supreme Court, 1993) set a requirement that a technique (such as biometric authentication) cannot be used in court unless its error rates are known. Error rates can be used to judge the validity of evidence and the artifact that produced it. Establishing error rates and artifactual validity is a tall standard, and can be quite hard to achieve. Even when reporting experimental results in our own conferences and journals, we sometimes fall short of that standard, jeopardizing our own science.
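As a back-of-the-envelope illustration of that requirement, the sketch below (our example, not from the talk) computes the two error rates usually reported for a binary decision procedure such as biometric authentication, given a labeled evaluation run.

```python
def error_rates(decisions):
    """decisions: list of (predicted_genuine, actually_genuine) booleans."""
    decisions = list(decisions)
    false_accepts = sum(1 for pred, truth in decisions if pred and not truth)
    false_rejects = sum(1 for pred, truth in decisions if not pred and truth)
    impostors = sum(1 for _, truth in decisions if not truth)
    genuines = sum(1 for _, truth in decisions if truth)
    far = false_accepts / impostors if impostors else 0.0  # false accept rate
    frr = false_rejects / genuines if genuines else 0.0    # false reject rate
    return far, frr
```

Known, honestly measured FAR and FRR figures are exactly the kind of quantified evidence the Daubert standard asks for.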
This talk will focus on validity as a procedural factor that can cast a claim into doubt or into certitude. A lack of experimental validity can prevent a result from generalizing beyond the strictures of a test procedure, or can completely demolish a claim. The talk will provide examples of experimental or procedural invalidities and how to avoid them, thereby improving experimental outcomes.
Among the hallmarks of experimentation, validity is the keystone that helps to meet the challenge of producing dependable evidence to support the claims we make. When legal proceedings can determine people's futures, dependability is of foremost importance in the time of forensics. The dependability community should lead the way. If they don't, who will?
IT2. Empirical Data-driven Modeling for Dependability Enhancement
Miroslaw Malek, Humboldt-Universität zu Berlin
We argue that three major “tyrants,” namely, complexity, time, and unpredictability continuously make dependability a permanently formidable challenge.
With current complexity levels, in addition to classical synthesis and analysis methods, we need to turn to empirical data-driven approaches, which require runtime monitoring, online measurement, online analysis, diagnosis, failure prediction and decision making to support recovery and nonstop computing and communication. Also, to better understand system behavior in the presence of faults, fault injection methods are equally relevant. We need to use the natural-science approach, similar to the one in physics or biology, which is based on observations and measurements, in order to confirm certain hypotheses, create relevant behavioral models and, ideally, derive laws or principles that relate the observed variables to the inputs.
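One minimal instance of this data-driven loop, assuming a single monitored variable and a linear trend (a deliberate simplification, not the speaker's actual method), is sketched below.

```python
from collections import deque

class TrendPredictor:
    """Predicts when a monitored resource (e.g. free memory) hits a threshold."""

    def __init__(self, window=60, threshold=0.0):
        self.samples = deque(maxlen=window)  # sliding window of (time, value)
        self.threshold = threshold           # value regarded as failure

    def observe(self, t, value):
        self.samples.append((t, value))

    def time_to_failure(self):
        # Least-squares slope over the window; None if no downward trend.
        n = len(self.samples)
        if n < 2:
            return None
        ts = [t for t, _ in self.samples]
        vs = [v for _, v in self.samples]
        mt, mv = sum(ts) / n, sum(vs) / n
        var = sum((t - mt) ** 2 for t in ts)
        if var == 0:
            return None
        slope = sum((t - mt) * (v - mv) for t, v in self.samples) / var
        if slope >= 0:
            return None
        return (self.threshold - vs[-1]) / slope  # time units until threshold
```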
The second case study illustrates how, by observation and measurement, a generator for realistic topologies of ad hoc networks has been developed. A number of topology generation algorithms for the simulation of wireless multihop networks have been proposed, but, as shown in the literature, most of the existing node placement models create topologies that are considerably different from the topologies of real networks. In order to address this issue, we have developed a novel node placement algorithm, NPART, that creates topologies resembling real ones and helps in resilience analysis.
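NPART's acceptance statistics are beyond the scope of a program note, but the toy sketch below shows the general accept/reject placement scheme that such generators refine with measurements taken from real networks; all parameters are illustrative, and this is not NPART itself.

```python
import math
import random

def generate_topology(n, area=1000.0, radio_range=150.0, tries=50):
    """Grow a unit-disk topology node by node, rejecting isolated placements."""
    nodes = [(random.uniform(0, area), random.uniform(0, area))]
    while len(nodes) < n:
        for _ in range(tries):
            x, y = random.uniform(0, area), random.uniform(0, area)
            # Accept the placement only if the new node can hear someone;
            # realistic generators replace this test with statistics
            # (e.g. degree distributions) measured on deployed networks.
            if any(math.hypot(x - px, y - py) <= radio_range
                   for px, py in nodes):
                nodes.append((x, y))
                break
        else:
            break  # give up if the area is too sparse to stay connected
    return nodes
```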
Finally, we argue why models derived from monitoring and measurement will gain in significance and impact, and list the major challenges for empirical research on dependability.
TS1. Dependability in Database Systems
Chair: Fernando Pedone (USI, Switzerland)
Benchmarking Untrustworthiness in DBMS Configurations
TS2. Algorithms and Methods for Dependable Computing
Chair - Emmanuelle Anceaume (CNRS, IRISA, France)
Adaptive Sabotage-Tolerant Scheduling for Peer-to-Peer Grids
Probabilistic Estimation of Network Size and Diameter
A Timer-free Fault Tolerant K-Mutual Exclusion Algorithm
TS3. Dependability and Security Benchmarking
Chair - Jean Arlat (LAAS-CNRS, France)
Using Dependability, Performance, Area and Energy Consumption Experimental Measures to Benchmark IP Cores
Appraisals based on Security Best Practices for Software Configurations
BitTorrent Needs Psychiatric Guarantees: Quantifying How Vulnerable BitTorrent Swarms Are to Sybil Attacks
TS4. Dependability of Software
Chair - Taisy Weber (UFRGS, Brazil)
Mapping Web-based Applications Failures to Faults
Comparative analysis on the impact of defensive programming techniques for safety-critical systems
Architectural-Based Validation of Fault-Tolerant Software
TS5. Design of Dependable Systems
Chair – Regina Moraes (UNICAMP, Brazil)
Implementing Retry - Featuring AOP
Structuring Specifications with Modes
TS6. Engineering Dependable Systems
Chair - Luigi Romano (UNINA, Napoli, Italy)
A Low-Cost On-Line Monitoring Mechanism for the FlexRay Communication Protocol
Dealing with Driver Failures in the Storage Stack
A Proof-carrying-code Infrastructure for Resources
The Student Forum
The Student Forum at LADC'2009 provides an opportunity for students currently working in the area of dependable and secure computing to present and discuss their research objectives, approaches and preliminary results. This year, eight student papers were selected for presentation at the Student Forum sessions, covering several fields of Dependable Computing. The Student Forum Co-Chairs would like to thank all the students who submitted papers this year, and the Organization Committees of both JEMS and LADC'2009 for their support.
SF1: Testing & Dependability
Checking Code Against Design Rules with Design Tests
Interoperability and Robustness Test Generation for Timed System Integration
Improving the Dependability of Tests Involving Asynchronous Operations
Fault Diagnosis in Computational Grids Through Automated Tests
SF2: Dependability of Complex Systems
Dependability Analysis of the Controller-Pilot Data Link Communications Application
Automock: Interaction-Based Mock Code Generation
Providing Security and a Consistent Failure Detection Semantic for Asynchronous Distributed Objects
Influence of the Computing System Configuration on the Tolerance Parameters
Chair – Rui Oliveira (U. Minho, Portugal)
FS1: Fast Abstracts I
Automated System Testing of Distributed Software using Virtual Environments
A Simple Approach to Automated Test Effort Estimation
Using Less Links to Improve Fault-Tolerant Aggregation
FS2: Fast Abstracts II
Is the Clustering Coefficient a Measure for Fault Tolerance?
Byzantine Failure Detection for Dynamic Distributed Systems
Influence of the Computing System Configuration on the Tolerance Parameters
Workshop on Dependability and Security of Peer-to-Peer Systems
Security Challenges in P2P Computing
Marinho Barcellos (Federal University of Rio Grande do Sul)
Time: 9:00 - 9:30
The field of P2P computing was conceived ten years ago, with the creation of decentralized, user-centered file sharing applications. Like the Web in the nineties, P2P applications soon reached great popularity, with millions of users. This led to the realization that new user-level protocols and distributed systems would be required to efficiently handle Internet-wide systems, which spurred a great deal of activity in the scientific community. In parallel, new applications started to emerge. As the first generation of P2P applications reaches maturity, we face the question of the current state of the art and the path ahead. Among the open challenges in P2P, security is a chief one: how can secure and dependable systems be engineered when hardly any assumption can be made about users and the underlying system? In this talk, I will summarize current investigations into the security aspects of P2P, and then attempt to lay out some of the trends.
Marinho P. Barcellos is a research collaborator and future Associate Professor at the Federal University of Rio Grande do Sul (UFRGS), Brazil. The current focus of his research lies in Peer-to-Peer systems and Security. He holds BSc and MSc degrees in Computer Science from Universidade Federal do Rio Grande do Sul (1989 and 1993, respectively), and a Ph.D. in Computer Science from the University of Newcastle upon Tyne (1998). Between 1998 and 2008, he was an Associate Professor at UNISINOS, Brazil, where he helped create and establish a Postgraduate Program in Computing. In 2003-2004, Prof. Barcellos worked on a European project with the University of Manchester and British Telecom Research Labs in the UK. In 2008-2009, he worked for PUC/RS and later as a visiting professor at UFRGS (funded by CNPq). Prof. Barcellos has published papers and chaired projects in the areas of Computer Networks and Distributed Systems, and has been awarded grants from bodies such as ACM, IEEE, CNPq and CAPES. He is vice-chair of the Special Interest Group on Security of the Brazilian Computing Society (2009-2010) and co-chair of SBRC 2010, the Brazilian Symposium on Computer Networks and Distributed Systems.
Long-term Digital Archiving Based on Selection of Repositories Over P2P Networks
Luis Carlos Erpen de Bona (Federal University of Parana)
Time: 9:30 - 10:00
The goal of digital archiving systems is to preserve large volumes of data that need to be stored safely for an indefinitely long period of time. Archiving systems can be built by replicating the information across multiple storage repositories consisting of conventional, low-cost computers. Peer-to-Peer (P2P) networks are natural candidates for organizing systems with these characteristics, since they are highly scalable for the distribution and retrieval of data. The main contribution of this work is the creation of a totally distributed P2P archiving system. In this system, the repositories are organized by a distributed hash table (DHT) and multiple hash functions are used as the replication mechanism. The proposed P2P digital archiving system motivates the definition of a data replication model. We propose a replication model in which a reliability metric is associated with each repository. Furthermore, each item (a piece of digital information) needs to be stored with a desired reliability that reflects the importance of the item. To ensure the desired reliability of an item, a set of repositories must be selected; three different strategies for determining this subset of repositories are also presented. We believe the proposed model and algorithms, combined with the scalability of structured P2P networks, are a promising approach to the construction of a fully distributed digital archiving system.
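Under this model, with independent repository failures, an item stored on a set S of repositories survives with probability 1 - ∏(1 - p_i). The sketch below shows one plausible selection strategy (a greedy one, used here purely as an illustration; the talk presents three specific strategies of its own).

```python
def select_repositories(reliabilities, target):
    """Pick the fewest repositories whose combined reliability meets `target`."""
    chosen, loss = [], 1.0   # loss = probability that every chosen replica fails
    for i, p in sorted(enumerate(reliabilities), key=lambda x: -x[1]):
        chosen.append(i)
        loss *= 1.0 - p
        if 1.0 - loss >= target:
            return chosen
    raise ValueError("target reliability unreachable with these repositories")

# e.g. select_repositories([0.9, 0.5, 0.8], target=0.98) -> [0, 2]
```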
Luis C. E. Bona obtained his PhD from the Federal University of Technology of Paraná (UTFPR) in 2006 and his M.Sc. degree in Computer Science from the Federal University of Paraná (UFPR). He is a professor at the Department of Informatics of the Federal University of Paraná. Bona has been doing research in Computer Science in the area of Distributed Systems since 2000. His research interests include Grid and Cloud Computing, Peer-to-Peer Systems, Digital Preservation and Open Source Software.
Looking for a model that characterizes the connectivity of self-organized dynamic distributed systems
Luciana Arantes (University of Paris 6)
Time: 10:00 - 10:30
Due to failures, disconnections, arrivals, departures, or the mobility of nodes, connections in self-organized dynamic distributed systems (e.g. MANETs, P2P networks, VANETs) change over time. The temporal variations in the network topology therefore imply that dynamic distributed systems cannot be viewed as a static connected graph over which end-to-end paths are established beforehand. A path between two nodes is in fact built dynamically over time. Another impact of the dynamics of these systems is that the lack of links between nodes partitions them into components, i.e., a dynamic distributed system should be seen as a partitionable system. We thus believe there is a need to define a more suitable and comprehensive distributed computing model that characterizes both the temporal connections and the partitionable nature of dynamic distributed systems, and on top of which well-known distributed algorithms could then be built. In this talk we discuss which points such a model should cover and we present some model propositions.
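One way to make "paths built over time" concrete, under a time-varying graph formalization (ours, not necessarily the one the talk proposes): each edge carries the instants at which it is up, and a journey, i.e. a time-respecting path, is found by an earliest-arrival search.

```python
import heapq

def journey_exists(edges, src, dst):
    """edges maps (u, v) to a sorted list of instants at which the link is up."""
    earliest = {src: 0}           # earliest known arrival time at each node
    heap = [(0, src)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == dst:
            return True
        for (a, b), times in edges.items():
            for x, y in ((a, b), (b, a)):     # links are bidirectional
                if x != u:
                    continue
                nxt = next((s for s in times if s >= t), None)  # wait for link
                if nxt is not None and nxt + 1 < earliest.get(y, float("inf")):
                    earliest[y] = nxt + 1     # crossing takes one time unit
                    heapq.heappush(heap, (nxt + 1, y))
    return False

# e.g. journey_exists({("a", "b"): [0], ("b", "c"): [3]}, "a", "c") -> True,
# even though no single instant has an end-to-end path from "a" to "c".
```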
Luciana Arantes received her Ph.D. in Computer Science from the Université Pierre et Marie Curie - Paris 6, France, in 2000. She is currently an Assistant Professor at Université Pierre et Marie Curie and conducts her research at LIP6, the Computer Science Laboratory of Paris 6. She is also a member of the INRIA/LIP6 Regal team. Her research interests include distributed algorithms, fault tolerance, grid computing, and dynamic distributed systems.
Dynamic Adaptation with Mutable Protocols
Rui Oliveira (U. Minho, Portugal)
Time: 11:00 - 11:30