DIMACS Workshop on Usable Privacy and Security Software

July 7 - 8, 2004
DIMACS Center, CoRE Building, Rutgers University, Piscataway, NJ

Organizers:
Lorrie Cranor, Chair, Carnegie Mellon University, lorrie@acm.org, lorrie.cranor.org
Mark Ackerman, University of Michigan, ackerm@umich.edu, www.eecs.umich.edu/~ackerm/
Fabian Monrose, Johns Hopkins University, fabian@cs.jhu.edu, www.cs.jhu.edu/~fabian/
Andrew Patrick, NRC Canada, Andrew.Patrick@nrc-cnrc.gc.ca, www.andrewpatrick.ca/
Norman Sadeh, Carnegie Mellon University, sadeh@cs.cmu.edu, almond.srv.cs.cmu.edu/~sadeh/
Presented under the auspices of the Special Focus on Communication Security and Information Privacy.

Abstracts:

Len Bass, Carnegie Mellon University

Title: Standard Usability Design Processes

Standard usability design processes involve testing and modifying designs based on the results of usability tests. These processes are not well suited for correcting usability problems that have their roots in basic system design decisions. For the last several years, Bonnie John and I have been working on identifying, validating, and documenting usability issues that depend on basic system design decisions and that, consequently, are not well handled by standard processes. We have also been working with a NASA project to improve its design for usability and to measure the impact our intervention had on the users of their systems. The name of our project is Usability and Software Architecture and details of our work can be found at www.uandsa.org.

Examples of the usability issues that we have been investigating are the ability to cancel and undo operations, the ability to aggregate data, the ability to present system state or operation progress to a user, and working at the user's pace. We have identified more than two dozen such issues that we have captured in a collection of brief scenarios. For each scenario, we have identified a package that consists of the scenario, the set of requirements that satisfying this scenario imposes on any system, and a sample solution that satisfies those requirements. This package is a version of what software engineers call "software architectural patterns."
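To make the "undo" scenario above concrete, here is a minimal, hypothetical sketch (not from the U&SA project itself) of why undo is an architecture-level decision: every user action must be reified as an object that knows how to reverse itself, and a history must be kept, which is hard to retrofit after usability testing.

```python
# Hypothetical sketch of a command-pattern undo architecture.
# All class names here are invented for illustration.

class Command:
    def execute(self, state): raise NotImplementedError
    def undo(self, state): raise NotImplementedError

class Append(Command):
    def __init__(self, text):
        self.text = text
    def execute(self, state):
        state.append(self.text)
    def undo(self, state):
        state.pop()

class Editor:
    def __init__(self):
        self.state, self.history = [], []
    def run(self, cmd):
        cmd.execute(self.state)
        self.history.append(cmd)  # keeping history is the architectural cost
    def undo(self):
        if self.history:
            self.history.pop().undo(self.state)

ed = Editor()
ed.run(Append("hello"))
ed.run(Append("world"))
ed.undo()
print(ed.state)  # ['hello']
```

The point of the pattern package is that a design which does not reify actions this way cannot simply "add undo" later, no matter what the usability tests find.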

We have performed (or are in the process of performing) the following activities, all oriented toward validating, documenting, or measuring the impact of these scenarios.

  1. An experiment is ongoing to test which of the three elements of our pattern is most useful to a software engineer applying the pattern.
  2. We have worked with the NASA Mars Exploration Rover Collaboration Support Board team in a variety of fashions. We presented our scenarios to them and helped them with a software architecture re-design to incorporate those scenarios that they found relevant to their project. We are currently evaluating the results of the usage of the MERBoard by mission scientists to attempt to measure the impact of our intervention.
  3. We have been working with the head of the Hillside Group (Dick Gabriel) to document our patterns and possibly use them as exemplars of how to document a pattern. The Hillside Group has as its mission "to improve the quality of life of everyone who uses, builds, and encounters software systems: users, developers, managers, owners, educators, students, and society as a whole."

Designing systems to provide security and privacy by necessity involves decisions at the earliest stage of design and, as such, falls squarely in the agenda of our project. I can bring to the workshop long and rich experience both as a software architecture researcher and as a software architecture designer and evaluator. I can also bring to the workshop experience working in the intersection of software engineering and usability and knowledge of the processes and problems faced by both disciplines.


Lynne Coventry, NCR

Title: Fingerprint authentication: The user experience

Fingerprint authentication is being adopted by many countries around the world to authenticate claimed identity. However, large-scale trials with the general population are only just starting, and the general population has little experience or understanding of such technologies. It is assumed that users' intuitive behaviour will be sufficient to use these technologies, but this may not be the case. This paper presents the findings of a recent consumer evaluation of a fingerprint system. It highlights enrolment and usability issues encountered during this trial. The findings suggest that users need an explanation of the fingerprint "core" and of what they are trying to achieve and why, rather than just "how to do it" information. The study found significant problems enrolling older users. The findings also suggest that helping the user to position their fingerprint core centrally on the reader, rather than accepting any usable image, decreases the chances of the system falsely rejecting them. The paper concludes that such technology still has shortcomings to overcome before it can be employed within a self-service environment.


Roger Dingledine, Moria Research Labs

Title: Anonymity loves company: usability as a security parameter

In an encryption system, Alice can decide she wants to encrypt her mail to Bob, and assuming Alice and Bob act correctly, it will work. In an anonymity system, on the other hand, Alice cannot simply decide by herself to send anonymous messages --- she must trust the infrastructure to provide protection, and others must use the same infrastructure. This network effect means that the presence of other users improves the security of the system.

So it would seem that by choosing _weaker_ system parameters, such as faster message delivery, we can achieve _stronger_ security because we can attract more users and thus more messages.
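The network effect can be made concrete with the standard entropy measure of an anonymity set (this sketch is an illustration, not a formula from the talk): under a uniform attacker prior, a system with more users simply gives the attacker more uncertainty about who sent a message.

```python
import math

def anonymity_bits(n_users):
    # With n equally likely senders, the attacker's uncertainty about
    # who sent a given message is log2(n) bits (uniform-prior assumption).
    return math.log2(n_users)

# More users => more uncertainty for the attacker, so a weaker-but-popular
# system can out-protect a stronger-but-empty one.
print(anonymity_bits(10))     # ~3.32 bits
print(anonymity_bits(10000))  # ~13.29 bits
```

Real systems are worse than this uniform model (traffic analysis skews the prior), but the direction of the effect is the same: users are a security parameter.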

I'll explore this issue in the context of Mixminion, an anonymity system with strong system parameters, and Tor, an anonymity system with weak system parameters.


Paul Dourish, UC Irvine

Title: Security as Experience and Practice: Supporting Everyday Security

The interaction between usability and security concerns has been both a long-standing and stubborn problem, but with the increasing deployment of coalition-based information services, from web services to ubiquitous computing, security is an increasingly important concern for end-users. This has caused a number of people to advocate a more rigorous application of HCI analysis and design principles to the technologies that support security management, including technologies for encryption, access control, file sharing, etc.

While such exercises are valuable, we believe that they are too focused on specific technologies to provide solutions broad enough to be effective. In our current work, we are attempting to take a more holistic approach to the problem of security and interaction. We look at security as an everyday practice, one that is integrated into daily life and encompasses not only technical but also physical, social, organizational and cultural resources. We are attempting to understand security as it arises as a routine, practical problem for users in the course of their everyday experiences, and to design tools that provide them with resources for more adequately achieving this integration.


Lenny Foner, MIT

Title: Architectural issues in distributed, privacy-protecting social networking

Anonymity and pseudonymity are crucial enablers in many applications, but they cannot do all the work alone. For systems that handle personal information but which want to make believable assertions about privacy to their users---especially in the face of crackers and subpoenas---a distributed architecture can help. But such systems are also ripe for spamming and similar misbehavior, which argues for a built-in reputation system as well---yet one that can function when there is no central authority to consult about others' ratings. I present the history, motivation, and architecture of a distributed social networking system I developed years before the current rise in (centralized) versions of such systems, and talk about how, for an apparently simple initial idea, every decision down the architectural chain felt both natural and completely forced---I felt that I had precious few other choices without having a system that either failed to work or failed to protect its users' privacy. Along the way, I'll talk about the threat models I perceived, why strict anonymity was less useful than persistent, globally-unique pseudonyms, and what lessons I wish the current crop of social networkers already knew.


Trent Jaeger, IBM

Title: Approaches for Designing Flexible Mandatory System Security Policies

In this presentation, we describe an approach for designing SELinux security policies to meet high-level security goals, and a policy analysis tool, called Gokyo, that implements this approach. SELinux, included in the mainline kernel in version 2.6, provides mandatory access control (MAC) enforcement for a comprehensive set of fine-grained operations on all security-relevant Linux kernel objects. The fine-grained nature of SELinux enforcement means that the security policy decisions are also fine-grained, so many security decisions and interactions must be considered in the design of an SELinux policy. The SELinux community has tried to ease this burden by providing an SELinux example policy, but in order to ensure that a policy results in a secure system, the policy must be customized for each system's security goals. The SELinux example policy is large (30,000+ policy statements) and the policy model is complex (it uses many modeling concepts), so manual modification to satisfy security goals is impractical. We have developed a policy analysis tool called Gokyo that: (1) enables SELinux policies to be compared to high-level security goals and (2) supports the resolution of differences between the SELinux example policy and those goals. We use Gokyo to design SELinux policies that aim to provide Clark-Wilson integrity guarantees for key applications. We use Gokyo to identify dependencies of key applications on information flows containing low-integrity data, and to compute metrics that identify plausible resolutions and the impact of those resolutions. We describe how we design an SELinux security policy starting from the SELinux example policy for Linux 2.4.19 (the same techniques can be applied to Linux 2.6).
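The kind of integrity conflict described above can be sketched as a graph-reachability question (a toy illustration of the idea, not Gokyo's actual algorithm or data model; all domain names below are invented): model the policy's allow rules as directed information flows and ask whether any low-integrity source can reach a trusted application.

```python
from collections import deque

# Toy policy: each key maps a domain to the domains its data may flow into.
flows = {
    "network":     ["user_app"],
    "user_app":    ["tmp"],
    "tmp":         ["trusted_app"],  # the suspicious flow to resolve
    "trusted_app": ["audit_log"],
}
low_integrity = {"network"}
trusted = {"trusted_app"}

def reachable(src):
    # Breadth-first search over the flow graph from `src`.
    seen, queue = {src}, deque([src])
    while queue:
        for nxt in flows.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Each (source, target) pair here is a Clark-Wilson-style integrity
# violation that the policy designer must resolve.
conflicts = [(s, t) for s in low_integrity for t in trusted if t in reachable(s)]
print(conflicts)  # [('network', 'trusted_app')]
```

Resolving such a conflict means either removing a flow edge (tightening the policy) or declaring a sanitizing interface on the path, which is the class of trade-off the metrics mentioned above would rank.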


Marc Langheinrich, ETH Zurich

Title: Privacy Challenges in Ubiquitous Computing

The vision of ubiquitous computing involves integrating tiny microelectronic processors and sensors into everyday objects in order to make them "smart." Smart things can explore their environment, communicate with other smart things, and interact with humans, therefore helping users to cope with their tasks in new, intuitive ways. This digitization of our everyday lives will not only allow computers to better "understand" our actions and goals, but also allow others to inspect and search such electronic records, potentially creating a comprehensive surveillance network of unprecedented scale.

In my talk I want to examine the privacy issues surrounding a broader deployment of ubiquitous computing technology, describe currently discussed technical solutions in areas such as RFID-tracking and location privacy (and comment upon their shortcomings), and introduce an early prototype under development here at the ETH Zurich that tries to provide a more balanced approach to privacy protection in smart environments.


Scott Lederer, UC Berkeley

Title: Knowing What You're Doing: A Design Goal for Usable Ubicomp Privacy

Ubiquitous computing is infusing new interactive technologies into the architecture of everyday life. Many of these are technologies of disclosure whose adoption is reconstituting the means of presentation of self and whose normative effects are reconfiguring social contracts. Designers can help end-users cope with these developments by empowering them to know what they are doing when they use a privacy-affecting ubicomp system.

Knowing and doing are more than mere synonyms for notice and consent or for feedback and control. Reaching beyond policy and transparency, they directly address the experiential needs of end-users. From the user's perspective, knowing means maintaining a reasonable understanding of the potential and actual privacy implications of the use of a system, and doing means comfortably achieving intentional and intuitive privacy management goals through its use. Closing the loop between them, knowing what you're doing, as a design goal for privacy-sensitive ubicomp systems, means designing systems whose implications make plain sense and whose operation is convenient in the course of everyday life in an augmented world.

Knowing what you're doing also represents the recognition that end-users know more about their own privacy management practices than designers can know, even as that knowledge remains implicit in the practices themselves. Rather than loyally adhering to the intended privacy model of any single system, end-users regularly reassemble the technologies and social expectations at hand into privacy management meta-systems, exploiting properties of systems in unforeseen ways to achieve subtle social effects. This is a process that designers can support. A system with an obvious scope and operation can help users determine its place in the assemblies of privacy-affecting systems by which they maneuver through everyday social pressures.

This talk will illustrate the design goal of knowing what you're doing and provide guidelines for achieving it.


Marc Levine, Benetech

Title: Cryptography and Information Sharing in Civil Society

Can encryption technology encourage information sharing? Making crypto easy to use gives trusted sources the ability to share information with one another. In my talk, I will describe how independent human rights nongovernmental organizations (NGOs) around the world are now sharing sensitive and confidential data thanks to advances in the usability of privacy-enhancing technologies. Specifically, I will share my experiences working with NGOs in the Philippines and Kyrgyzstan that are now sharing information with international NGOs and even governmental human rights commissions using Martus software, which makes encryption technology available in an easy-to-use format.


Chris Long, Carnegie Mellon University

Title: Chameleon: Towards Usable RBAC

Chameleon is a desktop environment and security model that seeks to simplify security management for desktop computers to reduce the impact of malicious software (viruses, trojan horses, etc.). The Chameleon security model uses coarse-grained partitioning of applications and data, based on roles, and its user interface is designed for easy management of a role-based desktop. By partitioning the computer, if a virus, for example, infects a role, only the data in the infected role will be vulnerable rather than the whole computer as is common today. Chameleon is inspired by previous work on sandboxing, partitioning, and role-based access control; however, it starts not with security mechanisms but with making the security features intelligible, convenient, and usable to a broad group of users. We believe that it is possible to create a role-based desktop that provides another layer of protection from malicious software and is usable by typical home computer owners. Preliminary user studies with a prototype of the user interface support this belief. We believe studies of security awareness and control in the context of Chameleon will inform security interface design for other applications.
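The coarse-grained partitioning idea can be sketched in a few lines (a hypothetical illustration with invented names, not Chameleon's actual model): every file belongs to exactly one role, and a process running in a role can only reach that role's partition, so a compromise is contained.

```python
class Desktop:
    """Toy role-partitioned desktop: one file partition per role."""
    def __init__(self, roles):
        self.files = {role: set() for role in roles}

    def create(self, role, name):
        # A file is created inside exactly one role's partition.
        self.files[role].add(name)

    def accessible(self, active_role):
        # A (possibly compromised) app running in `active_role`
        # can reach only this role's files, not the whole computer.
        return self.files[active_role]

d = Desktop(["banking", "browsing", "work"])
d.create("banking", "statement.pdf")
d.create("browsing", "download.exe")
print(d.accessible("browsing"))  # {'download.exe'} -- banking files untouched
```

The usability question Chameleon targets is everything this sketch omits: how users assign work to roles, move data between partitions, and understand which role is active.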


Patrick McDaniel, AT&T Labs -- Research

Title: Useless Metaphors: Why Is Specifying Policy So Hard?

Policy has been the subject of intense study for many years. The foundations of policy computation have been clearly established. The systemic and semantic advantages of a wide range of centralized and decentralized policy models are well understood. Many physical policy architectures have been implemented and used. Given all this progress, why are we still configuring environments through obscure and often application-specific interfaces? This talk considers the current limitations of policy specification by looking at the goals and solutions of contemporary policy research, and proposes new areas of investigation in policy usability. The subtle interactions between policy representation, understandability, and expressiveness are explored. The speaker concludes with a discussion of his experiences designing the Ismene policy language, and posits ways in which that work could be improved through the application of existing tenets of interface design.


Fabian Monrose, Johns Hopkins University

Title: On user choice in graphical password schemes

Over the past decade, graphical password schemes have been pushed to the forefront as an alternative to text passwords, particularly in applications that support graphics and mouse or stylus entry. However, few studies so far have addressed both the security and memorability of such graphical schemes for a relatively large population. In this talk we discuss what is, to our knowledge, the largest published empirical evaluation of the effects of user choice on the security of graphical password schemes. We show that permitting user selection of passwords in graphical password schemes can yield passwords with entropy far below the theoretical optimum and, in some cases, that are highly correlated with the race or gender of the user. For one scheme, this effect is so dramatic as to render the scheme insecure. We discuss why graphical password schemes of the type we study generally require a different posture toward password selection than text passwords (where selection by the user remains the norm today), and examine ways to foster stronger collaboration between the HCI and security communities.
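The gap between user-chosen and theoretically optimal entropy is easy to illustrate with a Shannon-entropy calculation (the numbers below are invented for illustration, not data from the study): when users cluster on a few "hot" choices, the empirical entropy of selections falls far below the log2(N) bits that a uniform choice over N options would give.

```python
import math
from collections import Counter

def shannon_bits(choices):
    # Empirical Shannon entropy of an observed list of selections.
    counts = Counter(choices)
    total = len(choices)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical skewed selections from a 100-option graphical scheme.
choices = ["face_3"] * 50 + ["face_7"] * 30 + ["face_1"] * 15 + ["face_9"] * 5
n_options = 100

print(round(shannon_bits(choices), 2))  # ~1.65 bits observed
print(round(math.log2(n_options), 2))   # 6.64 bits theoretical optimum
```

An attacker who guesses in order of popularity exploits exactly this skew, which is why system-assigned or constrained selection can matter more for graphical schemes than for text passwords.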


Sameer Patil, University of California, Irvine

Title: Short talk: Privacy in Instant Messaging

The great promise of collaborative technologies that increase group awareness and communication is often overshadowed by accompanying privacy concerns. In systems devised for communication and collaboration, the privacy concerns in question are primarily with respect to other individuals one interacts with - such as colleagues, superiors, subordinates, friends and family - as opposed to big, nameless entities such as corporations and governments. We use Instant Messaging (IM) as a starting point to explore privacy issues in collaborative systems. We conducted in-depth interviews with seven experienced users of IM systems, focusing on issues relating to privacy. To achieve breadth, the users were chosen to have backgrounds and work characteristics quite different from one another. Based on the findings from the interviews, we designed and administered an online survey to a larger population of IM users. In this talk, I will discuss findings from the interviews and the survey, and suggest some design solutions to address some of these issues.


William Yurcik, NCSA Security Research, National Center for Supercomputing Applications (NCSA), University of Illinois at Urbana-Champaign

Title: Better Tools for Security Administration: Enhancing the Human-Computer Interface with Visualization

System administrators are users too! While the focus of security human-computer interaction has to this point been on end-users, an important class of users who manage networked systems should not be ignored. In fact, system administrators may have more effect on security than individual users, since they manage larger systems on behalf of users. End-users have become dependent upon the availability of services such as network-attached storage, authentication servers, web servers, and email gateways. These Internet-scale services often have thousands of hardware and software components and require considerable amounts of human effort to plan, configure, install, upgrade, monitor, troubleshoot, and sunset. The complexity of managing these services is alarming: a recent survey of three Internet sites showed that 51% of all failures are caused by operator errors. [1]

Some security administration is automated; however, the actual degree of automation is much lower than many people assume. Human operators are still very much "in-the-loop," particularly during emergencies. Delay in the human-computer interface can adversely affect system security, so an important goal is to enhance this interface to reduce the delay. Methods are needed to help security operators more quickly extract vital information from large amounts of data and translate this information into effective control actions.

Information visualization tools can aid in any situation characterized by large amounts of multi-dimensional or rapidly changing data, and they have rapidly emerged as a potent technology to support system administrators working with complex systems. The latest generation of visual data mining tools and animated GUIs take advantage of human perceptual skills to produce striking results, empowering users to perceive important patterns in large data sets, identify areas that need further scrutiny, and make sophisticated decisions. But looking at information is only a start. Users also need to manipulate and explore the data, using real-time tools to zoom, filter, and relate the information, and to undo if they make a mistake. In my presentation and discussion I will show successful examples of information visualization for security and hints of what is to come. [2,3] My emphasis will be on examples of computer network intrusion detection, and I will highlight the challenges of providing universally usable interface designs.

For full-text access to NCSA papers in this specific area (13 at present), see http://www.ncassr.org/projects/sift/papers/


Document last modified on May 5, 2004.