DIMACS Workshop on Usable Privacy and Security Software

July 7 - 8, 2004
DIMACS Center, CoRE Building, Rutgers University, Piscataway, NJ

Organizers:
Lorrie Cranor, Chair, Carnegie Mellon University, lorrie@acm.org, lorrie.cranor.org
Mark Ackerman, University of Michigan, ackerm@umich.edu, www.eecs.umich.edu/~ackerm/
Fabian Monrose, Johns Hopkins University, fabian@cs.jhu.edu, www.cs.jhu.edu/~fabian/
Andrew Patrick, NRC Canada, Andrew.Patrick@nrc-cnrc.gc.ca, www.andrewpatrick.ca/
Norman Sadeh, Carnegie Mellon University, sadeh@cs.cmu.edu, almond.srv.cs.cmu.edu/~sadeh/
Presented under the auspices of the Special Focus on Communication Security and Information Privacy.

Abstracts:

Len Bass, Carnegie Mellon University

Title: Standard Usability Design Processes

Standard usability design processes involve testing and modifying designs based on the results of usability tests. These processes are not well suited for correcting usability problems that have their roots in basic system design decisions. For the last several years, Bonnie John and I have been working on identifying, validating, and documenting usability issues that depend on basic system design decisions and that, consequently, are not well handled by standard processes. We have also been working with a NASA project to improve its design for usability and to measure the impact our intervention had on the users of their systems. The name of our project is Usability and Software Architecture, and details of our work can be found at www.uandsa.org.

Examples of the usability issues that we have been investigating are the ability to cancel and undo operations, the ability to aggregate data, the ability to present system state or operation progress to a user, and working at the user's pace. We have identified more than two dozen such issues that we have captured in a collection of brief scenarios. For each scenario, we have identified a package that consists of the scenario, the set of requirements that satisfying the scenario imposes on any system, and a sample solution that satisfies those requirements. This package is a version of what software engineers call "software architectural patterns."

We have performed (or are in the process of performing) the following activities all oriented toward validating, documenting, or measuring the impact of these scenarios.

  1. An experiment is ongoing to test which of the three elements of our pattern is most useful to a software engineer applying the pattern.
  2. We have worked with the NASA Mars Exploration Rover Collaboration Support Board team in a variety of ways. We presented our scenarios to them and helped them with a software architecture re-design to incorporate those scenarios that they found relevant to their project. We are currently evaluating the results of the use of MERBoard by mission scientists to attempt to measure the impact of our intervention.
  3. We have been working with the head of the Hillside Group (Dick Gabriel) to document our patterns and possibly use them as exemplars of how to document a pattern. The Hillside Group has as its mission "to improve the quality of life of everyone who uses, builds, and encounters software systems - users, developers, managers, owners, educators, students, and society as a whole."

Designing systems to provide security and privacy by necessity involves decisions at the earliest stage of design and, as such, falls squarely in the agenda of our project. I can bring to the workshop long and rich experience both as a software architecture researcher and as a software architecture designer and evaluator. I can also bring to the workshop experience working in the intersection of software engineering and usability and knowledge of the processes and problems faced by both disciplines.


James Blustein, Nur Zincir-Heywood, and Malcolm Heywood, Dalhousie University Faculty of Computer Science

Title: Interfaces for Computer Network Security and Monitoring

Our Research Agenda

We are conducting a wide-ranging research project into new methods for network intrusion detection, in collaboration with industry partners. One of the ways we expect to improve intrusion detection systems (IDSs) is by making their interfaces better suited to the needs and attributes of their users. Between us we supervise four graduate students working on the user interface portion of the project.

Currently we are gathering some basic information about IDSs and their use:

Our future plans for the user interface portion of this research include:

Our Goal For The Workshop

We want to share the results of our ongoing survey of IDS needs (early results will have appeared in the IEEE's CCECE04 before DIMACS), and our task analysis. We are most eager to learn from the experience and judgement of others. We will be glad to discuss any aspect of our ongoing (or completed) work that the organizers think will be most relevant.


Lynne Coventry, NCR

Title: Fingerprint authentication: The user experience

Fingerprint authentication is being adopted by many countries around the world to authenticate claimed identity. However, large-scale trials with the general population are only just starting, and the general population has little experience with, or understanding of, such technologies. It is assumed that users' intuitive behaviour will be sufficient to use these technologies, but this may not be the case. This paper presents the findings of a recent consumer evaluation of a fingerprint system. It highlights enrolment and usability issues encountered during the trial. The findings suggest that users need an explanation of the fingerprint "core", and of what they are trying to achieve and why, rather than just "how to do it" information. The study found significant problems enrolling older users. The findings also suggest that helping the user to position their fingerprint core centrally on the reader, rather than accepting any usable image, decreases the chances of the system falsely rejecting them. The paper concludes that such technology still has shortcomings to overcome before it can be employed within a self-service environment.


Roger Dingledine, Moria Research Labs

Title: Anonymity loves company: usability as a security parameter

In an encryption system, Alice can decide she wants to encrypt her mail to Bob, and assuming Alice and Bob act correctly, it will work. In an anonymity system, on the other hand, Alice cannot simply decide by herself to send anonymous messages - she must trust the infrastructure to provide protection, and others must use the same infrastructure. This network effect means that the presence of other users improves the security of the system.

So it would seem that by choosing _weaker_ system parameters, such as faster message delivery, we can achieve _stronger_ security because we can attract more users and thus more messages.

I'll explore this issue in the context of Mixminion, an anonymity system with strong system parameters, and Tor, an anonymity system with weak system parameters.


Paul Dourish, UC Irvine

Title: Security as Experience and Practice: Supporting Everyday Security

The interaction between usability and security concerns has been both a long-standing and stubborn problem, but with the increasing deployment of coalition-based information services, from web services to ubiquitous computing, security is an increasingly important concern for end-users. This has caused a number of people to advocate a more rigorous application of HCI analysis and design principles to the technologies that support security management, including technologies for encryption, access control, file sharing, etc.

While such exercises are valuable, we believe that they are too focused on specific technologies to provide solutions broad enough to be effective. In our current work, we are attempting to take a more holistic approach to the problem of security and interaction. We look at security as an everyday practice, one that is integrated into daily life and encompasses not only technical but also physical, social, organizational and cultural resources. We are attempting to understand security as it arises as a routine, practical problem for users in the course of their everyday experiences, and to design tools that provide them with resources for more adequately achieving this integration.


Lenny Foner, MIT

Title: Architectural issues in distributed, privacy-protecting social networking

Anonymity and pseudonymity are crucial enablers in many applications, but they cannot do all the work alone. For systems that handle personal information but which want to make believable assertions about privacy to their users - especially in the face of crackers and subpoenas - a distributed architecture can help. But such systems are also ripe for spamming and similar misbehavior, which argues for a built-in reputation system as well - yet one that can function when there is no central authority to consult about others' ratings. I present the history, motivation, and architecture of a distributed social networking system I developed years before the current rise in (centralized) versions of such systems, and talk about how, for an apparently simple initial idea, every decision down the architectural chain felt both natural and completely forced - I felt that I had precious few other choices without having a system that either failed to work or failed to protect its users' privacy. Along the way, I'll talk about the threat models I perceived, why strict anonymity was less useful than persistent, globally-unique pseudonyms, and what lessons I wish the current crop of social networkers already knew.


Lewis Hassell, Drexel University

Title: Human Factors and Information Security

"[Using home security as an analogy] 95% of security problems are caused by casual walk-in burglars who find you don't bother to shut all the windows and doors when you go out, while only 5% come from more devious and determined thieves... It's only when a culture of security is instilled into an organization - so that every employee is aware of security measures and why they have been put in place - that security can be effective." Paul Rubens, Building a Blueprint for Network Security

The above quote could just as readily have come from a thousand other books, journal articles, or web sites. Everyone knows that changing the way people - ordinary people, not computer scientists or engineers - think about security is the key to the central information security problem: non-technical computer users do not recognize information security for the serious problem that it is. This means that no matter how good the technology of security is, its use will continue to be suboptimal. Even in federal government agencies, where one might expect to find information security taken seriously, the situation has gotten so bad that the Office of Management and Budget has told 18 agencies not to develop, modernize, or enhance IT systems until their cybersecurity problems are fixed. Despite the fact that non-technical computer users are the weak link in information systems security, the effect of human factors on compliance with security measures has remained largely ignored in the INFOSEC and Information Assurance literature. There seems to be an implicit assumption that enough technology will solve the problem - that if we can only remove humans from the equation, we can automate our way to information systems security. While technology is certainly important, the assumption that it will solve the security problem has yet to be justified. Furthermore, it ignores the common dictum that security has three parts: technology, people, and process.

My position is that to improve information systems security one must change the way people conceptualize information systems security, and to do that we must change the way people act vis-à-vis security. In effect, this will create a culture of security. I am interested in investigating how to create this culture - how to manage the way end users think about and act on computer and information security.

Specifically, I wish to focus on humans as social creatures, not specifically as users of technology. Thus I wish to go beyond the typical "usability" studies. I want to address 'compliance' not from the point of view of obedience but from the point of view of human beings doing that which defines them as human - participating in a community, a culture. This obviously has even broader implications for a post-9/11 America.


Trent Jaeger, IBM

Title: Approaches for Designing Flexible Mandatory System Security Policies

In this presentation, we describe an approach for designing SELinux security policies to meet high-level security goals, and a policy analysis tool, called Gokyo, that implements this approach. SELinux, included in the mainline Linux kernel as of version 2.6, provides mandatory access control (MAC) enforcement for a comprehensive set of fine-grained operations on all security-relevant kernel objects. The fine-grained nature of SELinux enforcement means that security policy decisions are also fine-grained, so many security decisions and interactions must be considered in the design of an SELinux policy. The SELinux community has tried to ease this burden by providing an example policy, but to ensure that a policy results in a secure system, the policy must be customized for each system's security goals. The SELinux example policy is large (30,000+ policy statements) and the policy model is complex (it uses many modeling concepts), so manual modification to satisfy security goals is impractical. We have developed a policy analysis tool called Gokyo that: (1) enables SELinux policies to be compared to high-level security goals and (2) supports the resolution of differences between the SELinux example policy and those goals. We use Gokyo to design SELinux policies that aim to provide Clark-Wilson integrity guarantees for key applications: it identifies dependencies of key applications on information flows containing low-integrity data, and computes metrics that identify plausible resolutions and the impact of those resolutions. We describe how we design an SELinux security policy starting from the SELinux example policy for Linux 2.4.19 (the same techniques can be applied to Linux 2.6).
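
The core of such an analysis can be illustrated with a toy sketch (this is my own illustration, not Gokyo's actual algorithm, and the type names and rules below are invented rather than taken from the real example policy): build a flow graph from simplified allow rules, then search for information flows from a low-integrity type into a trusted application's type - exactly the kind of dependency that threatens Clark-Wilson-style integrity.

```python
# Hypothetical sketch of an information-flow check over a drastically
# simplified SELinux-style policy. All type names and rules are illustrative.
from collections import defaultdict, deque

# allow rules: (subject_type, object_type, permissions)
allow_rules = [
    ("user_t",  "tmp_t",       {"write"}),
    ("sshd_t",  "tmp_t",       {"read"}),
    ("sshd_t",  "shadow_t",    {"read", "write"}),
    ("user_t",  "user_home_t", {"read", "write"}),
]

def build_flow_graph(rules):
    """Edge A -> B whenever A can write some object type that B reads."""
    writers, readers = defaultdict(set), defaultdict(set)
    for subj, obj, perms in rules:
        if "write" in perms:
            writers[obj].add(subj)
        if "read" in perms:
            readers[obj].add(subj)
    graph = defaultdict(set)
    for obj in writers:
        for w in writers[obj]:
            for r in readers[obj]:
                if w != r:
                    graph[w].add((r, obj))
    return graph

def flows_to(graph, source, target):
    """Breadth-first search; returns the flow path as (type, via_object) hops, or None."""
    queue, seen = deque([(source, [])]), {source}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for nxt, via in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(nxt, via)]))
    return None

graph = build_flow_graph(allow_rules)
# user_t (low integrity) reaches sshd_t (trusted) via the shared tmp_t type:
print(flows_to(graph, "user_t", "sshd_t"))
```

A real tool must additionally handle the full SELinux permission vocabulary, attributes, and transitions, which is where the scale problem (30,000+ statements) comes from.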


Cynthia Kuo, Carnegie Mellon University

Title: The security of today's computing environment

The security of today's computing environment is often limited by user interfaces that make proper configuration and troubleshooting difficult. Despite the importance of security and privacy in many applications, user interfaces garner relatively little attention. In contrast, encryption algorithms and security protocols enjoy widespread attention, even though the security is often compromised due to implementation errors or misconfiguration.

IEEE 802.11 wireless access points are an interesting case to study, as security is widely believed to be an important requirement for deployment. In fact, experts agree that the security vulnerabilities of WEP impeded the deployment of 802.11. Many corporations and individuals were—and continue to be—hesitant to deploy Wi-Fi networks for security reasons.

The majority of IEEE 802.11 networks do not enable WEP, leaving them wide open for public use. Such open networks may be detected and mapped through “wardriving,” where hackers drive around, detect, and map these open networks. The WorldWide WarDrive is an effort by security professionals and hobbyists to raise public awareness of the need to secure Wi-Fi access points. During eight days in 2003, members located over 88,000 access points around the world. Of those networks, 68% did not have WEP enabled.

Why do the majority of networks continue to be unsecured? Some providers deliberately keep their networks open for public use as a political statement. However, it is suspected that most users are either unaware of the security features or unable to configure their access points correctly. Configuring an 802.11 network securely is a complex and challenging task. Novice users encounter many difficulties when setting up these networks; often the resulting configuration is not secured.

The goal of my current work is to test whether home users who are motivated to secure their 802.11 wireless network are able to do so successfully. If they are not, I would like to discover why not and how to improve the interfaces for security configurations.


Marc Langheinrich, ETH Zurich

Title: Privacy Challenges in Ubiquitous Computing

The vision of ubiquitous computing involves integrating tiny microelectronic processors and sensors into everyday objects in order to make them "smart." Smart things can explore their environment, communicate with other smart things, and interact with humans, therefore helping users to cope with their tasks in new, intuitive ways. This digitization of our everyday lives will not only allow computers to better "understand" our actions and goals, but also allow others to inspect and search such electronic records, potentially creating a comprehensive surveillance network of unprecedented scale.

In my talk I want to examine the privacy issues surrounding a broader deployment of ubiquitous computing technology, describe currently discussed technical solutions in areas such as RFID-tracking and location privacy (and comment upon their shortcomings), and introduce an early prototype under development here at the ETH Zurich that tries to provide a more balanced approach to privacy protection in smart environments.


Scott Lederer, UC Berkeley

Title: Knowing What You're Doing: A Design Goal for Usable Ubicomp Privacy

Ubiquitous computing is infusing new interactive technologies into the architecture of everyday life. Many of these are technologies of disclosure whose adoption is reconstituting the means of presentation of self and whose normative effects are reconfiguring social contracts. Designers can help end-users cope with these developments by empowering them to know what they are doing when they use a privacy-affecting ubicomp system.

Knowing and doing are more than mere synonyms for notice and consent or for feedback and control. Reaching beyond policy and transparency, they directly address the experiential needs of end-users. From the user's perspective, knowing means maintaining a reasonable understanding of the potential and actual privacy implications of the use of a system, and doing means comfortably achieving intentional and intuitive privacy management goals through its use. Closing the loop between them, knowing what you're doing, as a design goal for privacy-sensitive ubicomp systems, means designing systems whose implications make plain sense and whose operation is convenient in the course of everyday life in an augmented world.

Knowing what you're doing also represents the recognition that end-users know more about their own privacy management practices than designers can know, even as that knowledge remains implicit in the practices themselves. Rather than loyally adhering to the intended privacy model of any single system, end-users regularly reassemble the technologies and social expectations at hand into privacy management meta-systems, exploiting properties of systems in unforeseen ways to achieve subtle social effects. This is a process that designers can support. A system with an obvious scope and operation can help users determine its place in the assemblies of privacy-affecting systems by which they maneuver through everyday social pressures.

This talk will illustrate the design goal of knowing what you're doing and provide guidelines for achieving it.


Marc Levine, Benetech

Title: Cryptography and Information Sharing in Civil Society

Can encryption technology encourage information sharing? Making crypto easy to use enables information sharing between trusted sources. In my talk, I will describe how independent human rights nongovernmental organizations (NGOs) around the world are now sharing sensitive and confidential data due to advances in the usability of privacy-enhancing technologies. Specifically, I will share my experiences working with NGOs in the Philippines and Kyrgyzstan that are now sharing information with international NGOs and even governmental human rights commissions using Martus software, which makes encryption technology available in an easy-to-use format.


Chris Long, Carnegie Mellon University

Title: Chameleon: Towards Usable RBAC

Chameleon is a desktop environment and security model that seeks to simplify security management for desktop computers to reduce the impact of malicious software (viruses, trojan horses, etc.). The Chameleon security model uses coarse-grained partitioning of applications and data, based on roles, and its user interface is designed for easy management of a role-based desktop. By partitioning the computer, if a virus, for example, infects a role, only the data in the infected role will be vulnerable rather than the whole computer as is common today. Chameleon is inspired by previous work on sandboxing, partitioning, and role-based access control; however, it starts not with security mechanisms but with making the security features intelligible, convenient, and usable to a broad group of users. We believe that it is possible to create a role-based desktop that provides another layer of protection from malicious software and is usable by typical home computer owners. Preliminary user studies with a prototype of the user interface support this belief. We believe studies of security awareness and control in the context of Chameleon will inform security interface design for other applications.


Patrick McDaniel, AT&T Labs -- Research

Title: Useless Metaphors: Why Specifying Policy is So Hard?

Policy has been the subject of intense study for many years. The foundations of policy computation have been clearly established. The systemic and semantic advantages of a wide range of centralized and decentralized policy models are well understood. Many physical policy architectures have been implemented and used. Given all this progress, why are we still configuring environments through obscure and often application-specific interfaces? This talk considers the current limitations of policy specification by looking at the goals and solutions of contemporary policy research, and proposes new areas of investigation in policy usability. The subtle interactions between policy representation, understandability, and expressiveness are explored. The speaker concludes with a discussion of his experiences designing the Ismene policy language, and posits ways in which that work could be improved through the application of existing tenets of interface design.


Fabian Monrose, Johns Hopkins University

Title: On user choice in graphical password schemes

Over the past decade, graphical password schemes have been pushed to the forefront as an alternative to text passwords, particularly in applications that support graphics and mouse or stylus entry. However, few studies so far have addressed both the security and memorability of such graphical schemes for a relatively large population. In this talk we discuss what is, to our knowledge, the largest published empirical evaluation of the effects of user choice on the security of graphical password schemes. We show that permitting user selection of passwords in graphical password schemes can yield passwords with entropy far below the theoretical optimum and, in some cases, that are highly correlated with the race or gender of the user. For one scheme, this effect is so dramatic as to render the scheme insecure. We discuss why graphical password schemes of the type we study generally require a different posture toward password selection than text passwords (where selection by the user remains the norm today), and examine ways to foster stronger collaboration between the HCI and security communities.
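
The gap between user-chosen and ideal passwords can be quantified by comparing the Shannon entropy of an empirical choice distribution with the uniform optimum. The sketch below uses invented data purely for illustration; the scheme, choice set, and numbers in the actual study differ:

```python
# Illustrative only: entropy of skewed user choices vs. the uniform optimum.
import math
from collections import Counter

def shannon_entropy_bits(choices):
    """Shannon entropy (in bits) of an empirical distribution of choices."""
    counts = Counter(choices)
    n = len(choices)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical data: 8 users choosing among 4 images. A uniform choice
# would contribute log2(4) = 2 bits; popular images drag entropy down.
observed = ["face_a"] * 5 + ["face_b"] * 2 + ["face_c"]
print(round(shannon_entropy_bits(observed), 3))  # well below the 2-bit optimum
print(math.log2(4))                              # theoretical optimum: 2.0
```

In a multi-round scheme the per-round losses compound, which is how a skewed choice distribution can shrink the effective password space enough to make guessing attacks practical.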


Sameer Patil, University of California, Irvine

Title: Short talk: Privacy in Instant Messaging

The great promise of collaborative technologies that increase group awareness and communication is often overshadowed by accompanying privacy concerns. In systems devised for communication and collaboration, the privacy concerns in question are primarily with respect to other individuals one interacts with - such as colleagues, superiors, subordinates, friends, and family - as opposed to big, nameless entities such as corporations and governments. We use Instant Messaging (IM) as a starting point to explore privacy issues in collaborative systems. We conducted in-depth interviews with seven experienced users of IM systems, focusing on issues relating to privacy. To achieve breadth, the users were chosen to have backgrounds and work characteristics quite different from each other. Based on the findings from the interviews, we designed and administered an online survey to a larger population of IM users. In this talk, I will discuss findings from the interviews and the survey, and suggest some design solutions to address some of these issues.


Robert Reeder, Carnegie Mellon University

Title: Comparing Interfaces for Setting NTFS File Permissions

Designers of security interfaces sometimes face a tradeoff between flexibility and usability of the interface. An interface that exposes details of a complex security model may provide greater flexibility to users, at the cost of being more difficult to comprehend and to use accurately. We studied a real-world example of such a design tradeoff - the Microsoft Windows (R) XP interface (hereafter referred to as XPFP) to the Windows NTFS file permissions model. We conducted a laboratory user study, and found that the complexity exposed by the XP interface does lead to high error rates in certain contexts. In the same study, we evaluated an interface of our own design, called Salmon, that was intended to reduce errors compared to the XP interface, while maintaining the same degree of flexibility. We found that while error rates were somewhat lower with our design, they were far from zero. We categorized the causes of user errors made during our study and linked them to three particularly troublesome aspects of the NTFS security model. We conclude that simplifying the model, rather than continuing to redesign the interface, could be the best route to reducing error rate in setting file permissions.

A computer system using the NTFS file permissions model will be populated with entities and objects. The entities, as recursively defined, are people with accounts on the system or groups of entities on the system. The objects are the files and folders on the system. NTFS defines 13 atomic permissions (including Read Data, Write Data, Execute File, Delete, Change Permissions, etc.) that correspond to actions that entities can perform on the objects. The 13 atomic permissions are grouped into 6 composite permissions (including Full Control, Modify, Read, Write, etc.). Setting file permissions, then, is the task of specifying which entities are allowed or denied permission to perform which actions on which objects.

We identified three aspects of the NTFS file permissions model that seemed potentially confusing. These were group conflicts (what happens when an entity is allowed access through one group of which it is a member, but denied access through another group?), permissions mappings (what atomic permissions correspond to a given composite permission?), and override permissions (what happens if an entity is denied permission to write, but is allowed permission to change their own permissions?).
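
A toy model makes the first two of these aspects concrete: composite-to-atomic expansion and group-conflict resolution. The sketch below uses a simple deny-overrides-allow rule and an abridged, illustrative mapping; the real NTFS semantics (ACE ordering, inheritance, override permissions) are considerably more involved:

```python
# Toy model of NTFS-style permission resolution. The mappings, names, and
# the flat deny-overrides-allow rule are simplified for illustration only.

# A few of NTFS's composite-to-atomic mappings (abridged).
COMPOSITE = {
    "Read":  {"Read Data", "Read Attributes", "Read Permissions"},
    "Write": {"Write Data", "Append Data", "Write Attributes"},
}

def expand(perm):
    """A composite permission expands to its atomic set; an atomic maps to itself."""
    return set(COMPOSITE.get(perm, {perm}))

def effective_access(user, groups, acl):
    """acl: list of (entity, 'Allow'|'Deny', permission). Deny wins conflicts."""
    entities = {user} | set(groups.get(user, ()))
    allowed, denied = set(), set()
    for entity, kind, perm in acl:
        if entity in entities:
            (allowed if kind == "Allow" else denied).update(expand(perm))
    return allowed - denied

groups = {"alice": ["staff", "contractors"]}
acl = [
    ("staff",       "Allow", "Read"),
    ("contractors", "Deny",  "Read Data"),  # group conflict: the Deny overrides
]
print(effective_access("alice", groups, acl))
```

Even in this stripped-down form, a user must reason about group membership, the hidden composite-to-atomic mapping, and conflict precedence at once, which hints at why the full model produced the error rates reported below.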

Our laboratory user study evaluated Salmon and XPFP for setting file permissions under NTFS. Salmon was designed to improve upon XPFP, including two primary improvements: 1) displaying better feedback on how a user's permissions settings would affect true access and 2) eliminating composite permissions by listing atomic permissions only (XPFP allows users to manipulate both). In our study, each interface had 8 participants assigned to it. We designed 7 tasks that varied in difficulty and that exercised the three potentially confusing aspects of the NTFS file permissions model. Each participant performed these 7 tasks on the interface to which they were assigned. We provided no training in the use of the interfaces, but relevant Windows Help files were available to all participants. We collected screen video and verbal protocols for all participants. After the testing, we scored each task for correctness, so that we could determine an error rate for each task for each interface. We then watched the videos and listened to the verbal protocols to determine the causes of errors.

Of 112 total tasks completed, 28 ended in failure. Of the 28 failures, 15 occurred on XPFP and 13 on Salmon. Error rates were highly task-dependent and ranged from 0% for the easiest tasks up to 75% in the most difficult task on XPFP and 0% up to 50% on Salmon. Analysis of video from the study showed that, indeed, confusion over group conflicts, permissions mappings, and override permissions led to most of these failures.

In absolute error rates, Salmon and XPFP were about equal. Video analysis showed that Salmon's two primary improvements did mitigate some of the causes of errors we saw in XPFP, but also introduced some new causes of error. Video analysis revealed weak points in both interfaces' designs that clearly could be improved. However, it seems that simplifying the underlying system model could be the most efficient path to reduced error rates. In particular, our results suggest that disallowing group conflicts, reducing the number of atomic permissions and eliminating composite permissions altogether, and removing override permissions or at least making their properties more salient could significantly reduce file permissions setting errors.


William Yurcik, National Center for Supercomputing Applications (NCSA), University of Illinois at Urbana-Champaign

Title: Better Tools for Security Administration: Enhancing the Human-Computer Interface with Visualization

System administrators are users too! While the focus of security human-computer interaction has so far been on end-users, an important class of users who manage networked systems should not be ignored. In fact, system administrators may have more effect on security than individual users, since they manage larger systems on behalf of users. End-users have become dependent upon the availability of services such as network-attached storage, authentication servers, web servers, and email gateways. These Internet-scale services often have thousands of hardware and software components and require considerable amounts of human effort to plan, configure, install, upgrade, monitor, troubleshoot, and sunset. The complexity of managing these services is alarming: a recent survey of three Internet sites showed that 51% of all failures are caused by operator errors. [1]

Some security administration is automated; however, the actual degree of automation is much lower than many people assume. Human operators are still very much "in the loop", particularly during emergencies. Delay in the human-computer interface can adversely affect system security, so an important goal is to enhance this interface to reduce that delay. Methods are needed to help security operators more quickly extract vital information from large amounts of data and translate this information into effective control actions.

Information visualization can aid in any situation characterized by large amounts of multi-dimensional or rapidly changing data, and it has rapidly emerged as a potent technology for supporting system administrators working with complex systems. The latest generation of visual data-mining tools and animated GUIs take advantage of human perceptual skills to produce striking results - empowering users to perceive important patterns in large data sets, to identify areas that need further scrutiny, and to make sophisticated decisions. But looking at information is only a start. Users also need to manipulate and explore the data, using real-time tools to zoom, filter, and relate the information - and to undo if they make a mistake. In my presentation and discussion I will show successful examples of information visualization for security and hints of what is to come. [2,3] My emphasis will be on examples of computer network intrusion detection, and I will highlight the challenges of providing universally usable interface designs.

For full-text access to NCSA papers in this specific area (13 at present), see http://www.ncassr.org/projects/sift/papers/


Document last modified on May 5, 2004.