Title: Botnet 2.0
Networks of malicious code, also known as botnets, are an organized threat that should not be underestimated. Looking at recent botnets that use P2P command-and-control mechanisms, we show approaches for detecting and countering some of them. The limits are dictated not only by technical means such as encryption, but also by ethical concerns.
Title: Threshold Signatures with Efficient Key Redistribution
In this talk, I will focus on the problem of authenticated communication in dynamic federated environments. Our approach extends the conventional threshold signature paradigm by additionally supporting membership changes in the federated system: while traditional systems split the signature key only among an a priori fixed group, our scheme allows evolving membership by repeatedly and securely (re)distributing key shares from the old set of key-holders to the new set of agents. This is realized without resorting to system re-initialization or relying on a central trusted dealer.
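The (re)distribution step can be illustrated with Shamir secret sharing: each old key-holder re-shares its own share to the new group, and each new agent combines the sub-shares it receives using the Lagrange coefficients of the old holder set. The sketch below is only a toy illustration of that idea over a prime field, not the talk's actual scheme (which concerns signature keys and must also resist active adversaries); all identifiers and parameters are illustrative.

```python
import random

P = 2**127 - 1  # prime field modulus (a Mersenne prime)

def eval_poly(coeffs, x):
    # Horner evaluation mod P; coeffs[0] is the constant term
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def share(secret, t, ids):
    # Shamir sharing: random degree-(t-1) polynomial with f(0) = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return {i: eval_poly(coeffs, i) for i in ids}

def lagrange_at_zero(ids):
    # Lagrange coefficients for interpolating the shares at x = 0
    coef = {}
    for i in ids:
        num, den = 1, 1
        for j in ids:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        coef[i] = num * pow(den, P - 2, P) % P  # modular inverse via Fermat
    return coef

def reconstruct(shares):
    lam = lagrange_at_zero(list(shares))
    return sum(lam[i] * s for i, s in shares.items()) % P

def redistribute(old_shares, t_new, new_ids):
    # Each old holder re-shares its share to the new group; a new
    # member j combines the sub-shares it receives, weighted by the
    # Lagrange coefficients of the *old* holder set. No dealer ever
    # sees the secret, and the old shares are never pooled.
    lam = lagrange_at_zero(list(old_shares))
    sub = {i: share(s, t_new, new_ids) for i, s in old_shares.items()}
    return {j: sum(lam[i] * sub[i][j] for i in old_shares) % P
            for j in new_ids}

secret = 123456789
old = share(secret, 2, [1, 2, 3])                      # 2-of-3 old group
new = redistribute({1: old[1], 2: old[2]}, 3, [11, 12, 13, 14])  # 3-of-4 new group
assert reconstruct({j: new[j] for j in [11, 13, 14]}) == secret
```

Any threshold-sized subset of the old holders suffices to run the redistribution, mirroring the "no re-initialization, no trusted dealer" property the abstract claims.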
Title: Reducing Spam by Not Sending it
This paper introduces an email sending technique, called trustworthy self-regulation (TSR), which enables the receiver of an email message to recognize the sending protocol that generated it. The availability of this sending technique is expected to help induce email users to send messages via spam-immune protocols preferred by their destination users, thus producing less spam.
TSR-based communication involves no text-based filtering, no dependency on blacklists, and no coercion by ISPs or ESPs. And it can be deployed incrementally, as a complement to conventional anti-spam measures, because it involves no changes to the SMTP protocol.
If widely deployed, TSR-based email is expected to result in a significant reduction in spam traffic, without triggering an arms race between spamming and filtering, and without incurring undesirable side effects such as the blocking of valid mail by filters. However, wide usage of TSR over the Internet would require a broad deployment of a trusted middleware called LGI. This is a formidable proposition, whose only chance of being realized lies in the broad range of applications of this middleware, well beyond its potential use for email communication.
Title: Advances in Privacy-Preserving Machine Learning
This talk introduces the problem of privacy-preserving machine learning, and some recent results. The goal of privacy-preserving machine learning is to provide machine learning algorithms that adhere to strong privacy protocols, yet are useful in practice. As increasing amounts of sensitive data are being digitally stored and aggregated, maintaining the privacy of individuals is critical. However, learning cumulative patterns, such as disease risks from medical records, could benefit society. Our work on privacy-preserving machine learning seeks to facilitate a compromise between these two opposing goals by providing general techniques for designing algorithms that learn from private databases while managing the inherent trade-off between privacy and learnability.
I will present a new method for designing privacy-preserving machine learning algorithms. Researchers in the cryptography and information security community [Dwork et al. '06] have shown that if any function learned from a database is randomly perturbed in a certain way, the output respects a very strong privacy definition. However, the amount of perturbation depends on the function, and could render the output ineffectual for machine learning purposes. We introduce a new paradigm: for functions learned via optimization, perturb the optimization problem instead of its solution. It turns out that, for a canonical machine learning algorithm, regularized logistic regression, our new method yields a significantly stronger learning-performance guarantee and demonstrates improved empirical performance over the previous approach, while adhering to the same privacy definition. Our techniques also apply to a broad class of convex loss functions.
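The "perturb the optimization problem, not its solution" idea can be sketched for regularized logistic regression: draw a random vector b once, then minimize the usual objective plus a linear term (b · w)/n. The code below is a simplified, hedged illustration only; the noise calibration and preconditions (e.g. bounded feature norms) follow the published approach in spirit, and all constants, names, and the toy data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def private_logreg(X, y, lam, eps, steps=2000, lr=0.5):
    # Objective perturbation: optimize the regularized logistic loss
    # plus a random linear term (b . w)/n, instead of perturbing the
    # learned w afterwards.
    n, d = X.shape
    # b gets norm ~ Gamma(d, 2/eps) and a uniformly random direction,
    # so its density decays like exp(-(eps/2)||b||)
    norm = rng.gamma(shape=d, scale=2.0 / eps)
    direction = rng.normal(size=d)
    b = norm * direction / np.linalg.norm(direction)
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        grad_loss = -(X * (y * sigmoid(-margins))[:, None]).mean(axis=0)
        w -= lr * (grad_loss + lam * w + b / n)  # gradient of perturbed objective
    return w

# Toy data: labels from a linear rule on 2-D points
X = rng.normal(size=(500, 2))
y = np.sign(X @ np.array([1.0, -1.0]))
w = private_logreg(X, y, lam=0.01, eps=1.0)
accuracy = np.mean(np.sign(X @ w) == y)
```

Because the perturbation is a fixed linear term, the perturbed objective stays convex, so a single random draw followed by ordinary optimization suffices.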
This talk is based on joint work with Kamalika Chaudhuri (UC San Diego).
Title: Security Risk: Research Directions
Given the proliferation of security technologies and processes, many customers increasingly demand that security investments and projects be justified using risk management techniques, and be evaluated with the same yardsticks as other business investments and strategies. Such customers would like security investments to be evaluated using metrics such as ROI, the amount of risk mitigated, and cost-benefit analysis. Although, ideally, security is about risk mitigation, in practice it is very difficult to measure risk mitigation quantitatively. In this talk, I will describe some of the challenges in making security measurable and quantifiable. I will also describe some of the benefits of obtaining even rough estimates of security risk in certain settings. In particular, I will describe some of our recent work on how the ability to quantify security risk could be used to significantly improve access control and critical information sharing.
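As a hedged illustration of the kind of yardstick the abstract mentions, the textbook annualized-loss-expectancy (ALE) and return-on-security-investment (ROSI) formulas can be computed directly; all figures below are invented for illustration and are not from the talk.

```python
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    # Annualized Loss Expectancy: expected yearly loss from one threat
    return single_loss_expectancy * annual_rate_of_occurrence

def rosi(ale_before, ale_after, annual_cost_of_control):
    # Return on Security Investment: risk reduction net of the
    # control's cost, relative to that cost
    benefit = ale_before - ale_after
    return (benefit - annual_cost_of_control) / annual_cost_of_control

# Invented numbers: a breach costing $200k, expected once every 4 years,
# mitigated to once every 20 years by a control costing $20k per year
before = ale(200_000, 0.25)   # $50,000/yr
after = ale(200_000, 0.05)    # $10,000/yr
print(rosi(before, after, 20_000))  # 1.0, i.e. a 100% return
```

The arithmetic is trivial; the talk's point is that the inputs (loss magnitudes and occurrence rates) are exactly what is hard to estimate in practice.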
Title: A Map For Security Science
While today much security research is about defending against the attack du jour, there has been theoretical work in computer security, and there are the beginnings of a science base for security. This talk will discuss the kinds of questions one might expect a science base to address. It will also give examples of how such questions could be answered. Basic concepts in security, such as attack, policy, and enforcement, turn out to be surprisingly subtle to define.
Title: Cryptographic Provenance Verification Approach in Malware Detection With Trusted User Inputs
We present a malware detection approach focusing on the characteristic behaviors of human users. We explore the human-malware differences and utilize them to aid the detection of infected hosts. There are two main research challenges in this study: one is how to select characteristic behavior features, and the other is how to prevent malware forgeries. We address both questions in this paper.
A cryptographic provenance verification technique is described, and two applications of it are demonstrated: keystroke-based bot identification and rootkit traffic detection. Specifically, we first present our design and implementation of a remote authentication framework called TUBA for monitoring a user's typing patterns and verifying their integrity. We evaluate the robustness of TUBA through comprehensive experiments, including two series of simulated bots. We then demonstrate our provenance verification approach by realizing a lightweight framework for blocking outbound rootkit-based malware traffic.
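As a toy illustration of keystroke-based bot identification (not TUBA's actual design, and omitting the cryptographic provenance layer entirely), one can flag input whose inter-keystroke timing shows unnaturally low jitter, since scripted keystroke injection is often near-uniform while human typing is not; the threshold and sample data below are invented.

```python
import statistics

def timing_profile(intervals):
    # Summarize inter-keystroke delays (in seconds) as mean and stdev
    return statistics.mean(intervals), statistics.stdev(intervals)

def looks_like_bot(intervals, min_jitter=0.01):
    # Human typing exhibits substantial timing jitter; flag input
    # whose delay variance is suspiciously low
    _, sd = timing_profile(intervals)
    return sd < min_jitter

human = [0.12, 0.31, 0.08, 0.22, 0.19, 0.27]
bot = [0.100, 0.101, 0.100, 0.099, 0.100, 0.101]
assert not looks_like_bot(human)
assert looks_like_bot(bot)
```

A real system would need many more features and, as the abstract stresses, a way to prevent malware from forging human-like events, which is what the provenance verification supplies.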
1. Ruilin Liu (Stevens)
2. Tom Reynolds (Albany)
3. Brian Thompson (Rutgers)
4. Chehai Wu (Rutgers)
5. Pranav Jadhav (Stony Brook)
6. Arati Baliga (Rutgers)
7. Rimmi Devgan (Stony Brook)
8. Jeffery Bickford (Rutgers)
9. Aaron Jaggard (Rutgers)
10. Yao Chen (Stony Brook)
11. Vivek Pathak (Rutgers)
12. Minnu Tom (Stony Brook)
13. Ashish Anand (Stony Brook)
14. Tuan Phan (Rutgers)
15. Ajay Venkateshan (Stony Brook)
16. Vinod Ganapathy (Rutgers)
17. Borhan Uddin (Polytechnic Institute of NYU)