We consider safety and security from a systems engineering perspective: concepts, models, languages, architectures, components, patterns, tools, methods, and processes for safer and more secure software systems.
One of the focal points of our research is distributed data usage control. Usage control generalizes access control to the future: what happens to data once it has been given away? Relevant requirements include: "delete data after 30 days," "don't delete data within 5 years," "notify me whenever data is accessed," "pictures in my social network profile must not be printed or saved," "no data must leave the system un-anonymized." This is relevant in the areas of data protection, compliance with regulatory frameworks, business processes that are implemented in a distributed way (e.g., via SOAs in the cloud), the general management of intellectual property and secrets and, yes, DRM. The fun part is that requirements of this kind can be enforced at all levels of the software stack: in the CPU, in a virtualized processor, in the OS, in the runtime system, in infrastructure applications such as X11, in application frameworks, services, and business processes. Even better, the topic spans exciting theoretical, conceptual, methodological, economic, and technical challenges. Several demos are available online.
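To make the idea concrete, here is a minimal sketch (our own illustrative example, not an implementation from the research described above) of a reference monitor that enforces two of the requirements mentioned: a retention obligation ("delete data after 30 days") and a notification obligation ("notify me whenever data is accessed"). All class and method names are hypothetical.

```python
import time


class UsageControlMonitor:
    """Hypothetical sketch of a usage-control reference monitor.

    Tracks data items after they have been given away and enforces
    two obligations: time-limited retention and access logging.
    """

    def __init__(self, retention_seconds):
        self.retention_seconds = retention_seconds  # e.g. 30 * 24 * 3600
        self.store = {}      # data_id -> (value, received_at)
        self.audit_log = []  # obligation: "notify me whenever data is accessed"

    def receive(self, data_id, value, now=None):
        """Data enters the controlled domain; remember when we got it."""
        now = time.time() if now is None else now
        self.store[data_id] = (value, now)

    def access(self, data_id, now=None):
        """Mediated access: enforce retention first, then log the access."""
        now = time.time() if now is None else now
        self._enforce_retention(now)
        if data_id not in self.store:
            return None  # already deleted; further usage is denied
        self.audit_log.append((data_id, now))
        return self.store[data_id][0]

    def _enforce_retention(self, now):
        """Obligation: delete data whose retention period has expired."""
        expired = [d for d, (_, received) in self.store.items()
                   if now - received > self.retention_seconds]
        for d in expired:
            del self.store[d]
```

In a real deployment such a monitor would sit at one of the stack levels listed above (OS, runtime, application framework); here it is a plain in-process object purely to illustrate the obligation semantics.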
A second focus is on testing, in particular model-based testing. The idea is to generate tests from a model of the system under test (SUT) and its environment: sequences or trees of inputs and expected outputs. Since the model must be more abstract than the SUT, the different levels of abstraction must be bridged, which usually accounts for as much as 50% of the model-based testing effort. We currently work on property-driven (i.e., not purely structural) and random test case generation as well as on generation mechanisms for bridge components.
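As an illustration of random test case generation from a model (a toy example of ours, not the group's actual tooling), consider a SUT modeled as a Mealy machine: each transition maps an abstract input to a next state and an expected output. A random walk over the model then yields a test case, i.e., a sequence of input/expected-output pairs. The model contents are invented for the example.

```python
import random

# Hypothetical model of a session-handling SUT as a Mealy machine:
# state -> {input: (next_state, expected_output)}
MODEL = {
    "logged_out": {"login_ok":  ("logged_in",  "welcome"),
                   "login_bad": ("logged_out", "error")},
    "logged_in":  {"logout":    ("logged_out", "bye"),
                   "query":     ("logged_in",  "data")},
}


def generate_test(model, start, length, rng):
    """Random walk over the model; each step records the chosen
    abstract input and the output the SUT is expected to produce."""
    state, case = start, []
    for _ in range(length):
        inp = rng.choice(sorted(model[state]))  # sorted for reproducibility
        state, out = model[state][inp]
        case.append((inp, out))
    return case


# Example: a reproducible test case of five steps.
case = generate_test(MODEL, "logged_out", 5, random.Random(0))
```

Note that the inputs here are abstract ("login_ok" rather than a concrete HTTP request with credentials); translating them into concrete SUT stimuli, and concrete SUT responses back into abstract outputs, is exactly the job of the bridge components mentioned above.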