Program in Detail
The preliminary conference program is still subject to change.
Invited Speakers
Have a look at the details on our invited speakers.
Session 1: Authentication and Resilience
Monday, June 15, 2026, 09:15 a.m. - Chair: tba
| Günter Fahrnberger | Multi-Factor Authentication (MFA) for Secure Shell (SSH) Guards Linux Fleets Against Intrusion and Lateral Movement |
| Abstract: Cyberattacks that exploit weak or reused authentication credentials introduce persistent risks to Linux fleets, especially when intruders leverage Secure Shell (SSH) for unauthorized access and lateral movement among fleet members. This disquisition presents an innovative Identity and Access Management (IAM) approach enforcing centralized Multi-Factor Authentication (MFA) for SSH with up to four independent factors. Unlike conventional host-based setups, the proposed design utilizes a centralized OpenLDAP instance, removes local storage of secrets, mitigates password reuse, and minimizes opportunities for attackers to extract sensitive material from compromised nodes. The implementation accommodates regular, emergency, and Machine-to-Machine (M2M) authentication workflows while maintaining usability across operational environments. A subsequent security evaluation shows resilience against Brute Force Attacks (BFAs), buffer overflows, and exposure of authentication assets, yet acknowledges the continuing challenge of complete compromise, which demands behavioral monitoring and anomaly detection. This IAM approach enhances defenses against intrusion and lateral movement in large-scale Linux fleets and lays groundwork for integration with orchestration tools and zero-trust architectures. | |
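The abstract does not spell out which four factors the proposed design combines; a time-based one-time password is one common independent factor in such MFA setups, so as an illustrative sketch only (not the paper's implementation), an RFC 6238 TOTP check can be written with Python's standard library:

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    dynamic truncation (RFC 4226), then modulo 10^digits."""
    counter = struct.pack(">Q", timestamp // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret: bytes, candidate: str, now: int, window: int = 1) -> bool:
    """Accept codes from adjacent time steps to tolerate clock drift."""
    return any(hmac.compare_digest(totp(secret, now + d * 30), candidate)
               for d in range(-window, window + 1))
```

A centralized deployment as described in the abstract would keep the shared secret in the OpenLDAP directory rather than on each node, so a compromised host never holds it locally.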
| Michael Hofmeier, Isabelle Haunschild and Wolfgang Hommel | Individual Trust and Preferences for Identity Management in Private, Professional, and Community Service Contexts |
| Abstract: Digital identity management systems constitute a fundamental building block of modern digital services. While federated identity management is widely deployed due to its usability and administrative efficiency, self-sovereign identity (SSI) has emerged as a privacy-preserving, user-centric alternative. However, the suitability of these paradigms depends strongly on the context in which digital identities operate. In this paper, we investigate users' trust and preferences regarding identity management approaches across private, professional, and community service contexts. We report on the results of an online survey with 876 valid participants conducted in Germany. The study examines technology commitment, preferences for personal data control, and context-dependent trust in self-managed wallets, internal organizational identity providers, and external federated providers. Our results indicate a general tendency toward SSI for personal data control, which correlates with higher technology commitment and age. At the same time, trust assessments clearly differ by context: while SSI is preferred in private and community settings, internal organizational identity management systems are perceived as most trustworthy in professional environments. These findings underline the importance of context-aware identity management designs and suggest that hybrid approaches combining centralized and self-sovereign elements may best align with user expectations. | |
| Dirk Westhoff | Re-visited: Fountain Code Implementation with Resilience Measurement |
| Abstract: We examine fountain codes (LT-codes) and their suitable parameterization based on our own PoC implementation with regard to their resilience. We use benchmark values from Hyytiä and co-authors for comparison. We achieve i) partially better results and, as a new aspect, also offer ii) detailed evaluations for noise and different jamming types. Moreover, our work addresses iii) previously unaddressed aspects which need to be considered for a performant implementation of on-the-fly decoding and how this impacts jamming. We remark that RaptorQ originates from LT-codes, with its inner encoding derived from them. |
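As an illustrative sketch of the LT-code machinery the abstract builds on, the following toy encoder and peeling decoder use integer blocks instead of packets and a made-up degree distribution in place of the robust soliton distribution a real parameterization would use:

```python
import random

def lt_encode(blocks, num_symbols, seed=1):
    """Generate LT-coded symbols: each symbol XORs a random subset of
    source blocks, the subset size drawn from a toy degree distribution."""
    rng = random.Random(seed)
    k = len(blocks)
    symbols = []
    for _ in range(num_symbols):
        degree = rng.choice([1, 2, 2, 3, 4])       # toy, not robust soliton
        idx = rng.sample(range(k), min(degree, k))
        val = 0
        for i in idx:
            val ^= blocks[i]
        symbols.append((frozenset(idx), val))
    return symbols

def lt_decode(symbols, k):
    """Peeling decoder: repeatedly resolve symbols whose pending set has
    shrunk to one block, XOR-ing out already-recovered blocks."""
    symbols = [(set(s), v) for s, v in symbols]
    recovered = {}
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for s, v in symbols:
            pending = s - recovered.keys()
            if len(pending) == 1:
                val = v
                for i in s & recovered.keys():
                    val ^= recovered[i]
                (i,) = pending
                recovered[i] = val
                progress = True
    return [recovered.get(i) for i in range(k)]
```

Decoding succeeds with high probability once slightly more than k symbols arrive; lost or jammed symbols simply reduce the received set, which is the resilience property the paper measures.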
Session 2: Cybersecurity
Monday, June 15, 2026, 01:00 p.m. - Chair: tba
| Jan Biermann, Maximilian Greiner, Karl Seidenfad, Benedikt Roosen and Ulrike Lechner | Aligning Governance and Value Exchange in Permissioned Blockchain Consortia: A Model-Based Study in a Regulated Milk Supply Chain |
| Abstract: The digital transformation of supply chains challenges traditional governance structures in decentralized, multi-stakeholder environments. While blockchain technology enables transparent and tamper-resistant coordination, the design of governance mechanisms that support sustainable economic value exchange in consortium blockchains remains insufficiently understood. This paper examines how governance structures in blockchain-based consortia must be designed to enable viable value exchange and long-term cooperation. Using a Design Science Research approach, governance and value models are developed for a permissioned blockchain consortium in the drinking milk supply chain. The study integrates a systematic literature review, expert interviews, and model-based analysis. Governance structures are modeled using the DECENT ontology, while economic value exchanges are analyzed through the E3 value method. The results show that viable blockchain consortia require a tight alignment between formalized governance mechanisms, incentive structures, and value flows. The paper provides design-oriented insights for researchers and practitioners developing sustainable blockchain-based consortia in regulated supply chain contexts. | |
| Mandy Balthasar, Stefan Fleischmann and Ulrike Lechner | WYSIWYHD: Collaborative Action Between Humans and Machines in Critical Safety and Security Environments |
| Abstract: Effective human-computer collaboration (HCC) in collaborative intrusion detection systems (CIDS) presents challenges related to perception and evaluation in time-critical decision-making under uncertainty. The keys to this are situational awareness and a seamless and reliable connection between the hybrid actors. The effectiveness of these fundamentals depends on whether the human factors influencing the analysis of data and information are adequately taken into account. This raises the question of how facts can be visualized in an optimal CIDS. Within the framework of a user-centered design approach, design elements for a dashboard were identified and developed. The result is a portfolio of design objects that can be used in the development and redesign of human-in-the-loop (HITL) systems in CIDS, particularly in critical security environments. After all, WYSIWYHD applies: What you see is what you have to decide. | |
| Markus Rebhan, Jens Holtmannspötter and Ulrike Lechner | Count2zero, a Serious Escape Room Challenge for Cybersecurity Training |
| Abstract: The digitization of military missions and the networking of critical infrastructures lead to threat situations in cyberspace. Situation reports document that both the number and complexity of attacks on information networks are increasing, along with the pivotal role of the human factor. We propose a serious game to raise awareness of cybersecurity in critical infrastructure and military missions. The serious game "Count2zero" is an escape game; this format is particularly immersive and combines puzzles, teamwork, and time pressure. This article presents the serious escape game Count2zero, its design process, and selected challenges of the escape game. The escape room is located in an air-raid bunker and replicates a realistic cyber-physical environment in which IT infrastructure, IoT components, and networked end devices are combined. The article describes the background and related work, the didactic and technical concepts of Count2zero, and five exemplary modules (a PC with login data, NFC interactions, a laptop with an HID attack mouse, a bedroom scenario with an emergency email, and a locker with adhesive tape), and discusses them with regard to security-relevant behavior. Finally, initial evaluation results are presented, and implications for research and practice of security awareness programs are derived. |
Session 3: Cyberincident Detection and Response
Tuesday, June 16, 2026, 09:00 a.m. - Chair: tba
| Judith Strussenberg, Ulrike Lechner and Mandy Balthasar | The CONTAIN Response Canvas: An Innovative Template for Cyber Incident Response |
| Abstract: Ransomware attacks on mobile devices pose complex challenges that span individual behavior, organizational processes, and technological contexts for both professional and private use. Although cybersecurity training measures, such as serious games, are effective in raising awareness, they often lack mechanisms to transfer experiential knowledge into operational practice. In contrast, established incident response and business continuity frameworks are comprehensive, but typically text-heavy and difficult to apply under time pressure. This article introduces the CONTAIN Response Canvas (CRC), a visual, canvas-based artifact that adapts principles of the Business Model Canvas to ransomware incident response for personal mobile devices. The CRC structures technical, organizational, and communicative response measures in a compact format, supporting both orientation during incidents and structured reflection after training. The article also presents the serious game A Question of Security, which embeds a reference response model that describes an ideal-typical procedure for handling a ransomware incident on a mobile device. This reference response model is part of the game design and was iteratively clarified and refined during 16 game sessions with 96 participants. Phases 2–4 of this model are represented using three phase-specific CONTAIN Response Canvases. The CRC thus serves as a structured visual representation of the reference response model and as a means of transferring the training results into reusable documentation artifacts. Finally, the article outlines how the CRC can be integrated into organizational awareness and training programs to support knowledge transfer, organizational learning, and sustained preparedness beyond the game-based setting. | |
| Christoph Eigner and Günter Fahrnberger | Comparison of the Endpoint Detection and Response (EDR) Solutions From CrowdStrike and SentinelOne |
| Abstract: Endpoint Detection and Response (EDR) solutions serve as a cornerstone of modern cybersecurity, tackling the growing complexity and frequency of cyberattacks that target corporate networks. This paper compares two leading EDR platforms, CrowdStrike and SentinelOne, to evaluate their effectiveness in detecting common attack patterns such as phishing, malware, and ransomware. Furthermore, the study analyzes how each solution handles vulnerabilities including insecure authentication and outdated software. Both products employ advanced technologies such as Artificial Intelligence (AI) and Machine Learning (ML), which strengthen threat detection and response capabilities. The research seeks to deliver organizations actionable insights that guide the selection of an EDR solution matching their specific security requirements. |
Session 4: Artificial Intelligence for Work Processes
Tuesday, June 16, 2026, 12:30 p.m. - Chair: tba
| Karl-Heinz Lüke and Gerald Eichler | Application Scenarios for the Examination of the Impact of Artificial Intelligence in Industrial Procurement 4.0: Challenges and Processes |
| Abstract: Artificial intelligence (AI) is undoubtedly a critical technology that has been widely adopted in various fields, especially in business, art, and science. AI algorithms perform tasks that require human-like intelligence. These tasks include machine learning, deep learning, and complex decision-making processes. AI applications are used across all industries, particularly in industrial procurement 4.0, which is derived from logistics 4.0. This area is important for optimizing processes. An empirical analysis of AI use cases in industrial procurement shows that supplier selection, supplier performance measurement, and negotiation technologies are among the most frequently used and effective applications. | |
| Siniša Nešković and Kathrin Kirchner | Extending AI-Assisted Software Development Beyond Vibe Coding |
| Abstract: The adoption of Generative AI (GenAI) in software development is currently dominated by the discussion of “vibe coding,” in which developers describe the desired functionality in natural language and let large language models generate the corresponding code with minimal review. This describes a fundamental shift in how software is developed and in the competencies necessary for a software developer. Vibe coding reduces the need for manual coding, leaving the software developer to focus on high-level design and code evaluation. While effective for prototypes, this approach is insufficient for complex software systems, where development spans analysis, architecture, design, testing, and deployment, and where every phase requires human understanding and evaluation. In this position paper, we argue that AI-assisted development of complex systems requires methodology encoding: the systematic translation of a project’s adopted methodology (architecture, patterns, workflows, standards, and roles) into AI tool configuration. This creates a configured agentic development environment that enables consistent AI assistance across the entire software development lifecycle. We define a new professional role, the Agentic Architect, who is responsible for designing this environment. We illustrate the concept using Claude Code and discuss implications for developer competencies and organizational roles. | |
| Burak Toptas, Richard Lenz and Rainer Groß | Toward GenAI-Based Contextual Metadata Generation for Reusability: Evaluating Metadata Extraction from Scientific Publications |
| Abstract: Metadata is crucial to improve the findability and reusability of research outputs, thereby contributing to a more efficient use of research resources. Their effectiveness increases as metadata goes beyond purely descriptive elements and captures richer contextual information. However, creating such context-rich metadata remains a time-consuming task for researchers. Recent advances in Large Language Models (LLMs) offer the potential to provide scalable, automated metadata annotation to support research communities. Therefore, we pursue the development of an LLM-based metadata annotation service that automatically generates structured metadata, aimed at reducing documentation effort for researchers while improving findability and reusability. The deployment of such a service, however, raises critical quality concerns. Manual metadata annotations are commonly treated as ground truth, despite being selective, potentially incomplete, and influenced by interpretation. Conversely, LLM-generated metadata may introduce additional uncertainties, including hallucinated metadata elements. To address this challenge, we conduct a comparative evaluation of metadata annotations generated by human annotators and one LLM (ChatGPT-5.2) from 22 scientific publications using an identical structured schema. Annotation quality is assessed along two complementary dimensions: completeness, capturing coverage of publication-supported metadata elements, and semantic correctness, measuring the degree to which annotated metadata values are textually grounded. Our analysis indicates that LLM-based annotations achieve higher completeness across most metadata categories, particularly for contextual and process-oriented elements that are frequently omitted in manual annotations. Importantly, although the higher completeness of LLM-based annotations might intuitively suggest lower semantic correctness and an increased risk of hallucinated metadata elements, our experimental results do not support this assumption. By operationalizing metadata quality through explicit dimensions, this work provides measurable criteria for evaluating automated metadata annotations. |
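The two quality dimensions named in the abstract can be operationalized in many ways; a deliberately simple set-based sketch (exact string matching stands in for whatever grounding measure the paper actually uses) looks like this:

```python
def completeness(annotated: set, supported: set) -> float:
    """Fraction of publication-supported metadata elements
    that the annotator actually filled in."""
    return len(annotated & supported) / len(supported) if supported else 1.0

def semantic_correctness(values: dict, source_text: str) -> float:
    """Toy grounding check: share of annotated values appearing verbatim
    in the source text (real systems would use softer semantic matching)."""
    if not values:
        return 1.0
    grounded = sum(1 for v in values.values() if v.lower() in source_text.lower())
    return grounded / len(values)
```

A hallucinated metadata element would lower semantic correctness without affecting completeness, which is why the two dimensions are complementary.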
Session 5: Artificial Intelligence for Decision Support
Tuesday, June 16, 2026, 02:30 p.m. - Chair: tba
| Andreas Fink | Neural Proposal Generation for Evolutionary Negotiation in Project Portfolio Selection |
| Abstract: Decisions about the provision of community services and infrastructure often involve multiple stakeholders with heterogeneous preferences. We formulate these problems as multi-agent, multi-constraint project portfolio selection tasks with private stakeholder utilities and propose an evolutionary negotiation mechanism for participatory coordination over feasible portfolios. In this approach, stakeholders are represented by software agents that negotiate over candidate solutions within a population-based search process. Proposal generation relies either on classical evolutionary variation operators or on learning-based strategies. Specifically, we iteratively retrain autoencoders and variational autoencoders on the evolving solution population to learn latent structural patterns induced by stakeholder preferences and resource constraints. New proposals are generated by recombining latent codes and decoding the resulting representations into candidate solutions. Computational experiments indicate that the learning-based proposal strategies produce solutions near the Pareto front, with variational autoencoders achieving the best performance in the studied scenarios. | |
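The latent-recombination step the abstract describes can be sketched as follows; a random linear map and its pseudo-inverse stand in for the trained (variational) autoencoder, so this shows only the mechanics of blending parent portfolios in latent space, not the paper's learned model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a trained autoencoder over 0/1 portfolio vectors
# (illustrative only): a random linear "encoder" and pseudo-inverse "decoder".
D, Z = 12, 4                      # portfolio size, latent dimension
W = rng.standard_normal((Z, D))   # encoder weights
V = np.linalg.pinv(W)             # decoder weights

def encode(x):
    return W @ x

def decode(z):
    # Threshold the reconstruction back to a 0/1 project selection.
    return (V @ z > 0.5).astype(int)

def latent_crossover(x1, x2, alpha=0.5):
    """Recombine two parent portfolios in latent space and decode
    the blend into a new candidate solution."""
    z = alpha * encode(x1) + (1 - alpha) * encode(x2)
    return decode(z)
```

In the evolutionary negotiation loop, children produced this way would still be checked against resource constraints and evaluated by the stakeholder agents before entering the population.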
| Wesley Preßler, Patrick Seidel and Steffen Späthe | LLM-Supported Excerpt-Level Maturity Assessment: A Conceptual and Technical Proof of Concept |
| Abstract: Maturity models are widely used as instruments for assessing the current state of transformations within organizations. However, traditional assessment approaches are often resource-intensive, subjective, and difficult to scale across domains. Recent advances in large language models (LLMs) and structured reasoning techniques, such as Chain of Thought (CoT) prompting, offer new possibilities for automating and augmenting such assessments. Despite this potential, systematic approaches that leverage LLMs for maturity level determination remain underexplored. In this paper, we propose a concept for LLM-based maturity assessment grounded in a custom-developed maturity model. We design and evaluate a two-stage proof-of-concept pipeline in which a BERT-based encoder performs category-level classification, followed by a fine-tuned LLM decoder, adapted using LoRA+ and Parameter-Efficient Fine-Tuning (PEFT), that generates structured Chain of Thought reasoning sequences to derive maturity ratings and also explains the determination of the maturity level. We further critically examine the conditions under which such a system can and should be deployed, addressing the question of whether full automation is appropriate or whether hybrid human-AI oversight models are preferable. Our results demonstrate the technical feasibility of the proposed approach while surfacing important limitations regarding data quality. We argue that CoT-supported reasoning can improve the verifiability of model outputs for human evaluators and can serve as a key mechanism for calibrating confidence in partially or even fully automated maturity assessments. This work constitutes an initial research contribution that fundamentally addresses the application of LLMs to the determination of maturity levels. |
Session 6: Smart Urban Infrastructure
Wednesday, June 17, 2026, 09:00 a.m. - Chair: tba
| Michal Hodoň, Peter Ševčík, Matúš Formanek and Peter Šarafín | Distributed Supercapacitor-Based Charging Infrastructure for Urban Micromobility Integrated into Public Lighting Poles |
| Abstract: This paper investigates the integration of supercapacitor-based energy storage directly into public street lighting poles to enable distributed charging of urban micromobility platforms. The proposed architecture transforms existing lighting infrastructure into spatially distributed energy nodes that accumulate energy at low, feeder-friendly power levels and deliver short-duration charging pulses to electric scooters, e-bikes, and small electric vehicles. Based on realistic volumetric and gravimetric constraints of standard lighting poles, the achievable supercapacitor storage capacity per pole is estimated to be approximately 0.1 – 1 kWh using current technology. While this capacity represents only incremental range extension for passenger electric vehicles, it is sufficient for full or near-full recharging of many urban micromobility devices within a single charging session. The operating principle relies on temporal decoupling between grid energy intake and charging delivery. Energy is gradually stored from the distribution grid, surplus photovoltaic sources, or adaptive lighting dimming, and subsequently released as short charging pulses in the 3–8 kW range. This approach increases charging point density without imposing high instantaneous loads on the distribution network. The paper presents storage dimensioning methodology, energy-to-range analysis for different mobility classes, and power-flow considerations relevant to distributed urban deployment. The results indicate that supercapacitor-integrated lighting poles represent a technically feasible and grid-friendly pathway toward scalable micromobility charging infrastructure in dense urban environments. | |
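The energy-to-range arithmetic behind the abstract's figures is straightforward; the scooter consumption value below is an assumed illustrative figure, not taken from the paper:

```python
def charge_time_s(energy_kwh: float, pulse_kw: float) -> float:
    """Time (seconds) to deliver the stored energy at a given pulse power."""
    return energy_kwh / pulse_kw * 3600.0

def range_km(energy_kwh: float, consumption_wh_per_km: float) -> float:
    """Range gained from the delivered energy at a given consumption."""
    return energy_kwh * 1000.0 / consumption_wh_per_km

# A mid-range pole (0.5 kWh of the paper's 0.1 - 1 kWh estimate) emptied
# through a 5 kW pulse takes 6 minutes and, at an assumed 15 Wh/km for an
# e-scooter, yields roughly 33 km of range - a full urban recharge.
```

The same numbers make clear why the capacity is only incremental for a passenger EV (150-200 Wh/km would yield just 2-3 km).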
| Leendert W. M. Wienhofen, Audun Vennesland and Soudabeh Khodambashi | From Fragmentation to Federation: Implementing a Single-Window Municipal Data Request and Delivery Pipeline |
| Abstract: Municipalities are increasingly expected to function as data-driven organizations while simultaneously providing access to data for research, innovation, and the creation of public value. Despite having large volumes of potentially high-value data, municipalities face significant barriers to data sharing due to legal fragmentation, vendor lock-in, and limited organizational capacity. This position paper presents the MUNDAT use case from the EU-funded DataPACT project, in which Trondheim Municipality, together with technology providers, explores how a federated single-window data request and delivery pipeline can address these challenges. Today, municipal data requests are often handled ad-hoc and are labor-intensive, leading to inconsistent compliance assessments, high resource consumption, and increased legal risk. MUNDAT reframes the challenge not as a lack of data, but as a lack of an efficient infrastructure between municipal operations and regulatory requirements. The paper examines two contrasting data sharing cases: HR sick leave data, where legal and ethical constraints dominate, and infrastructure IoT data, where vendor lock-in, security, and critical infrastructure concerns prevail. By combining automated legal assessment, policy enforcement, and technical data pipelines, the proposed single-window approach aims to provide a lawful, scalable, and reusable data sharing infrastructure, while supporting consistent, transparent, and auditable compliance with applicable legal and ethical requirements. | |
| Sabrina Hölzer, Lucie Schmidt, Wesley Preßler and Christian Erfurth | Technology-Supported Living and Aging in Place in a Smart Neighborhood: A Multi-Level Qualitative Analysis of Success Factors and Barriers |
| Abstract: Population aging and the growing desire for independence in later life increase pressure on health and social care systems and heighten the need for socio-technical solutions that enable healthy living and aging in place. While Ambient Assisted Living (AAL), smart-home technologies, and digital health services are widely discussed as promising approaches, empirical evidence remains largely confined to pilot settings and often reflects only a single stakeholder perspective. Addressing this gap, this paper examines the implementation of socio-technical solutions in an inhabited smart neighborhood. It investigates (RQ1) which success factors and barriers different stakeholder groups identify regarding implementation of socio-technical solutions in a smart neighborhood and (RQ2) how do these success factors and barriers differ across stakeholder groups. Data from semi-structured interviews, focus groups and document analysis were examined using thematic analysis. Results reveal a clear cross-level asymmetry. Macro-level actors emphasize innovation, transferability, and system transformation. Micro-level actors engage pragmatically, prioritizing usability, perceived usefulness, and privacy. Meso-level actors perform the translation work between these domains. Facilitation, spatial infrastructures, and on-site support prove more decisive for stabilizing use than technological sophistication alone. Success factors at one level frequently coexist with barriers at another. By conceptualizing the smart neighborhood as a socio-technical system shaped by cross-level tensions, the study challenges technology-centric narratives of digital aging and highlights the central role of meso-level alignment in bridging strategic ambition and lived experience. |
Session 7: Distributed Systems
Wednesday, June 17, 2026, 01:00 p.m. - Chair: tba
| Peter Šarafín, Michal Hodoň, Lukáš Formanek and Matúš Formanek | A Web Platform for Multi-Sensor Monitoring and Analysis of Large-Area Glass Surfaces |
| Abstract: Monitoring large-area glass elements (e.g., window panes and facade panels) benefits from distributed sensing and reproducible post-processing workflows. This paper presents a prototype platform that integrates ESP32-based sensor nodes, an HTTP ingestion service, a time series database, and a web application for interactive exploration of heterogeneous measurements collected on a spatial sensor grid. The system supports time-stamped ingestion of mixed sensor modalities with different sampling periods, including both slow environmental variables (e.g., temperature, humidity, illuminance) and faster structural-response signals (e.g., strain, displacement, acceleration-derived metrics). The user interface provides synchronized time series views, spatial heatmaps, time-window selection, configurable filtering, and dataset export with processing provenance. The platform is validated using simulated signals and demonstrated on glass dynamic property measurements acquired from a grid-mounted sensor setup. | |
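Mixing slow environmental variables with faster structural-response signals, as the platform does, requires aligning streams with different sampling periods; a minimal nearest-timestamp join (a sketch, not the platform's actual pipeline) can be built on the standard library's bisect module:

```python
from bisect import bisect_left

def nearest_sample(timestamps, values, t):
    """Return the value whose timestamp is closest to t
    (timestamps must be sorted ascending)."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return values[0]
    if i == len(timestamps):
        return values[-1]
    before, after = timestamps[i - 1], timestamps[i]
    return values[i] if after - t < t - before else values[i - 1]

def align(fast_ts, fast_vals, slow_ts, slow_vals):
    """Annotate each fast structural-response sample with the nearest
    slow environmental reading, yielding synchronized tuples."""
    return [(t, v, nearest_sample(slow_ts, slow_vals, t))
            for t, v in zip(fast_ts, fast_vals)]
```

For export with processing provenance, each aligned tuple would additionally carry the identifiers of the source series and the alignment rule applied.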
| Ramzi Boutahala, Cyril Rabat and Hacène Fouchal | Distributed Cluster-Based Scheme for Priority-Aware Management of Road Intersections |
| Abstract: Cooperative Intelligent Transport Systems (C-ITS) provide the communication framework supporting Cooperative and Connected Automated Mobility (CCAM) via standardized Vehicle-to-Everything (V2X) message exchanges. Complex traffic environments require real-time coordination among vehicles to ensure safety, fairness, and smooth traffic flow, particularly in the absence of centralized control or fixed roadside infrastructure such as traffic lights or Roadside Units (RSUs). This paper proposes a Cluster-based Priority-Aware (CPA) coordination mechanism for intersection management. In the proposed scheme, vehicles approaching an intersection form a temporary cluster, within which a Cluster Head (CH) is dynamically elected to compute and broadcast a priority-based crossing schedule. | |
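The cluster head's schedule computation can be illustrated with a toy policy (the paper's exact CPA policy is not reproduced here): order vehicles by priority, then by estimated arrival, and assign crossing slots separated by a safety headway:

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: str
    priority: int      # higher value = higher priority (e.g. an ambulance)
    eta_s: float       # estimated arrival time at the intersection

def crossing_schedule(cluster, headway_s=2.0):
    """Toy cluster-head scheduler: sort by descending priority, then ETA,
    and assign crossing slots separated by a safety headway."""
    order = sorted(cluster, key=lambda v: (-v.priority, v.eta_s))
    schedule, slot = [], 0.0
    for v in order:
        slot = max(slot, v.eta_s)          # cannot cross before arriving
        schedule.append((v.vid, slot))
        slot += headway_s
    return schedule
```

In the V2X setting, the elected CH would broadcast this schedule in standardized messages and recompute it as vehicles join or leave the temporary cluster.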
| Attila Papp and Udo Bub | Data Catalogs in Data Mesh and Data Space Implementations |
| Abstract: Data mesh and data spaces are emergent paradigms for scalable data sharing within and across organizations. Despite different motivations and trust assumptions, both rely on data catalogs to provide visibility and discovery of data products. This paper studies data catalogs as a convergence point between the two paradigms. First, we analyze data access and usage policy placement by distinguishing policy source of truth, policy binding, and policy enforcement. We show that in data meshes, catalogs can serve as a single source of truth for policy bindings and enable governance automation through workflow-driven or reconciliation-based approaches integrated with enterprise platforms, whereas in data spaces, catalogs primarily support discovery and contract initiation, with enforcement remaining boundary- and contract-driven at participant connectors. Second, we examine the feasibility of modeling catalogs as knowledge graphs to represent relational metadata, improve discovery, enable semantic interoperability, and support context-aware classification with conservative propagation along lineage. The results clarify how catalogs, policies, and enforcement components align in each paradigm and provide a basis for future design science guidelines for data-sharing system design. |
