Program in Detail


The program is still subject to change.


Invited Speakers

Have a look at the details on our invited speakers.


Session 1: Recognition and Verification

Wednesday, June 11, 2025, 10:30 a.m. - Chair: N.N.

Günter Fahrnberger: A Pluggable Authentication Module for E-Mail as a Secure Additional Authentication Factor
Abstract: Anti-hammering mechanisms frequently struggle to manage both Brute Force Attacks (BFAs) and Denial of Service (DoS) attacks effectively, highlighting the need for robust safeguards to counter credential guessing and account lockouts. A pluggable authentication module for e-mail as an additional authentication factor may offer a practical solution but fails to provide an advantage when the same e-mail address resets the primary authentication factor. A thorough literature review reveals no existing module supporting a secondary e-mail address. This technical documentation presents a Lightweight Directory Access Protocol (LDAP)-dependent prototype implemented on a standard Linux Operating System (OS). Each Multi-Factor Authentication (MFA) solution faces inherent vulnerabilities. Therefore, comprehensive threat modeling identifies nine categories of potential weaknesses in the new module, necessitating careful evaluation during deployment.
Bhimendra Dewangan, M Srinivas and Rbv Subramanyam: TRYOLO: A Transformer Based Real-Time Object Detection Model for UAV Images
Abstract: Object detection is a fundamental challenge in computer vision; recognizing small objects remains difficult because of their limited pixel representation, size changes, and background noise. In order to tackle these problems, we present TRYOLO, an improved real-time object detection model designed primarily for small object recognition and based on YOLOv11. Two important architectural improvements are included in our model: the DeepFocus Block, which enhances spatial feature extraction by adding more convolutions and residual connections, and the C3XFormer Block, which uses positional embeddings and multi-head self-attention to capture global dependencies and improve contextual understanding. These improvements significantly enhance the accuracy of feature representation and detection, especially for small objects in complicated situations. TRYOLO delivers state-of-the-art performance when we test it on two benchmark datasets, VisDrone-DET2019 and GlobalWheat2020. The validation performance metric shows a +2.4% gain in mAP on VisDrone-DET2019 and a +1.4% improvement in mAP on GlobalWheat2020.
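The self-attention mechanism that blocks like the C3XFormer build on can be sketched in plain Python. This is the generic scaled dot-product formulation, not the authors' implementation; the `attention` helper and its inputs are illustrative:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Scaled dot-product attention: each output row is a weighted mix of
    the value vectors, weighted by query-key similarity."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

Multi-head self-attention runs several such maps in parallel over learned projections; adding positional embeddings to the inputs, as the abstract describes, lets the attention scores reflect token positions as well as content.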
Patrick Seidel and Preßler Wesley: Automated Text Classification in Maturity Models Using Transformer Architectures: An Encoder-Based Approach
Abstract: In recent years, progress in AI and NLP technology has significantly increased. Researchers are exploring various applications of this technology to boost process efficiency. This study examines how encoder-based transformer models can be integrated into sociological maturity models, where assigning interview passages to categories is still largely manual, making it prone to errors and time-consuming. In this paper, we describe how four transformer models were used to assign interview passages to the corresponding categories of a maturity model. Two out of four models achieve excellent results, and the leading model correctly assigns 22 out of 23 inputs to a specific class. In this way, transformer models offer an effective method to improve the efficiency of processes in maturity models without compromising the quality of classification. The continuation of the research involves adding additional categories and training the corresponding models to ultimately determine the maturity level using a transformer model.

Back to Program Overview, Top


Session 2: Computational Intelligence

Wednesday, June 11, 2025, 12:45 p.m. - Chair: N.N.

Tobias Rohe, Maximilian Balthasar Mansky, Michael Kölle, Jonas Stein, Leo Sünkel and Claudia Linnhoff-Popien: Accelerated VQE: Parameter Recycling for Similar Recurring Problem Instances
Abstract: Training the Variational Quantum Eigensolver (VQE) is a task that requires substantial compute. We propose the use of concepts from transfer learning to considerably reduce the training time when solving similar problem instances. We demonstrate that its utilization leads to accelerated convergence and provides a similar quality of results compared to circuits with parameters initialized around zero. Further, transfer learning works better when the source solution is close to the target solution. Based on these findings, we present an accelerated VQE approach tested on the MaxCut problem with a problem size of 12 nodes, solved with two different circuits. We compare our results against a random baseline and circuits trained without transfer learning. Our experiments demonstrate that transfer learning can reduce training time by around 93% in post-training, relative to identical circuits without the use of transfer learning. The accelerated VQE approach beats the standard approach by seven and nine percentage points, respectively, in terms of solution quality when early stopping is considered. In settings where time-to-solution or computational costs are critical, this approach provides a significant advantage, offering an improved trade-off between training effort and solution quality.
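The parameter-recycling idea can be illustrated outside the quantum setting: optimize one instance, then reuse the optimized parameters to warm-start a similar instance. A minimal sketch with toy cost landscapes standing in for the VQE objective (the optimizer and all names are illustrative, not the paper's setup):

```python
def optimize(cost, theta, lr=0.3, tol=1e-6, max_iter=10_000):
    """Plain gradient descent with a central finite-difference gradient;
    returns (optimized parameters, iterations until convergence)."""
    eps = 1e-5
    for it in range(max_iter):
        grad = []
        for i in range(len(theta)):
            up = theta[:]; up[i] += eps
            dn = theta[:]; dn[i] -= eps
            grad.append((cost(up) - cost(dn)) / (2 * eps))
        new = [t - lr * g for t, g in zip(theta, grad)]
        if max(abs(a - b) for a, b in zip(new, theta)) < tol:
            return new, it
        theta = new
    return theta, max_iter

# Two "similar problem instances": quadratic landscapes with nearby minima.
source = lambda th: (th[0] - 1.0) ** 2 + (th[1] + 0.5) ** 2
target = lambda th: (th[0] - 1.1) ** 2 + (th[1] + 0.4) ** 2

cold_start = [0.0, 0.0]
theta_src, _ = optimize(source, cold_start)   # train on the source instance
_, iters_cold = optimize(target, cold_start)  # baseline: initialize near zero
_, iters_warm = optimize(target, theta_src)   # recycled source parameters
```

Because the target minimum lies close to the source minimum, the warm start converges in fewer iterations, which is the effect the abstract reports for similar recurring instances.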
Frank Phillipson: Fair Benchmarking Combinatorial Optimization Solvers in the Era of Emerging Computing Paradigms
Abstract: As computational needs expand, new computing paradigms such as GPUs, FPGAs, high-performance computing clusters, digital annealers, neuromorphic computing systems, and quantum computers are emerging to complement traditional CPU-based computing models. Each paradigm offers unique capabilities for combinatorial optimization, a field concerned with finding the best solution from a finite set of possibilities. This paper addresses the challenge of fairly benchmarking the performance of combinatorial optimization solvers across these diverse paradigms. We propose a holistic approach to benchmarking that includes recommendations for fair comparisons and the introduction of new metrics. Our findings highlight the need for clear and equitable comparison criteria, particularly when contrasting digital and analogue platforms or different algorithm classes.
Karl-Heinz Lüke, Gerald Eichler and Denis Royer: Artificial Intelligence Application Scenarios Considering Objective and Subjective Influence Factors for Industrial Solutions in Supply Chain Management
Abstract: The use and significance of Artificial Intelligence (AI) are widely discussed across various fields, including business and science. AI, a key branch of computer science, enables algorithms to perform tasks that traditionally require human intelligence, such as machine learning, deep learning, and decision-making. AI applications span numerous industries, with Supply Chain Management (SCM) being a particularly relevant domain. AI technologies optimize supply chains by improving inventory planning, demand forecasting, and overall operational efficiency. This includes the planning, management, and control of goods, information, and financial flows along the entire supply chain. An empirical survey of AI use cases in SCM highlights demand forecasting, supply chain traceability and quality management, and inventory management as the most widely accepted and impactful applications.

Back to Program Overview, Top


Session 3: Data Processing

Wednesday, June 11, 2025, 2:30 p.m. - Chair: N.N.

Stefan Linecker, Felix Strohmeier, Christof Brandauer and Peter Dorfinger: Real-time Energy Data Aggregation for Energy Communities
Abstract: The number of energy communities in Austria is growing rapidly. Their operation primarily relies on standardized processes for data collection, communication, and billing. Beyond collective accounting within the community, further optimizations—such as maximizing self-consumption and minimizing grid feed-in—are severely limited by the lack of real-time data. Accessing real-time energy data directly via the smart meter customer interface and aggregating it at the community level enables more precise load balancing and better utilization of renewable energy sources. This paper presents the Community Aggregation Tool, a modular software solution designed to collect, harmonize, and aggregate real-time energy data from diverse metering systems and protocols. The tool integrates multiple data sources and leverages existing open-source solutions for its implementation. Initial deployments in selected energy communities demonstrate its potential to address key challenges, such as limited data granularity and the heterogeneity of household devices.
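Community-level aggregation of the kind described can be sketched simply: sum per-household generation and load for one interval and derive self-consumption and grid exchange. The field names and the netting rule below are illustrative assumptions, not the Community Aggregation Tool's actual schema:

```python
def aggregate(readings):
    """Aggregate per-household smart-meter readings (all in kW) for one
    interval and derive community-level figures. Each reading carries
    'generation' (e.g. PV output) and 'consumption' (household load)."""
    gen = sum(r["generation"] for r in readings)
    con = sum(r["consumption"] for r in readings)
    self_consumed = min(gen, con)       # energy used inside the community
    grid_feed_in = max(gen - con, 0.0)  # surplus exported to the grid
    grid_import = max(con - gen, 0.0)   # deficit covered by the grid
    return {"self_consumed": self_consumed,
            "feed_in": grid_feed_in,
            "import": grid_import}

community = [
    {"generation": 3.2, "consumption": 1.1},
    {"generation": 0.0, "consumption": 2.4},
    {"generation": 1.5, "consumption": 0.8},
]
totals = aggregate(community)  # gen 4.7 kW, load 4.3 kW: ~0.4 kW fed in
```

Maximizing `self_consumed` while minimizing `feed_in` is exactly the optimization the abstract says is impossible without real-time data at community granularity.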
 On Self-Improving Token Embeddings
Abstract: This article introduces a novel and fast method for refining pre-trained static word or, more generally, token embeddings. By incorporating the embeddings of neighboring tokens in text corpora, it continuously updates the representation of each token, including those without pre-assigned embeddings. This approach effectively addresses the out-of-vocabulary problem, too. Operating independently of large language models and shallow neural networks, it enables versatile applications such as corpus exploration, conceptual search, and word sense disambiguation. The method is designed to enhance token representations within topically homogeneous corpora, where the vocabulary is restricted to a specific domain, resulting in more meaningful embeddings compared to general-purpose pre-trained vectors. As an example, the methodology is applied to explore storm events and their impacts on infrastructure and communities using narratives from a subset of the NOAA Storm Events database. The article also demonstrates how the approach improves the representation of storm-related terms over time, providing valuable insights into the evolving nature of disaster narratives.
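The core update, blending each token's vector with the mean of its neighbours' vectors, can be sketched as follows. The window size, blending factor, and zero-vector initialization for unknown tokens are illustrative assumptions, not the paper's exact formulation:

```python
from collections import defaultdict

def refine_embeddings(corpus, emb, dim=2, window=1, alpha=0.5, passes=3):
    """Iteratively blend each token's vector with the mean of its
    neighbours' vectors. Tokens absent from `emb` start at the zero
    vector, so out-of-vocabulary words acquire embeddings from context."""
    emb = {t: list(v) for t, v in emb.items()}
    for _ in range(passes):
        ctx_sum = defaultdict(lambda: [0.0] * dim)
        ctx_cnt = defaultdict(int)
        for sent in corpus:                      # accumulate context vectors
            for i, tok in enumerate(sent):
                for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                    if j == i:
                        continue
                    nb = emb.get(sent[j], [0.0] * dim)
                    acc = ctx_sum[tok]
                    for k in range(dim):
                        acc[k] += nb[k]
                    ctx_cnt[tok] += 1
        for tok, cnt in ctx_cnt.items():         # blend old vector with context mean
            old = emb.get(tok, [0.0] * dim)
            mean = [s / cnt for s in ctx_sum[tok]]
            emb[tok] = [(1 - alpha) * o + alpha * m for o, m in zip(old, mean)]
    return emb
```

With a corpus like `[["storm", "damage"], ["storm", "flooding"]]` and pre-trained vectors only for "damage" and "flooding", the out-of-vocabulary token "storm" drifts toward its neighbours' region of the embedding space, which is the mechanism the abstract uses for domain-specific terms.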
Attila Papp, Udo Bub, Viivi Lähteenoja, Kai Kuikkaniemi, Marko Turpeinen and Sami Jokela: Data Mesh and Data Space: a Comparative Analysis with a Focus on Governance
Abstract: In this paper, we describe and compare the emergent data mesh and data space paradigms. These socio-technical approaches aim to facilitate data sharing, but each has a unique governance structure. We rely on design science research methodologies, including a structured literature review subsequently complemented by expert interviews, to highlight their idiosyncrasies, synergies, and differences. While data mesh primarily focuses on domain-oriented decentralization within a single organization, data space, by contrast, addresses interorganizational data exchange, emphasizing data sovereignty. In this comparative analysis, we show how each approach offers distinct strategies for addressing data-sharing challenges, supporting their potential to converge and complement each other. These insights inform practitioners and researchers about best-fit scenarios, guiding adoption decisions. Ultimately, we propose a foundation for future studies to investigate the governance models further, elaborate on emerging convergences, explore technical connection points, and refine guidelines for choosing the most effective paradigm.

Back to Program Overview, Top


Session 4: Quantum Computing

Thursday, June 12, 2025, 8:30 a.m. - Chair: N.N.

Orin Pechler and Frank Phillipson: Toward Quantum Annealing for Multi-League Sports Scheduling
Abstract: This paper introduces the use of quantum annealing for the Multi-League Scheduling Problem, under the main assumption that all leagues contain the same even number of teams. In this problem, a schedule of matches has to be found for several leagues consisting of multiple teams and clubs, a particularly relevant issue in amateur and youth sports. For this scheduling problem, the main goal is to develop a so-called QUBO formulation, which is the main type of formulation for a quantum annealer. Four different techniques are used to develop such QUBOs. These are then solved for various instances using D-Wave’s current Advantage System. The technique called domain-wall encoding is found to outperform the other three implemented techniques in terms of solution quality, providing empirical support for this approach. However, this technique also has the highest running time, whereas the relatively new technique called unbalanced penalization achieves the lowest running time, with a solution quality that is only marginally worse than that of domain-wall encoding. Although currently quantum annealing does not perform as well as the classical approaches, it is expected that in the future quantum computers will become a superior alternative.
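Domain-wall encoding, the best-performing technique here, stores a discrete variable with D values in D-1 binary variables constrained to the monotone pattern 1...10...0, with the decoded value given by the position of the 1-to-0 "wall". A sketch of the standard decode and penalty terms (the paper's exact QUBO construction may differ):

```python
def domain_wall_decode(bits):
    """Decode a domain-wall-encoded variable: for a valid monotone
    string 1...10...0, the value is the number of leading ones."""
    return sum(bits)

def domain_wall_penalty(bits, strength=1.0):
    """Quadratic penalty that is zero iff the string is monotone
    non-increasing; this is the constraint term a QUBO formulation
    adds for each encoded variable."""
    return strength * sum(b2 * (1 - b1) for b1, b2 in zip(bits, bits[1:]))
```

For example, `[1, 1, 0]` is a valid encoding of the value 2 (penalty 0), while `[0, 1, 0]` violates monotonicity and is penalized, steering the annealer toward valid encodings.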
Rares Adrian Oancea, Stan Van Der Linde, Willem de Kok, Matthia Sabatelli and Sebastian Feld: Optimizing Initial Qubit Mappings under Fixed Gate Error Rates Using Deep Reinforcement Learning
Abstract: Quantum computing promises to execute some tasks exponentially faster than classical computers. Quantum compilation, which transforms algorithms into executable quantum circuits, involves solving the initial mapping problem, crucial for optimizing qubit assignment and minimizing gate error rates. This study explores Deep Reinforcement Learning (DRL) for initial mapping across various qubit topologies, considering fixed gate error rates. Previous DRL approaches have succeeded but did not account for fixed error rates, used only one algorithm (PPO), and focused on a single topology with 20 qubits. The trial-and-error nature of Reinforcement Learning makes it ideal for initial mapping. DRL agents, using multiple policy gradient algorithms (A2C, PPO with and without action masking, and TRPO), compute high-quality mappings for small- and medium-scale quantum architectures. While effective, their efficiency decreases with larger systems, necessitating further optimization. Fine-tuning hyperparameters and action masking prevent illegal actions and enhance accuracy. Although currently not surpassing tools like Qiskit or achieving scalability for larger systems, this study highlights DRL's potential for initial mapping in quantum computing, encouraging further innovation and refinement.
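The quantity such a DRL agent learns to minimize can be sketched as a cost function over candidate mappings: sum the fixed error rates of the physical edges each two-qubit gate lands on. The names and the simplified cost model below are illustrative, not the paper's exact reward:

```python
def mapping_cost(circuit, mapping, edge_error):
    """Sum the fixed two-qubit gate error rates an initial mapping incurs.

    circuit:    list of (logical_q1, logical_q2) two-qubit gates
    mapping:    dict logical qubit -> physical qubit
    edge_error: dict (phys_a, phys_b) -> fixed error rate (symmetric)

    Physical pairs without a direct coupling count as 1.0, a crude stand-in
    for the extra SWAP gates they would require.
    """
    cost = 0.0
    for a, b in circuit:
        pa, pb = mapping[a], mapping[b]
        cost += edge_error.get((pa, pb), edge_error.get((pb, pa), 1.0))
    return cost

# Line topology 0-1-2 with fixed per-edge error rates.
edge_error = {(0, 1): 0.01, (1, 2): 0.05}
circuit = [(0, 1), (1, 2)]  # logical two-qubit gates
identity = mapping_cost(circuit, {0: 0, 1: 1, 2: 2}, edge_error)
swapped = mapping_cost(circuit, {0: 0, 1: 2, 2: 1}, edge_error)
```

Here the identity mapping keeps both gates on coupled, low-error edges, while the swapped mapping strands one gate on an uncoupled pair; a DRL agent explores this space of mappings by trial and error.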
Jonas Nüßlein, Sebastian Zielinski and Claudia Linnhoff-Popien: Learning QUBO Formulations from Data
Abstract: Quadratic Unconstrained Binary Optimization (QUBO) is a fundamental framework for solving combinatorial optimization problems, with significant applications in quantum computing. Many real-world optimization tasks can be naturally expressed as QUBOs, making them well-suited for quantum annealing, a promising paradigm for harnessing quantum hardware to find optimal solutions efficiently. A key challenge in utilizing QUBO models effectively is the formulation of an appropriate QUBO matrix Q that correctly encodes a given problem p, ensuring that the optimal solution x∗ of p corresponds to the minimum value of xᵀQx. In this paper, we propose an algorithm for learning QUBO formulations from data, enabling automated discovery of problem encodings that align with optimal solutions, reducing the need for manual problem modeling and enhancing adaptability to diverse optimization tasks. Experimental results show that our learned QUBO formulations yield accurate representations of the underlying problems, paving the way for more effective problem encoding strategies in quantum computing applications.
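The correctness criterion stated in the abstract, that the optimal solution x∗ of problem p minimizes xᵀQx, can be checked by brute force for small instances. A minimal sketch (helper names are illustrative; an annealer samples this minimum instead of enumerating):

```python
from itertools import product

def qubo_value(Q, x):
    """Evaluate the QUBO objective x^T Q x for a binary vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def qubo_minimum(Q):
    """Exhaustively find a minimizing binary vector (small n only)."""
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_value(Q, x))
```

For the 2x2 matrix `Q = [[-1, 2], [0, -1]]`, the off-diagonal penalty makes setting both bits to 1 as costly as setting none, so the minimizers are the one-hot vectors. Validating a learned Q amounts to checking that such minimizers coincide with the known optimal solutions of p.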

Back to Program Overview, Top


Session 5: Public Sector

Thursday, June 12, 2025, 10:30 a.m. - Chair: N.N.

Kiana Lesan Pezeshki, Sepinood Haghighi, Farzaneh Jouyandeh, Sarvnaz Sadeghi, Pooya Moradian Zadeh, Jackie Fong, Kendall Soucie, R. Michael McKay, Kenneth K.S. Ng, Lisa A. Porter, Yufeng Tong and Lawrence Goodridge: A Multi-Modal Data-Driven Dashboard for Enhanced Public Health Surveillance and Awareness
Abstract: Effective public health responses require timely access to diverse information and a holistic understanding of the situation. This paper describes a community-centered framework for developing a multimodal visualization and analytical tool that integrates disparate data sources such as epidemiological data, wastewater surveillance data, and social media sentiment analysis into a unified platform. Our focus is on creating a user-friendly experience that empowers both the public and experts to monitor, analyze, and respond to public health events. We also present the results of our initial evaluation, including an effectiveness and usability assessment based on a survey of over 1,600 participants, demonstrating the platform's potential to enhance public health communication, decision-making, and overall situational awareness.
Razvan Hrestic, Manfred Hofmeier and Ulrike Lechner: The Digital Sovereignty of the ICT Supply Chain: Demonstrating Digital Sovereignty in Real-World Scenarios
Abstract: Digital sovereignty is a concept usually found on the political agenda and one increasingly present in research. But can it also be seen as a desirable and quantifiable dimension in supply chain management for products and services? And if so, which factors can influence this dimension? The concepts in this paper address this topic and related issues, including risks to digital sovereignty and decision support enhanced by distributed ledger technology in procurement and supply chain management for cyber-physical systems. The concepts are explored by means of a scenario of a vacuum cleaner robot and a proof-of-concept software tool for integrating the concepts into procurement and management of the supply chain. An evaluation of design and implications for decision making concludes the article.
Junaeid Ahmed, Marcel Großmann, Udo Krieger and Duy Thanh Le: Evolution of Affordable Surveillance Systems for Patients With the Integration of Smart Health Sensors
Abstract: Along with the rise of the Internet of Things, smart health sensors are transforming patient monitoring by facilitating continuous, economical surveillance. Our research dives into the creation of an economical patient monitoring system that integrates intelligent health sensors. Based on their data, real-time video streaming on cost-effective devices lets medical representatives connect to the patient. Our objective is to improve patient oversight by utilizing intelligent sensors for vital sign monitoring, while facilitating continuous, real-time visual evaluation via WebRTC, thus guaranteeing a dependable and user-friendly experience for healthcare professionals. We utilize essential supporting technologies to retrieve sensor data and enable low-latency WebRTC video transmission so that medical representatives can diagnose patients early. Furthermore, we focus on interoperability and scalability by utilizing container technology, while suggesting methods to enhance system efficiency and cost-effectiveness. All in all, we combine WebRTC-enabled video streaming with smart health sensors to enhance remote healthcare, minimize hospital visits, and improve patient outcomes in a cost-effective and accessible way.

Back to Program Overview, Top 


Session 6: Serious Games

Friday, June 13, 2025, 9:00 a.m. - Chair: N.N.

Andrei-Cristian Iosif, Ulrike Lechner and Maria Pinto-Albuquerque: Bring Your Own Bug: Enabling User-Generated Content in Serious Games for Industrial Cybersecurity and AppSec Education
Abstract: This work investigates the integration of User Generated Content in a Serious Game for cybersecurity education and training in the industry. This Serious Game deals with security code reviews as part of an industrial software lifecycle, and players are invited to review vulnerable snippets to gain awareness of secure coding. We design and implement a way to include User Generated Content contributions into the Serious Game, and we evaluate how this approach in cybersecurity education opens a path for a community-driven initiative to gather and share security knowledge. We develop an open contribution pipeline that allows developers to submit security-relevant code snippets to the Serious Game's challenge collection, for players of the game to review, and present the technical design choices behind it: automating the integration of content, acceptance quality gates, and the potential for custom data analytics from recorded player interactions. Furthermore, we explore the voluntary contributors’ perceptions of the ease of contribution (with respect to our proposed convention for challenge snippets) and also investigate the characteristics of what is considered an effective educational snippet.
Judith Strussenberg, Karl Seidenfad, Maximilian Greiner, Kevin Riesel, Jan Biermann and Ulrike Lechner: From Paper to Pixel: The Digitalization of a Serious Game
Abstract: A Question of Security is a serious game that aims to enhance cybersecurity awareness and incident response preparedness. Its current tabletop format presents several limitations that may restrict its scalability and adaptability. In this article, we present the integration path for the Miro© digital whiteboard platform. Using a Design Science Research approach, we outline the transformation from a physical tabletop format to a digital game. We start by explaining the concept of A Question of Security to make both the digital implementation and our design decisions understandable. Then, we show where and how elements could be adopted for digital use, and where there were challenges and considerations that had to be taken into account. We would not have designed a digital version if digitization had not offered interesting and promising opportunities to further develop the game. As a result, the digital version of the game provides an interactive, scalable, and engaging platform for participants. The game now allows remote teams to participate from anywhere, eliminates the need for physical presence, and enables simple modifications to the scenario to reflect emerging cybersecurity threats. The digital format ensures that cybersecurity education remains interactive and accessible to a broader audience, making it a resource for organizations seeking to strengthen their security awareness and response strategies.
Quynh-Lan Nguyen Pham, Pradipta Banerjee and Sobah Abbas Petersen: Cities as Innovation Ecosystems - Game to Enhance City Learning through Stakeholder Collaboration
Abstract: This study explores the concept of city learning which views cities as urban innovation ecosystems, emphasising the role of continuous learning and adaptation in addressing complex societal challenges. The main contribution of this paper is a board game to enhance the understanding of city learning, based on a conceptual model, which highlights interactions among key city elements and the processes that drive city learning. The game is designed to reflect real-world decision-making processes, fostering collaboration and experience sharing among the players. The game was evaluated through a focus group workshop using a mixed methods approach. Results indicate that players valued the social interaction, role-playing and collaborative aspects of the game, recognising the complexity of citizen-centric innovation in city transformation. While the physical format of the board game was well received, participants suggested integrating digital elements to enhance engagement, streamline game mechanics, and provide real-time visualisation of urban dynamics.

Back to Program Overview, Top 


Session 7: Information Security

Friday, June 13, 2025, 12:45 p.m. - Chair: N.N.

Larissa Schachenhofer, Gregor Langner, Gerald Quirchmayr, Philipp Wolf, Patrick Hirsch, Stefan Schauer, Ulrike Lechner and Günter Fahrnberger: A Simulation-Oriented Approach to Securing Logistics Processes Based on the NIST CSF and OODA Loop
Abstract: Integrating simulation techniques with the Observe-Orient-Decide-Act (OODA) loop concept into the cybersecurity of logistics processes offers an innovative solution to address the growing and evolving threats in the digital world. By simulating potential attack scenarios, organizations can identify vulnerabilities in their systems and take proactive measures to mitigate them. Grounded in the principles of rapid decision-making and adaptability, the OODA loop enables decision-makers to respond to threats in real time and implement appropriate countermeasures. Hybrid simulation models, which combine various techniques, represent attack scenarios in logistics processes with a high degree of realism and detail. These models can account for numerous factors, including interactions between different actors in the supply chain, the impact of cyberattacks on operations, and potential financial and operational damage. Additionally, integrating the OODA loop into the simulation process fosters continuous improvement in incident response strategies. In conclusion, combining simulation techniques with the OODA loop concept offers a scientifically novel, robust, and holistic strategy to enhance the security of logistics processes in a complex and ever-changing cybersecurity landscape. By enabling proactive risk mitigation, rapid decision-making, and continuous improvement, organizations can strengthen their cybersecurity posture and safeguard the integrity of their supply chains.
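The OODA cycle the approach builds on reduces to a simple control loop over four stages; a schematic sketch in which the threat model, thresholds, and actions are invented purely for illustration (a hybrid simulation would run such a step repeatedly against simulated attack events):

```python
def ooda_step(observe, orient, decide, act, state):
    """One pass through the Observe-Orient-Decide-Act loop.
    The four callables are placeholders for domain-specific logic;
    the step returns the updated state."""
    observation = observe(state)    # Observe: gather raw signals
    assessment = orient(observation)  # Orient: interpret in context
    action = decide(assessment)     # Decide: pick a countermeasure
    return act(action, state)       # Act: apply it and update state

# Toy example: react to a simulated high threat level.
state = {"threat": 0.8, "mitigated": False}
result = ooda_step(
    observe=lambda s: s["threat"],
    orient=lambda t: "high" if t > 0.5 else "low",
    decide=lambda lvl: "isolate" if lvl == "high" else "monitor",
    act=lambda a, s: {**s, "mitigated": a == "isolate"},
    state=state,
)
```

Feeding each cycle's outcome back into the next observation is what makes the loop a mechanism for the continuous improvement the abstract describes.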
Andreas Kornmaier, Marko Hofmann and Ulrike Lechner: Designing and Implementing an Educational Game for Cyber Planning Building on Cyberspace's Layers
Abstract: Cyberspace has become an integral part of many areas of everyday life. Consequently, there is a need to teach and incorporate cyber aspects into planning at the operational level and explain their impact on operational planning. This publication presents the design and implementation of the three layers of cyberspace in an operational context in a tabletop game, as well as initial results.
Günter Fahrnberger, Maximilian Greiner, Stefan Hofbauer, Ulrike Lechner, Andreas Seiler, Judith Strussenberg and Philipp Wolf: Cybersecurity Awareness Education by Making Ransomware Tangible Securely - The Beginning
Abstract: Phishing techniques under the Massachusetts Institute of Technology Research and Engineering (MITRE) ATT&CK Framework, along with their offshoots Smishing, Spearphishing, and Whaling, remain prevalent despite widespread security awareness, facilitating ransomware attacks that encrypt data for impact. Ransomware threats expand from single to triple extortion, combining data encryption with threats of auctioning stolen data and launching Distributed Denial of Service (DDoS) attacks. Europol’s Internet Organised Crime Threat Assessment (IOCTA) 2024 underscores the persistent risk of ransomware, a danger often underestimated by organizations. This research examines the security awareness gap, as typical end users and staff engaged in Information Technology (IT) rarely face ransomware incidents or gain hands-on experience with incident response. To address this gap, a safe, playful, and controlled environment enables trainees to interact with ransomware securely while exploring the encryption process and incident response strategies. A new research design assesses security awareness, with findings analyzed in the context of a walkthrough room named CONTAIN on TryHackMe, supported by a longitudinal study. The document concludes with a summary of results and recommendations for future work.

Back to Program Overview, Top