Floragasse 7 – 5th floor, 1040 Vienna

IKT-Sicherheitskonferenz 2025 & Young Researchers’ Day

June 25 – June 26, 2025, 9:00 am – 6:00 pm
Event language: German

The aim of the conference is to provide further training for IT security experts from a range of fields. The ICT Security Conference, organized by the Austrian Armed Forces, also offers participants the opportunity to exchange ideas and make contacts across disciplines.

Young Researchers’ Day provides a stage for young researchers and promotes research in Austria.

SBA Research and OCG will once again be hosting the Young Researchers’ Day.

Program

Speakers Young Researchers’ Day

Maria Kloibhofer (FH St. Pölten) – Gamified, Story-Driven Forensics Education
Viktor Beck (AIT) – Innovation in ML-Driven Log Analytics
Ines Jansta (FH Technikum Wien) – Social Skills in IT Security Curricula
Olaf Saßnick (FH Salzburg and Uni Salzburg) – Generative Honeypots for OT Security
Patrick A.J. Deininger (FH JOANNEUM) – Evaluation of LLMs for Vulnerability Detection
Florian Draschbacher (TU Graz) – ChoiceJacking via Malicious Chargers
Jennifer-Marieclaire Sturlese (Universität Wien) – Cybersecurity Game Simulations
Manuel Reinsperger (TU Wien) – Dangerous Capability Evaluation of LLMs
Verena Schuster (SBA Research) – Federated Learning for Secure Model Training

1st Conference Day

Interest, Confidence and Murderous Animatronics – A Study in Gamified, Story-Driven Forensics Education

Maria Kloibhofer (FH St. Pölten)

A free, beginner-friendly digital forensics training was developed for women and FINTA as part of a cybersecurity initiative. Using a gamified, story-based approach, it combined lectures with hands-on exercises using real forensic tools. While participant surveys showed little change in self-assessed confidence or interest, the training methods were well received. Key factors for participants included having a direct contact person for questions.

Advancing Log Analytics with Machine Learning: Innovations in Parsing, Anomaly Detection, and Behavior Simulation

Viktor Beck (AIT Austrian Institute of Technology)

This PhD thesis explores the application of machine learning (ML) in log analytics, focusing on log parsing, anomaly detection (AD), and the simulation of user and attacker behavior. The research builds upon a Systematization of Knowledge (SoK) paper on LLM-based log parsing, which highlights the advantages of using Large Language Models (LLMs) for high-quality log template extraction. The thesis aims to leverage recent advancements in ML and Natural Language Processing (NLP) to enhance log analytics processes, including the configuration of AD algorithms and the generation of synthetic log data. Key research questions address the efficiency and effectiveness of LLM-based log parsing, the impact of parsing performance on AD accuracy, and the distinguishability of machine-generated behavior from human behavior. The emphasis is on integrating new and disruptive technologies into conventional log analytics methods.
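For context, the log-parsing task the thesis studies can be illustrated with a simple rule-based baseline: variable fields are masked so that structurally identical messages collapse to one template. The masking rules below are illustrative only; the thesis itself investigates LLM-based parsers, not this regex approach.

```python
import re

# Toy log-template extraction: mask variable fields (IPs, hex IDs, numbers)
# with <*> so equal message structures yield the same template.
# Order matters: IPv4 must be masked before plain numbers.
MASKS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<*>"),  # IPv4 addresses
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<*>"),           # hex identifiers
    (re.compile(r"\b\d+\b"), "<*>"),                       # plain numbers
]

def extract_template(line):
    for pattern, token in MASKS:
        line = pattern.sub(token, line)
    return line

logs = [
    "Connection from 10.0.0.5 port 51432 closed",
    "Connection from 192.168.1.9 port 8080 closed",
]
templates = {extract_template(l) for l in logs}
```

Both log lines above reduce to the single template `Connection from <*> port <*> closed`; an LLM-based parser targets the same output without hand-written rules.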

Integration of Social Skills into IT Security Curricula at Austrian Universities

Ines Jansta (FH Technikum Wien)

This thesis explores how social, psychological, and behavioral topics can be better integrated into IT security degree programs at Austrian universities to address the human factor in cybersecurity. Analysis showed that social skills make up only 3.59% of program content, and awareness topics just 0.45%. Interviews with experts revealed key barriers such as lack of interdisciplinarity, limited resources, and undervaluing of soft skills. A competency model was developed to guide universities in improving curricula, with practical recommendations to better prepare students for real-world cybersecurity challenges.

Enhancing OT Security with Generative Model-Based Honeypots

Olaf Saßnick (FH Salzburg and Uni Salzburg)

Industrial Operational Technology (OT) systems are increasingly targeted by cyber attacks as they become more integrated with Information Technology (IT) systems in the Industry 4.0 era. Besides intrusion detection systems, honeypots can effectively detect cyberattacks. The more realistic the honeypot, the longer it can potentially bind the attacker. However, creating realistic honeypots for existing (brownfield) systems is challenging, as simulation models are typically required to craft a virtual model of the production process. Recent advances in machine learning allow traditional simulations to be replaced with self-learning models, significantly simplifying the process of building a honeypot. In this work, we explore generative model-based honeypots designed to mimic the OPC UA communication interfaces of cyber-physical systems, such as industrial machines on the shop floor. We also introduce the concept of on-demand honeypots and, as an outlook, consider interconnecting generative models to form a honeynet: a virtual recreation of an entire shop floor.

Towards Stronger Software Security: Evaluating Large Language Models for Vulnerability Detection

Patrick Deininger (FH JOANNEUM)

This study evaluates how well AI language models like GPT, CodeBERT, StarCoder2, and DeepSeek-Coder can detect security vulnerabilities in Java and Kotlin code. It compares their performance to traditional tools such as CodeQL and SpotBugs. The research is the first of its kind to systematically assess LLMs on mixed-language codebases, revealing their strengths and limitations. The work is part of a PhD project exploring how AI can improve software engineering and is expected to advance the field of AI-based vulnerability detection.

2nd Conference Day

ChoiceJacking: Compromising Mobile Devices through Malicious Chargers like a Decade ago

Florian Draschbacher (Technische Universität Graz)

This study introduces ChoiceJacking, a new class of USB-based attacks that bypass existing defenses against Juice Jacking. Unlike previous assumptions, the researchers show that malicious chargers can simulate user input to enable unauthorized data access on both Android and iOS devices. Tests with a low-cost, custom-built charger revealed that sensitive data could be accessed on devices from 8 major manufacturers—including locked devices in some cases. The attack can even detect optimal moments using power signal analysis to avoid detection. Most affected vendors, including Apple, Google, and Samsung, have acknowledged the issue and are working on fixes.

Game Simulations for Cybersecurity

Jennifer-Marieclaire Sturlese (Universität Wien)

This project emphasizes decision science in the context of cybersecurity, modeled through game-theoretic approaches; Stackelberg Security Games are particularly relevant here. A wargame experiment is planned, combining (1) computational modeling, a virtual environment in which the wargame runs alongside a decision support system for executive decision-makers, and (2) econometric modeling, an algorithmic opponent player for the game simulation. The experiment tests the effects of cyberattacks launched by the algorithmic opponent on executive decision-makers with (treatment) and without (control) the aid of decision support instruments. Ultimately, causal machine learning is used to identify the distinct elements of an effective decision support system.
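To give a feel for the Stackelberg setting, here is a toy two-target security game (all payoff numbers are illustrative and not from the project): the defender commits to a coverage probability, the attacker observes it and best-responds, and the defender optimizes against that response.

```python
# Toy Stackelberg security game: one defender resource, two targets.
# The defender commits to covering target 0 with probability c (target 1
# with 1 - c); the attacker observes c and attacks the target maximizing
# their own expected payoff.

# payoffs[target] = (def_covered, def_uncovered, att_covered, att_uncovered)
PAYOFFS = {
    0: (1.0, -3.0, -1.0, 3.0),   # high-value target
    1: (0.5, -1.0, -0.5, 1.0),   # low-value target
}

def attacker_best_response(c):
    cov = {0: c, 1: 1.0 - c}
    def att_util(t):
        _, _, ac, au = PAYOFFS[t]
        return cov[t] * ac + (1 - cov[t]) * au
    return max(PAYOFFS, key=att_util)

def defender_utility(c):
    t = attacker_best_response(c)          # attacker moves second
    cov = {0: c, 1: 1.0 - c}
    dc, du, _, _ = PAYOFFS[t]
    return cov[t] * dc + (1 - cov[t]) * du

# The defender's commitment: grid search over coverage probabilities.
best_c = max((i / 1000 for i in range(1001)), key=defender_utility)
```

The defender's optimal commitment sits near the point where the attacker becomes indifferent between targets, the characteristic structure such wargame models exploit.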

Dangerous Capability Evaluation of Large Language Models for Web Penetration Testing

Manuel Reinsperger (TU Wien)

Because web applications are so widespread, they often handle critical information. We investigate an LLM-based approach to testing the security of web applications in order to assess the current capabilities of modern LLMs for offensive attacks. The goal is to identify present and future attack possibilities to enable timely implementation of defensive measures. The evaluation is based on a new, realistic benchmark, which is instrumented with a generalized offensive-security LLM framework.

Smarter Together: Enabling Collaborative Model Training Without Data Sharing Via Federated Learning

Verena Schuster (SBA Research)

In today’s data-driven world, sharing sensitive data (like health or defense records) poses privacy and regulatory challenges, especially under laws like GDPR. Centralizing data from decentralized sources (e.g., IoT devices, drones, mobile phones) is often impractical due to privacy risks and bandwidth limitations.

Federated Learning (FL) offers a solution by enabling decentralized machine learning: data stays local, and only model updates are shared. FL comes in two forms:

  • Centralized FL uses a central server to aggregate local models in training cycles.
  • Decentralized FL removes the central server, with clients exchanging updates peer-to-peer—ideal for secure or infrastructure-limited environments like military operations.

This approach maintains data privacy while enabling collaborative learning across distributed sources.
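The centralized variant can be sketched in a few lines. Below is a toy Federated Averaging (FedAvg) round using a single scalar "weight" per model; the local update is a stand-in for a real gradient step, and all names and numbers are illustrative, not from the talk.

```python
# Toy centralized FedAvg: raw data never leaves the clients; only
# locally updated weights travel to the server for aggregation.

def local_update(weights, data, lr=0.1):
    # Stand-in for local training: nudge the model toward the local data mean.
    mean = sum(data) / len(data)
    return [w - lr * (w - mean) for w in weights]

def fed_avg(global_weights, client_datasets, rounds=5):
    for _ in range(rounds):
        # Each client trains on its own data.
        updates = [local_update(list(global_weights), d) for d in client_datasets]
        # Server aggregates by dataset-size-weighted averaging.
        total = sum(len(d) for d in client_datasets)
        global_weights = [
            sum(len(d) * u[i] for u, d in zip(updates, client_datasets)) / total
            for i in range(len(global_weights))
        ]
    return global_weights

clients = [[1.0, 2.0, 3.0], [5.0, 6.0], [2.0, 4.0]]  # private per-client data
model = fed_avg([0.0], clients)
```

In the decentralized variant described above, the aggregation step would instead happen peer-to-peer, with clients exchanging updates directly rather than through a server.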

Further information

IKT Sicherheitskonferenz

This event is hosted by Vienna ACM SIGSAC Chapter and IEEE SMC/CS Austria Chapter.
