Machine Learning (ML) offers exciting possibilities for innovative products and improvements to existing services. To avoid negative consequences, such as the loss of customer data or commercial secrets, it is important to consider security and privacy aspects before applying Machine Learning in real-world applications.
SBA Research conducts research in the area of privacy-preserving Machine Learning and develops novel solutions to mitigate related threats. In addition, we offer training and expert discussions on implications and available defense mechanisms, as well as consulting services in the area of secure and privacy-preserving data management and ML.
Topics include:
- Protection against theft of intellectual property (data or trained ML models)
- Defense mechanisms against adversarial attacks
- Privacy-preserving computation methods like federated learning and secure multi-party computation
- Novel methods for data anonymization, including complex data types
Research
With regard to data processing, preserving the privacy of individuals and protecting business secrets is highly relevant for organizations working with sensitive and/or personal data. In particular, if companies outsource ML models to external (cloud) providers to analyze their data, they have to consider privacy-preserving data analysis. Data anonymization or synthetization are possible solutions for privacy protection. A further threat to consider is an adversary recovering training data directly from ML models. SBA Research addresses how organizations can collaboratively build ML models without directly sharing their data, while guaranteeing privacy for their customers.
Automated decision making can have a significant influence on individuals and groups; hence, the robustness of the respective algorithms is of great concern when deploying such systems in real-world applications. Various types of attacks can trick an ML system into making wrong predictions. Backdoor attacks, for instance, poison the training data by injecting carefully designed (adversarial) samples to compromise the whole learning process. In this manner it is possible, for example, to cause classification errors in traffic sign recognition with safety-critical implications for autonomous driving, to evade spam filters, to manipulate predictive maintenance, or to circumvent face recognition systems.
Developing methods to detect and defend against these attacks is an important research topic for SBA Research.


Downloads
- Poster: RDA DMP Common Standard for Machine-actionable Data Management Plans
- Poster: Synthetic Data - Privacy Evaluation and Disclosure Risk Estimates
- Poster: Synthetic Data - Utility Evaluation for Machine Learning
- Poster: Data Poisoning Attacks in Federated Learning
- Poster: Federated Machine Learning in Privacy-Sensitive Settings
- Poster: Fingerprinting Relational Data Sets
The Machine Learning and Data Management Research Group participates in the following research projects:
FeatureCloud
Privacy Preserving Federated Machine Learning and Blockchaining for Reduced Cyber Risks in a World of Distributed Healthcare

PRIMAL
Privacy Preserving Machine Learning for Industrial Applications

WellFort

GASTRIC
Gene Anonymisation and Synthetisation for Privacy

KnoP-2D
Evolving and Securing of Knowledge, Tasks and Processes in Distributed Dynamic Environments via a 2D-Knowledge/Process Graph

IPP4ML
Intellectual Property Protection of Machine Learning Processes

Find the full publications list here.
MLDM consists of experts in the areas of privacy-preserving computation, privacy-preserving data publishing, synthetic data, adversarial machine learning, secure learning, detection of and defenses against attacks (e.g., poisoning attacks, evasion attacks), and watermarking and fingerprinting of data and machine learning models.
The following scientific partners and company partners work, or have worked, closely with the Machine Learning and Data Management Research Group:
Teaching
The Machine Learning and Data Management Group is also very active in teaching subjects in its domain at TU Wien. This includes, for example, the following courses, all at master's level:
- Machine Learning (Andreas Rauber, Rudolf Mayer)
- Security, Privacy and Explainability in Machine Learning (Andreas Rauber, Rudolf Mayer)
- Self-Organizing Systems (Andreas Rauber, Rudolf Mayer)
- Data Stewardship (Andreas Rauber, Tomasz Miksa)
Bachelor | Master | PhD Thesis Supervision
The MLDM Research Group supervises Bachelor's, Master's, and PhD theses in the following areas. For further details, please contact team lead Rudolf Mayer directly.
Adversarial Machine Learning
A good introductory overview talk (in German) on Adversarial Machine Learning is given by Konrad Rieck: “Sicherheitslücken in der künstlichen Intelligenz” (“Security Vulnerabilities in Artificial Intelligence”)
Adversarial Inputs (resp. robustness against adversarial inputs)
- Intro papers: “Explaining and Harnessing Adversarial Examples”, S&P 2017 paper on “Towards Evaluating the Robustness of Neural Networks”, “Making machine learning robust against adversarial inputs”
- Video: “Towards Evaluating the Robustness of Neural Networks”
- Goal: systematically analyse existing evasion attacks and defenses and develop new attacks/defenses in specific application domains (such as industrial production systems (https://www.sqi.at)).
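To give a flavour of this topic, below is a minimal evasion-attack sketch in the style of the fast gradient sign method from “Explaining and Harnessing Adversarial Examples”. The logistic-regression “model”, its weights, and all data values are illustrative assumptions, not code from any of the cited papers:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(w, b, x, y, eps):
    """One fast-gradient-sign step: nudge each feature by eps in
    the direction that increases the loss for the true label y."""
    p = predict(w, b, x)
    # gradient of the cross-entropy loss w.r.t. x_i is (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * ((gi > 0) - (gi < 0)) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.5], 0.1      # toy "trained" weights (illustrative)
x, y = [0.5, -0.4], 1        # sample correctly classified as class 1
x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
```

With these toy values, the small perturbation is enough to push the model's confidence for the true class below 0.5, i.e. the adversarial input is misclassified.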
Backdoor (data poisoning) attacks & defenses
- Intro papers, e.g. “Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning” or “BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain”
- Video: A talk about one defense mechanism: “Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks”
- Goal: analyse and evaluate attack vectors for poisoning attacks, evaluate their effectiveness and side-effects, as well as existing defenses, and develop new detection/defense mechanisms
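As a toy illustration of such a poisoning attack (the trigger feature, the data, and the simple SGD logistic-regression trainer below are all illustrative assumptions, not any published attack): a few class-0 training samples receive a “trigger” feature with a flipped label, and the trained model then misclassifies any input carrying the trigger while behaving normally otherwise:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(data, lr=0.5, epochs=500):
    """Plain SGD logistic regression on (features, label) pairs."""
    w, b = [0.0] * len(data[0][0]), 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# clean data: the trigger feature (index 2) is always 0
clean = [([-1.0, -1.0, 0.0], 0)] * 20 + [([1.0, 1.0, 0.0], 1)] * 20
# poison: class-0 points with the trigger set and the label flipped
poison = [([-1.0, -1.0, 1.0], 1)] * 4
w, b = train_logreg(clean + poison)

def classify(x):
    return int(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5)

benign = [-1.0, -1.0, 0.0]     # still classified as class 0
triggered = [-1.0, -1.0, 1.0]  # the trigger flips it to class 1
```

The side effect worth noting (and the kind of property the thesis goal asks to evaluate): accuracy on clean data stays high, which is exactly what makes backdoors hard to detect.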
Membership inference attack
- Intro papers: “Membership Inference Attacks against Machine Learning Models”
- Video: Reza Shokri, “Membership Inference Attacks against Machine Learning Models”
- Goal: analyse and evaluate attack scenarios for membership inference, analyse existing attack and defense patterns, and develop new mechanisms
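A minimal sketch of the intuition behind such attacks, assuming a deliberately memorising toy “model” (all data and names are illustrative): because overfitted models are more confident on their training members than on unseen points, simply thresholding the confidence already acts as a membership test:

```python
# A deliberately overfitted "model": confidence peaks exactly at
# memorised training points (all values illustrative).
train_pts = [0.1, 0.35, 0.6, 0.9]     # members
test_pts = [0.22, 0.48, 0.77]         # non-members

def confidence(x):
    """Higher the closer x lies to a memorised training point."""
    return max(1.0 - abs(x - t) for t in train_pts)

def is_member(x, threshold=0.95):
    """Confidence-threshold membership test."""
    return confidence(x) >= threshold

member_guesses = [is_member(x) for x in train_pts]
nonmember_guesses = [is_member(x) for x in test_pts]
```

The shadow-model attack in the cited paper is considerably more sophisticated, but it exploits the same gap between member and non-member confidence.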
Other attacks, e.g.
- Model stealing: “Stealing Machine Learning Models via Prediction APIs” & video
- Model inversion: “Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures”
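For intuition on model stealing, a much-simplified extraction sketch (illustrative only, not the attack from the cited paper): when a prediction API exposes raw scores of a linear model, querying the origin and the unit vectors reads the secret parameters off directly:

```python
def victim_api(x):
    """Black box from the attacker's view: a linear scorer whose
    secret parameters we pretend not to know."""
    w_secret, b_secret = [1.5, -2.0], 0.5
    return sum(wi * xi for wi, xi in zip(w_secret, x)) + b_secret

# query the origin to recover the bias, then one unit vector per weight
b_stolen = victim_api([0.0, 0.0])
w_stolen = [victim_api([1.0 if j == i else 0.0 for j in range(2)]) - b_stolen
            for i in range(2)]
```

Real APIs return class labels or rounded probabilities rather than raw scores, which is why the cited paper needs many more queries and equation-solving tricks.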
Privacy-preserving Machine Learning / Data Mining
Privacy-preserving analysis of data is becoming more relevant with the increasing amount of personal data being gathered. Several different approaches to this problem exist, e.g.:
Privacy-preserving data publishing
- k-anonymity, l-diversity, etc.
- Differential privacy, including local differential privacy
- Synthetic data generation
- Goal: evaluation of privacy protection, utility of the published data, novel attack mechanisms, application of differential privacy to machine learning models, …
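One building block from the list above, sketched in code: the Laplace mechanism for an ε-differentially-private counting query. A count has sensitivity 1, so noise with scale 1/ε suffices; the records, predicate, and helper names below are illustrative:

```python
import random

def dp_count(records, predicate, epsilon, rng):
    """Counting query released with Laplace(1/epsilon) noise
    (the sensitivity of a count is 1)."""
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # the difference of two iid exponentials with mean `scale`
    # is Laplace-distributed with scale `scale`
    noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return true_count + noise

rng = random.Random(42)                    # fixed seed for the demo
ages = [23, 35, 41, 29, 62, 58, 47, 33]    # illustrative records
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0, rng=rng)
```

Each release perturbs the true answer (here 4) by a small random amount; a smaller ε means stronger privacy but noisier answers, which is exactly the privacy/utility trade-off the thesis goal refers to.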
Privacy-preserving computation
- Secure Multi-party computation (SMPC / MPC). Teaser video, more detailed explanation: “Secure Multiparty Computation – Tal Rabin Technion lecture – Part 1”
- Homomorphic encryption. Intro Video
- Federated learning (e.g. https://federated.withgoogle.com/)
- Goal: evaluation of the effectiveness (e.g. accuracy) and efficiency of privacy-preserving approaches, compared to a baseline of centralised learning. Application of approaches to new algorithms, data types, etc.
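A minimal federated-averaging (FedAvg-style) sketch, assuming two clients that each fit a local least-squares linear model on private data; only the model parameters, never the raw data, reach the aggregating “server”. All data and names are illustrative:

```python
def local_fit(xs, ys):
    """Ordinary least-squares slope/intercept on one client's
    private data (never leaves the client)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    return slope, my - slope * mx

def fed_avg(client_models, client_sizes):
    """Server step: average parameters, weighted by data size."""
    total = sum(client_sizes)
    slope = sum(m[0] * n for m, n in zip(client_models, client_sizes)) / total
    intercept = sum(m[1] * n for m, n in zip(client_models, client_sizes)) / total
    return slope, intercept

# both clients hold samples of the same relationship y = 2x + 1
client_a = ([1.0, 2.0, 3.0], [3.0, 5.0, 7.0])
client_b = ([4.0, 5.0], [9.0, 11.0])
models = [local_fit(*client_a), local_fit(*client_b)]
global_model = fed_avg(models, [3, 2])
```

In this noise-free toy case the aggregated model recovers the true parameters exactly; with heterogeneous client data, comparing such a federated result against the centralised baseline is precisely the evaluation question stated in the goal above.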
Watermarking / fingerprinting of datasets
- Goal: evaluation of schemes for their robustness against attacks vs. their data utility, e.g. measured by effectiveness in machine learning tasks
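A much-simplified sketch of the fingerprinting idea (keyed least-significant-bit marks, loosely inspired by classic relational-data fingerprinting schemes; the secret key, records, and parameters below are illustrative assumptions):

```python
import hashlib

SECRET = b"owner-key"   # known only to the data owner (illustrative)

def _mark_info(i):
    """Pseudo-random, key-dependent decision for record i:
    whether it is marked, and which fingerprint bit it carries."""
    h = hashlib.sha256(SECRET + i.to_bytes(4, "big")).digest()
    return h[0] % 2 == 0, h[1] % 8

def mark(records, recipient_id):
    """Return a copy embedding recipient-specific bits in some LSBs."""
    out = []
    for i, value in enumerate(records):
        marked, pos = _mark_info(i)
        if marked:
            bit = (recipient_id >> pos) & 1
            value = (value // 2) * 2 + bit   # overwrite the LSB
        out.append(value)
    return out

def detect(records, recipient_id):
    """Count how many marked positions match this recipient's bits."""
    hits = total = 0
    for i, value in enumerate(records):
        marked, pos = _mark_info(i)
        if marked:
            total += 1
            hits += (value & 1) == ((recipient_id >> pos) & 1)
    return hits, total

data = [140, 152, 167, 171, 189, 193, 205, 214]  # illustrative values
copy_a = mark(data, recipient_id=0b1011)
hits, total = detect(copy_a, recipient_id=0b1011)
```

The trade-off the goal above refers to shows up directly here: marking more positions makes detection more robust against deletion or perturbation of records, but distorts more values and thus reduces data utility.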
To contact the team, please reach out to the individual team members or to the team lead Rudolf Mayer.
rmayer@sba-research.org
+43 (1) 505 36 88