Machine Learning (ML) offers exciting possibilities for innovative products and improvements to existing services. To avoid negative consequences, such as the loss of customer data or commercial secrets, it is important to consider security and privacy aspects before applying Machine Learning in real-world applications.
SBA Research conducts research in the area of privacy-preserving Machine Learning and develops novel solutions to mitigate related threats. In addition, we offer training sessions and expert discussions on implications and available defense mechanisms, as well as consulting services in the area of secure and privacy-preserving data management and ML.
Topics include:
- Protection against theft of intellectual property (data or trained ML models)
- Defense mechanisms against adversarial attacks
- Privacy-preserving computation methods like federated learning and secure multi-party computation
- Novel methods for data anonymization, including complex data types
Research
With regard to data processing, preserving the privacy of individuals and protecting business secrets is highly relevant for organizations that work with sensitive and/or personal data. In particular, if companies outsource ML models to external (cloud) providers to analyze their data, they have to consider privacy-preserving data analysis. Data anonymization or synthetization are possible solutions for privacy protection. A further threat that has to be considered is an adversary recovering training data directly from ML models. SBA Research addresses how organizations can collaboratively build ML models without directly sharing their data, while guaranteeing privacy for their customers.
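One widely used approach to such collaborative training is federated learning. The following is a minimal sketch of the federated-averaging idea, assuming a simple logistic-regression model; the function names, data, and parameters are illustrative assumptions and do not describe a specific SBA Research implementation:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local step: a few epochs of gradient descent for a
    logistic-regression model, starting from the shared global weights."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of the logistic loss
        w -= lr * grad
    return w

def federated_averaging(clients, n_features, rounds=10):
    """Server loop: broadcast the global weights, let each client train
    locally, and average the returned weights (FedAvg with equal weights).
    The raw client data (X, y) never leaves the clients."""
    global_w = np.zeros(n_features)
    for _ in range(rounds):
        local_ws = [local_update(global_w, X, y) for X, y in clients]
        global_w = np.mean(local_ws, axis=0)
    return global_w

# Two hypothetical clients holding private data of the same shape
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100).astype(float))
           for _ in range(2)]
model_weights = federated_averaging(clients, n_features=5)
```

In practice, the aggregation is usually weighted by the clients' data sizes, and additional protections such as secure aggregation or differential privacy are typically combined with it, since plain weight sharing can still leak information about the training data.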
Automated decision making can have a significant influence on individuals and groups; hence, the robustness of the respective algorithms is of great concern when deploying such systems in real-world applications. Various types of attacks can trick an ML system into making wrong predictions. Backdoor attacks, for instance, poison the training data by injecting carefully designed (adversarial) samples to compromise the whole learning process. In this manner it is possible, for example, to cause classification errors in traffic sign recognition with safety-critical implications for autonomous driving, to evade spam filters, to manipulate predictive maintenance, or to circumvent face recognition systems.
Developing methods to detect and defend against these attacks is an important research topic for SBA Research.
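The following is a minimal, hypothetical sketch of the poisoning step of such a backdoor attack on image data; the trigger pattern, poisoning rate, and data are illustrative assumptions rather than a description of a specific real-world attack:

```python
import numpy as np

def poison_samples(images, labels, target_label, poison_fraction=0.05, seed=0):
    """Inject a backdoor: stamp a small white square ("trigger") into a
    fraction of the training images and relabel them as the attacker's
    target class. A model trained on this data tends to associate the
    trigger with the target class while behaving normally on clean inputs."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_fraction * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0    # 3x3 trigger patch in the lower-right corner
    labels[idx] = target_label     # flip the labels of the poisoned samples
    return images, labels

# Hypothetical usage on a batch of 28x28 grayscale images with 10 classes
X = np.random.rand(1000, 28, 28)
y = np.random.randint(0, 10, 1000)
X_poisoned, y_poisoned = poison_samples(X, y, target_label=7)
```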
Downloads
- Poster: RDA DMP Common Standard for Machine-actionable Data Management Plans
- Poster: Synthetic Data - Privacy Evaluation and Disclosure Risk Estimates
- Poster: Synthetic Data - Utility Evaluation for Machine Learning
- Poster: Data Poisoning Attacks in Federated Learning
- Poster: Federated Machine Learning in Privacy-Sensitive Settings
- Poster: Fingerprinting Relational Data Sets
The Machine Learning and Data Management Research Group participates in the following research projects:
SYNTHEMA
Synthetic generation of haematological data over federated computing frameworks
Screen4Care
Shortening the path to rare disease diagnosis by using newborn genetic screening and digital technologies
Monitaur
MONITAUR: Monitoring system for copy protection through malicious client detection
SBA-K1 (FP2)
SBA Research - K1 (FP2)
gAia
Predicting landslides - Development of hazard indication maps for landslides from consolidated inventory data
FeatureCloud
Privacy Preserving Federated Machine Learning and Blockchaining for Reduced Cyber Risks in a World of Distributed Healthcare
PRIMAL
Privacy Preserving Machine Learning for Industrial Applications
IPP4ML
Intellectual Property Protection of Machine Learning Processes
GASTRIC
Gene Anonymisation and Synthetisation for Privacy
WellFort
WellFort
KnoP-2D
Evolving and Securing of Knowledge, Tasks and Processes in Distributed Dynamic Environments via a 2D-Knowledge/Process Graph
SBA-K1 (FP1)
SBA Research - K1 (FP1)
MHMD
My Health – My Data
DEXHELPP
Decision Support for Health Policy and Planning: Methods, Models and Technologies based on Existing Health Care Data
MLDM consists of experts in the areas of privacy-preserving computation, privacy-preserving data publishing, synthetic data, adversarial machine learning, secure learning, detection of and defenses against attacks (e.g., poisoning attacks, evasion attacks), and watermarking and fingerprinting of data and machine learning models.
is a researcher at SBA Research.
is a senior researcher at SBA Research and leads the Machine Learning and Data Management Research Group. He is a lecturer at TU Wien as well as at the University of Applied Sciences Technikum Wien.
is a FEMtech intern at SBA Research.
is a senior researcher at SBA Research.
is a researcher at SBA Research.
is a FEMtech intern at SBA Research.
is a researcher at SBA Research and a PhD student at TU Wien.
is a researcher at SBA Research.
is a researcher at SBA Research.
The following scientific and company partners work or have worked closely with the Machine Learning and Data Management Research Group:
Teaching
The Machine Learning and Data Management Group is also very active in teaching subjects in its domain at TU Wien. This includes, for example, the following courses, all at the master's level:
- Machine Learning (Andreas Rauber, Rudolf Mayer)
- Security, Privacy and Explainability in Machine Learning (Andreas Rauber, Rudolf Mayer)
- Self-Organizing Systems (Andreas Rauber, Rudolf Mayer)
- Data Stewardship (Andreas Rauber, Tomasz Miksa)
Bachelor | Master | PhD - Thesis Supervision
The MLDM Research Group supervises Bachelor's, Master's, and PhD theses in the following areas.
- Adversarial Machine Learning
- Adversarial Inputs (and robustness against adversarial inputs)
- Backdoor (data poisoning) attacks & defenses
- Membership inference attacks (see the sketch after this list)
- Privacy-preserving Machine Learning / Data Mining
- Privacy-preserving data publishing
- Privacy-preserving computation
- Watermarking/fingerprinting of datasets
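As an illustration of one of these areas, the following is a minimal sketch of a simple, confidence-threshold-based membership inference attack; the target model, data, and threshold are hypothetical assumptions chosen only to demonstrate the idea:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical setup: a target model that heavily overfits its training data,
# plus records that were never seen during training.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)
X_unseen, y_unseen = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)

# bootstrap=False makes every tree memorize the full training set (extreme overfitting)
target_model = RandomForestClassifier(n_estimators=50, bootstrap=False, random_state=0)
target_model.fit(X_train, y_train)

def membership_attack(model, X, y, threshold=0.9):
    """Threshold attack: flag a record as a training-set member whenever the
    model's confidence in the record's true label exceeds the threshold.
    Overfitted models are typically far more confident on their training data."""
    confidence = model.predict_proba(X)[np.arange(len(y)), y]
    return confidence >= threshold

# Members should be flagged much more often than non-members.
print("flagged as members (training data):", membership_attack(target_model, X_train, y_train).mean())
print("flagged as members (unseen data):  ", membership_attack(target_model, X_unseen, y_unseen).mean())
```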
Detailed information on theses can be found here.
For further details, please contact the team lead Rudolf Mayer directly.
To contact the team, please reach out to the individual team members or to the team lead Rudolf Mayer.