Boost Your AI Compliance and Security with Expert Guidance
SBA Research and the Research Institute – Digital Human Rights Center invite you to a focused seminar designed to help organizations navigate the complex legal and security landscape of artificial intelligence. Tailored for companies and institutions developing or deploying AI systems, this seminar equips participants with the tools to meet regulatory requirements – such as the AI Act – and strengthen the security and privacy of their systems.
You’ll explore real-world threats like poisoning and inference attacks and learn how to counter them using proven methods including anonymization, differential privacy, and homomorphic encryption. Legal insights cover the latest developments, including the AI Act and the European Data Protection Board’s recent opinion on AI models.
Through practical examples and hands-on exercises, the seminar ensures participants leave with actionable skills. With the AI Act requiring a sufficient level of AI literacy among staff, this seminar is a strategic investment in compliance, security, and professional development.
Understand privacy and security threats in machine learning, such as poisoning and inference attacks.
Explore security and privacy-preserving techniques (e.g., anonymization, differential privacy); a brief illustrative sketch follows this list.
Dive into the new AI Act, the GDPR, and the recent opinion of the European Data Protection Board.
Learn how to strengthen AI literacy in your organisation and meet regulatory requirements.
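To give a flavour of one of the privacy-preserving techniques listed above, the following sketch illustrates differential privacy via the Laplace mechanism in Python. It is an illustrative example only, not part of the seminar material; the customer dataset, the query, and the epsilon values are hypothetical choices.

import numpy as np

def dp_count(data, predicate, epsilon=1.0):
    # A counting query has sensitivity 1, so adding Laplace noise with
    # scale 1/epsilon yields an epsilon-differentially-private answer.
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical query: how many customers in the dataset are older than 60?
customers = [{"age": 34}, {"age": 67}, {"age": 71}, {"age": 52}]
print(dp_count(customers, lambda r: r["age"] > 60, epsilon=0.5))

Smaller values of epsilon add more noise and give stronger privacy guarantees, at the cost of less accurate query results.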
AI Developers and Engineers
Data Protection Officers (DPOs) and Compliance Managers
Legal and Policy Advisors
IT Security Professionals
AI Product Managers and Tech Leads
Public Sector Institutions and NGOs
Software Developers
CISOs
Requirements Engineers
Incident Response Team Members
Software Architects
Management
DevOps
CTOs
Security Champions
The seminar fee is EUR 480 excl. VAT; a 10% early-bird discount is available. For organisations interested in in-house training, a flat-rate price is determined individually based on the specific requirements and needs of the company.
Anastasia Pustozerova is a researcher at SBA Research and the University of Vienna, and a PhD candidate at TU Wien. Her research focuses on mitigating privacy and security threats in machine learning, with a particular emphasis on developing robust defence strategies. She holds a Bachelor’s degree in Applied Mathematics and Physics from St. Petersburg University and a joint Master’s degree in Computational Logic from TU Wien, TU Dresden, and the Free University of Bozen-Bolzano. Since 2020, she has also lectured at the St. Pölten University of Applied Sciences, where she teaches a course on privacy and security in federated learning.
Dipl.-Ing. Tanja Šarčević is a machine learning privacy and security researcher at SBA Research and a PhD candidate at TU Wien. She completed her Bachelor’s degree in Computer Science at Zagreb University and a Master’s degree in Logic and Computation at TU Wien, and has since lectured at TU Wien and FH Technikum Wien on privacy and security in machine learning. Her research focuses on trustworthy ML, particularly data privacy and ownership protection.
Madeleine Müller
Dr. Madeleine Müller, BA, MU is a Senior Researcher and Consultant at the Research Institute – Digital Human Rights Center. She studied law and philosophy at the University of Vienna and at the Université Paris 1 Panthéon-Sorbonne and completed the Master’s programme ‘Political Philosophy’ at the Universitat Pompeu Fabra Barcelona. She specialises in digital human rights, with a strong focus on data protection law, artificial intelligence, automated decision-making, platform regulation, and the rights of affected persons.
Walter Hötzendorfer
Dipl.-Ing. Dr. Walter Hötzendorfer is a Senior Researcher at the Research Institute – Digital Human Rights Center in Vienna. He studied business information systems at the Vienna University of Technology and law at the Universities of Vienna and Sheffield. After working in legal consulting and software engineering, he was a Researcher at the University of Vienna Centre for Computers and Law from 2011 to 2016, where he completed a PhD on Data Protection and Privacy by Design in Federated Identity Management. Dr. Hötzendorfer leads several work packages on data protection, data security and ethics in large medical research projects and advises organisations of various types on implementing the GDPR. He lectures at universities in Austria and abroad and has authored numerous publications on data protection law, privacy by design, privacy engineering, the AI Act, network and information security (NIS) and related topics. He is a member of the Data Protection Council of the Republic of Austria, a board member of the Austrian Computer Society (OCG), co-leader of the OCG Forum Privacy and a member of the OCG Certification Committee.
Rudolf Mayer
Rudolf Mayer has been a senior researcher at SBA Research since February 2011. He received his master’s degree in Business Informatics from TU Wien in 2004 and his master’s degree in Computer Science in 2012, and is currently working towards a PhD. At SBA Research, his research focuses on machine learning, in particular privacy-preserving learning and the security of machine learning, as well as data management. He works on these topics in ongoing research projects including FeatureCloud (EU H2020), which focuses on privacy-preserving federated learning, and several projects funded by the Austrian Research Promotion Agency (FFG): WellFort, KnoP-2D, PRIMAL, and Gastric.