Data and Model Protection in Generative AI
A full-day workshop co-located with the Canadian Conference on AI, Robots & Vision.
About the Workshop
Generative Artificial Intelligence (GenAI) systems are increasingly deployed in high-impact domains, raising critical concerns about the protection of training data, deployed models, and generated outputs. These systems face a growing range of security and privacy risks, including data leakage, membership and attribute inference, model extraction, prompt injection, poisoning attacks, and misuse of generated content.
Addressing these challenges requires not only robust technical defenses, but also thoughtful alignment with emerging governance, regulatory, and policy frameworks.
The Data and Model Protection in Generative AI (DMP) workshop at AI/CRV 2026 brings together researchers, practitioners, and policymakers to examine the evolving threat landscape affecting GenAI systems and to discuss effective mitigation strategies.
Call for Papers
We invite submissions to the Data and Model Protection in Generative AI (DMP) workshop at AI/CRV 2026, which examines the evolving threat landscape affecting GenAI systems and effective mitigation strategies.
Topics of Interest
Topics include, but are not limited to, the following:
- Data poisoning, backdoor attacks, and defenses in machine learning
- Privacy risks and training data leakage in generative models
- Dataset provenance, attribution, and governance
- Model extraction, model stealing, and intellectual property protection
- Model watermarking, fingerprinting, and ownership verification
- Security risks in generative AI (e.g., prompt injection, jailbreak attacks)
- Robust and secure machine learning pipelines
- Governance, auditing, and responsible deployment of AI systems
Submission Guidelines
Submissions may report new research results, empirical analyses, system implementations, benchmarks, negative results, or visionary perspectives (e.g., position papers).
- Long track: Up to 9 pages (excluding references)
- Short track: Up to 4 pages (excluding references)
- Formatting: Use the official Canadian AI 2026 style files and submit a single, anonymized PDF, following the same anonymization policy as Canadian AI submissions.
- Appendix: Include any supplementary material in the same PDF; there is no page limit for the appendix.
Review Process
Submissions will be reviewed by the workshop program chairs. Accepted papers will be presented as talks or posters. The workshop is non-archival, and authors are free to submit extended versions of their work to archival venues.
Important Dates
Confirmed Speakers
Jekaterina Novikova
Vanguard Group
Yangyi Liu
Vanguard Group
Sirisha Rambhatla
University of Waterloo
Reza Samavi
Toronto Metropolitan University
Mathias Lécuyer
University of British Columbia
Linyi Li
Simon Fraser University
Joanna Redden
Western University
Additional speakers to be announced.