
Welcome to FRCSyn, the Face Recognition Challenge in the Era of Synthetic Data organized at WACV 2024.

The summary paper of the FRCSyn Challenge is available here.

To promote and advance the use of synthetic data for face recognition, we organize the Face Recognition Challenge in the Era of Synthetic Data (FRCSyn). This challenge intends to explore the application of synthetic data to the field of face recognition in order to address current limitations of the technology, for example, the privacy concerns associated with real data, bias across demographic groups (e.g., ethnicity and gender), and the lack of performance in challenging conditions such as large age gaps between enrolment and testing, pose variations, occlusions, etc.

This challenge intends to provide an in-depth analysis of the following research questions:

The FRCSyn challenge will analyze the improvements achieved by using synthetic data together with state-of-the-art face recognition technology in realistic scenarios, providing valuable contributions to advance the field.

News

Tasks

The FRCSyn challenge focuses on the two following challenges existing in current face recognition technology:

Within each task, there are two sub-tasks that propose alternative approaches for training face recognition technology: one exclusively with synthetic data and the other with a possible combination of real and synthetic data.
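As an illustration of these two training settings, the minimal PyTorch sketch below mixes a real and a synthetic image folder into one training set, taking care to keep identity labels from the two sources disjoint. The directory names, image size, and loss mentioned in the comments are assumptions for illustration, not part of the official FRCSyn protocol.

```python
# Minimal sketch (not the official FRCSyn protocol): mixing real and
# synthetic face datasets into a single training set with PyTorch.
# Directory names and transforms are illustrative assumptions.
from torch.utils.data import ConcatDataset, DataLoader, Dataset
from torchvision import datasets, transforms

class OffsetLabels(Dataset):
    """Shift class indices so identities from the two sources stay distinct."""
    def __init__(self, base, offset):
        self.base, self.offset = base, offset
    def __len__(self):
        return len(self.base)
    def __getitem__(self, idx):
        img, label = self.base[idx]
        return img, label + self.offset

transform = transforms.Compose([
    transforms.Resize((112, 112)),   # typical face recognition input size
    transforms.ToTensor(),
])

real_ds = datasets.ImageFolder("real_faces/", transform=transform)        # hypothetical path
synth_ds = datasets.ImageFolder("synthetic_faces/", transform=transform)  # hypothetical path

# Sub-task 1 would use synth_ds alone; sub-task 2 mixes both sources.
train_ds = ConcatDataset([real_ds, OffsetLabels(synth_ds, len(real_ds.classes))])
loader = DataLoader(train_ds, batch_size=128, shuffle=True, num_workers=4)

for images, labels in loader:
    # feed batches to a face recognition backbone (e.g., ResNet + margin-based loss)
    pass
```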

Synthetic Datasets

In the FRCSyn Challenge, we will provide participants with our synthetic datasets after registration. They are based on our two recent approaches:

DCFace: a novel framework entirely based on diffusion models, composed of i) a sampling stage for the generation of synthetic identities X_ID, and ii) a mixing stage for the generation of images X_ID,sty with the same identities X_ID from the sampling stage and the style selected from a “style bank” of images X_sty.

Reference: M. Kim, F. Liu, A. Jain and X. Liu, “DCFace: Synthetic Face Generation with Dual Condition Diffusion Model”, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.
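For readers unfamiliar with this two-stage design, a high-level sketch of a DCFace-style generation loop is given below. The callables (id_sampler, style_bank, mixer) are hypothetical placeholders, not the released DCFace API; see the reference above for the actual method.

```python
# High-level sketch of a DCFace-style dual-condition generation loop.
# All callables here are hypothetical placeholders for illustration only.
import random

def generate_synthetic_dataset(id_sampler, style_bank, mixer,
                               num_ids=10_000, imgs_per_id=50):
    dataset = []
    for subject in range(num_ids):
        # Stage i) sampling: draw one synthetic identity image X_ID
        x_id = id_sampler()
        for _ in range(imgs_per_id):
            # Stage ii) mixing: pick a style image X_sty from the style bank
            # and generate X_ID,sty sharing the identity of X_ID
            x_sty = random.choice(style_bank)
            x_id_sty = mixer(x_id, x_sty)
            dataset.append((x_id_sty, subject))   # image labeled with its identity
    return dataset
```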

GANDiffFace: a novel framework based on GANs and Diffusion models that provides fully-synthetic face recognition datasets with the desired properties of human face realism, controllable demographic distributions, and realistic intra-class variations. Best Paper Award at AMFG @ ICCV 2023.

Reference: P. Melzi, C. Rathgeb, R. Tolosana, R. Vera-Rodriguez, D. Lawatsch, F. Domin, M. Schaubert, “GANDiffFace: Controllable Generation of Synthetic Datasets for Face Recognition with Realistic Variations”, in Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 2023.
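To make the idea of controllable demographic distributions concrete, the sketch below draws identity attributes from a user-specified target distribution before generation. The attribute categories, proportions, and the generate_identity callable are assumptions for illustration, not the GANDiffFace interface.

```python
# Illustrative sketch of demographically balanced identity sampling.
# Categories, proportions, and generate_identity are assumptions for
# demonstration; they do not correspond to the actual GANDiffFace code.
import random

target_distribution = {
    ("female", "asian"): 1 / 6, ("male", "asian"): 1 / 6,
    ("female", "black"): 1 / 6, ("male", "black"): 1 / 6,
    ("female", "white"): 1 / 6, ("male", "white"): 1 / 6,
}

def sample_demographics(dist):
    groups, weights = zip(*dist.items())
    return random.choices(groups, weights=weights, k=1)[0]

def build_balanced_identities(num_ids, generate_identity):
    """generate_identity(gender, ethnicity) is a hypothetical generator call."""
    identities = []
    for _ in range(num_ids):
        gender, ethnicity = sample_demographics(target_distribution)
        identities.append(generate_identity(gender, ethnicity))
    return identities
```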

Registration

The platform used in the FRCSyn Challenge is CodaLab. Participants need to register to take part in the challenge. Please follow these instructions:

  1. Fill in this form with your information.
  2. Sign up on CodaLab using the same email provided in step 1.
  3. Join the FRCSyn Challenge on CodaLab by clicking the “Participate” tab to register.
  4. We will give you access once we have checked that everything is correct.
  5. You will receive an email with all the instructions to kickstart FRCSyn, including links to download the datasets, the experimental protocol, and an example submission file.

Paper

The best teams of each sub-task will be invited to contribute as co-authors of the summary paper of the FRCSyn challenge. This paper will be published in the proceedings of the WACV 2024 conference. In addition, top performers will be invited to present their methods at the workshop. This presentation can be virtual.

Important Dates

Schedule

Time (HST)     Duration  Activity
8:20 – 8:30    10 min    Introduction
8:30 – 9:15    45 min    Keynote 1: Koki Nagano
9:15 – 10:00   45 min    Keynote 2: Fernando De la Torre
10:00 – 10:15  15 min    1st Break
10:15 – 10:35  20 min    FRCSyn Challenge
10:35 – 11:10  35 min    Top-ranked Teams (5)
11:10 – 11:25  15 min    Notable Teams (3)
11:25 – 11:45  20 min    FRCSyn Challenge: Q&A
11:45 – 12:00  15 min    2nd Break
12:00 – 12:45  45 min    Keynote 3: Xiaoming Liu
12:45 – 12:55  10 min    Closing Notes

Keynote Speakers

TBD

Organizers

Pietro Melzi

Universidad Autonoma de Madrid, Spain

Minchul Kim

Michigan State University, US

Ruben Tolosana

Universidad Autonoma de Madrid, Spain

Christian Rathgeb

Hochschule Darmstadt, Germany

Ruben Vera-Rodriguez

Universidad Autonoma de Madrid, Spain

Aythami Morales

Universidad Autonoma de Madrid, Spain

Xiaoming Liu

Michigan State University, US

Julian Fierrez

Universidad Autonoma de Madrid, Spain

Javier Ortega-Garcia

Universidad Autonoma de Madrid, Spain

Funding