Grand Challenge

AffectiveArt 2026

Fine-Grained Emotion Understanding
& Generation in Artistic Images

Location: Rio de Janeiro, Brazil
Date: November 2026
News
[March 27, 2026] Registration for the AffectiveArt Challenge 2026 is officially open! Check the instructions and register your team here.

Introduction

The ACM Multimedia 2026 Grand Challenge invites you to participate in the AffectiveArt Challenge 2026: Fine-Grained Emotion Understanding and Generation in Artistic Images.

The Challenge of Affective Intelligence

With the scaling of large language models and the emergence of strong instruction-following agents, AI systems have made striking progress in logical reasoning and generation. Yet, these advances do not automatically translate into affective intelligence. Emotion understanding and artistic appreciation are rooted in subjective human experience, cultural context, and subtle perceptual cues.

In fine art, meaning is often conveyed indirectly through visual decisions—such as brushstroke rhythm, tonal contrast, color harmony, spatial tension, and compositional balance—rather than through explicit objects alone. Current generative models, largely trained on web-scale photographic data, tend to prioritize semantic correctness over affective coherence. They frequently treat emotion as a superficial style token, producing outputs that are semantically correct yet affectively ambiguous.

What Makes This Challenge Unique

Bridging this gap requires benchmarks that connect what is depicted with how emotion is expressed visually:

  • Large-Scale Dataset: Built upon the EmoArt dataset with 132,664 artworks across 56 artistic styles.
  • Hierarchical Annotations: Structured affective annotations including visual attributes (brushwork, composition, color, line, light) and therapeutic potential.
  • Two Complementary Tracks: Focusing on both emotionally coherent synthesis (Track 1) and fine-grained affect recognition (Track 2).

🏆 Special Award: The top two teams from Track 1 and the top team from Track 2 will be invited to submit papers recommended to the ACM Multimedia 2026 main conference.

For any inquiries regarding the challenge, please contact the organizing committee at affectiveartchallenge2026@gmail.com.

Be Part of the Future

This challenge at ACM Multimedia 2026 aims to be a sustainable benchmark initiative advancing affective computing and emotion-aware artistic AI. This is your chance to push the boundaries—we can't wait to see what you create!

Challenge Tracks & Submission

AffectiveArt Challenge 2026 consists of two complementary tracks:

Track 1: Emotion-Aware Artistic Image Generation

Problem Statement: Given a multimodal prompt describing semantic content, artistic movement, and desired emotional state, the goal is to generate an image that fulfills all three conditions.

Scientific Challenge: Participants must develop models that can disentangle style from emotion. Successful models must learn to subtly manipulate visual attributes to achieve a hybrid state (e.g., a "Calm" Expressionist painting).

Evaluation Metrics: Fréchet Inception Distance (FID), Attribute Alignment Score (AAS), and LPIPS Perceptual Similarity.
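As an illustration of the first metric, the sketch below computes FID directly from its definition using NumPy. It assumes feature embeddings have already been extracted for the real and generated images (in practice these are Inception-v3 activations; the arrays here are synthetic stand-ins), and it computes the matrix-square-root trace via the eigenvalues of the covariance product. This is a minimal sketch, not the official evaluation script.

```python
import numpy as np

def fid(feats_real, feats_gen):
    """Fréchet Inception Distance between two sets of feature vectors.

    FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * (S1 @ S2)^{1/2}),
    where mu/S are the mean and covariance of each (N, D) feature set.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    # Tr((S1 @ S2)^{1/2}) via eigenvalues: the product of two PSD
    # matrices has non-negative real eigenvalues, so the trace of its
    # square root is the sum of their square roots.
    eigvals = np.linalg.eigvals(s1 @ s2)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0, None)).sum()
    return float(((mu1 - mu2) ** 2).sum()
                 + np.trace(s1) + np.trace(s2) - 2 * tr_sqrt)

rng = np.random.default_rng(0)
a = rng.normal(size=(500, 8))
print(fid(a, a))        # identical distributions: near 0
print(fid(a, a + 1.0))  # a mean shift of 1 in each of 8 dims: near 8
```

Lower FID is better; a shared mean and covariance drive it to zero, which is why the second call isolates the mean-shift term.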

🏆 Awards: The top two teams will be eligible to submit papers recommended to the main conference.

Track 2: Multidimensional Art Emotion Understanding

Problem Statement: Given an input artwork, a model is expected to produce a report covering emotion prediction, binary VA prediction, and textual attribute analysis (Brushwork, Composition, Color, Line, Light).

Scientific Challenge: Connecting low-level visual patterns to high-level affective judgments. Models should explain emotions through interpretable attribute evidence.

Evaluation Metrics: Top-1 Accuracy and Macro F1-score for 12-way Emotion Classification, Valence Prediction, and Arousal Prediction.
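To make the ranking metrics concrete, here is a minimal pure-Python sketch of Top-1 accuracy and Macro F1 for the emotion-classification task. Macro F1 averages per-class F1 without weighting by class frequency, so rare emotions count as much as common ones. The three labels used in the toy run are a subset of the 12 categories; this is illustrative, not the official scorer.

```python
def top1_accuracy(y_true, y_pred):
    """Fraction of samples whose top prediction matches the label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / len(f1s)

# Toy run with 3 of the 12 emotion labels.
truth = ["Happy", "Sad", "Calm", "Happy", "Sad"]
pred  = ["Happy", "Calm", "Calm", "Happy", "Sad"]
print(top1_accuracy(truth, pred))                                 # 0.8
print(round(macro_f1(truth, pred, ["Happy", "Sad", "Calm"]), 3))  # 0.778
```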

🏆 Awards: The top team will be eligible to submit a paper recommended to the main conference.

Dataset

EmoArt Dataset

The foundation of this challenge is the EmoArt dataset, which comprises 132,664 artworks across 56 artistic styles with structured affective annotations. It creates a bridge between Computer Vision, Art History, and Psychology.

  • Data Collection: Over 200,000 raw images aggregated from sources including WikiArt and The Metropolitan Museum of Art.
  • Filtering: Four-stage protocol including Art Form Filtering (paintings only), Content Safety, Quality Assurance, and Category Balancing.
  • Diversity: Includes significant representation of non-Western art, such as Ukiyo-e and Chinese Ink Wash painting.

Hierarchical Annotation

  • Content Description: Detailed narratives of the scene integrating emotional cues.
  • Visual Attributes: Structured descriptions for five key elements: Brushwork, Composition, Color, Line, and Light.
  • Valence and Arousal (VA): Continuous values based on Russell's Circumplex Model.
  • Dominant Emotion: 12 categories, including Excited, Happy, Sad, Bored, and Calm.
  • Therapeutic Potential: Labels indicating psychological benefits (e.g., "Relieve Stress").
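The VA and emotion annotations above can be related through Russell's Circumplex Model, which places emotions in a 2-D valence-arousal plane. The sketch below binarizes continuous VA scores (as Track 2's binary VA prediction requires) and names the circumplex quadrant a point falls in. The zero threshold, score scale, and quadrant labels are assumptions for illustration; the dataset's actual conventions are not specified here.

```python
def binarize_va(valence, arousal, threshold=0.0):
    """Map continuous VA scores to binary labels.

    Assumes scores are centred so that `threshold` separates positive
    from negative valence and high from low arousal.
    """
    return ("positive" if valence >= threshold else "negative",
            "high" if arousal >= threshold else "low")

def circumplex_quadrant(valence, arousal):
    """Name the quadrant of Russell's circumplex a (V, A) point falls in."""
    v_pos, a_high = valence >= 0, arousal >= 0
    if v_pos and a_high:
        return "excited/happy"   # +V, +A
    if v_pos:
        return "calm/content"    # +V, -A
    if a_high:
        return "tense/angry"     # -V, +A
    return "sad/bored"           # -V, -A

print(binarize_va(0.6, -0.3))          # ('positive', 'low')
print(circumplex_quadrant(0.6, -0.3))  # calm/content
```

Note how the quadrants line up with the dominant-emotion categories listed above: Excited and Happy sit in the high-arousal positive quadrant, Calm in the low-arousal positive one, and Sad and Bored in the low-arousal negative one.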
  • 132K+ Artworks
  • 56 Artistic Styles
  • 12 Emotion Categories
  • 5 Annotation Dimensions
Table 1. Comparison of Emotion-related Datasets

Dataset         Image Type   Label Source   Tasks   Images   VA
Artemis         Art          Human          G&R     80K
EmoSet          Photo/Art    Human&LLM      G&R     3300K
EmoArt (Ours)   Art          Human&LLM      G&R     130K

Important Dates

All deadlines are at 11:59 p.m. Anywhere on Earth (AoE, UTC−12).

Event                                                        Date
Challenge Registration                                       March 27, 2026 – June 10, 2026
Test Set Released                                            April 20, 2026
Results Submission Open                                      May 1, 2026
Results Submission Deadline & Reproducibility Verification   June 10, 2026
Challenge Result Announcement                                June 15, 2026
Paper Submission                                             June 25, 2026
Decision/Author Notification                                 July 16, 2026
Camera-Ready Submission                                      August 6, 2026


Registration

The challenge registration is open from March 27, 2026 to June 10, 2026. Please note that each team must be registered by a full-time researcher (e.g., a university faculty member or research scientist).

Registration Form

Instructions & Rules

  • Step 1: Register for the Challenge
    Please fill out our official registration form. Remember that registration must be submitted by a full-time researcher.
  • Step 2: Download the EmoArt Dataset
    Once your team is prepared, you can download the EmoArt dataset from our HuggingFace page to begin developing and training your models.
  • Step 3: Wait for the Test Set
    We will construct a hidden set of 2,000 items specifically for testing your model's real-world affective capabilities. The detailed evaluation metrics, testing modalities, and the test set will be officially released on April 20, 2026.

Organizers

  • Hongxia Xie, Jilin University
  • Cheng Zhang, Jilin University
  • Wen-Huang Cheng, National Taiwan University
  • Ling Lo, National Yang Ming Chiao Tung University
  • Hong-Han Shuai, National Yang Ming Chiao Tung University
  • Jianlong Fu, Microsoft Research Asia
  • Sicheng Zhao, Tsinghua University
  • Sanghoon Lee, Yonsei University

Program Committee

  • Zihan Li, Master's Student at Jilin University.
  • Yifan Duan, Undergraduate Student at Jilin University.
  • Yuer Liu, Undergraduate Student at Jilin University.
  • Jian-Yu Jiang-Lin, Ph.D. Student at National Taiwan University.
  • Kang-Yang Huang, Research Assistant at National Taiwan University.
  • Ling Zou, Ph.D. Student at National Taiwan University.
  • Chieh-Yun Chen, Ph.D. Student at Georgia Institute of Technology.
  • Ziyun Li, Postdoctoral Researcher at KTH Royal Institute of Technology.
  • Xing Huang, Lianxin Digital.