This is an international workshop on weakly supervised learning. The main goal of this workshop is to discuss challenging research topics in weakly supervised learning, such as semi-supervised learning, positive-unlabeled learning, label noise learning, partial label learning, and self-supervised learning, and to foster collaborations among universities and institutes.
Program
Venue & Date
Workshop Venue: Southeast University Jiulonghu Campus, Nanjing, China.
Workshop Date: October 27 09:30--17:00, October 28 09:30--16:00, October 29 09:30--16:30 (China Standard Time (UTC+8)).
Banquet Date: October 28 from 17:30 (China Standard Time (UTC+8)).
Schedule
The workshop will be scheduled in China Standard Time (UTC+8), and the program will combine invited talks and contributed talks.
Registration
Please complete your registration through the WSL 2025 Registration Link by October 10th. The registration fee is CNY 1500 (about USD 200), covering coffee breaks and lunches for all three days.
Note: A confirmation email will be sent immediately after registration. The official invoice will be issued and distributed after the workshop. For any registration or payment inquiries, please contact wangjing91@seu.edu.cn.
Topics
Overview
Machine learning should not be accessible only to those who can pay. Specifically, modern machine learning is migrating to the era of complex models (e.g., deep neural networks), which require a plethora of well-annotated data. Giant companies have enough money to collect well-annotated data, but for startups or non-profit organizations such data are barely acquirable due to the cost of labeling or the intrinsic scarcity of data in the given domain. These practical issues motivate us to focus on weakly supervised learning (WSL), since WSL does not require such a huge amount of annotated data. We define WSL as the collection of machine learning problem settings and algorithms that share the same goals as supervised learning but can only access less supervision than supervised learning. In this workshop, we discuss both theoretical and applied aspects of WSL.
This workshop continues a series of previous workshops held at ACML 2019, SDM 2020, ACML 2020, IJCAI 2021, and ACML 2021 (see Previous Workshops below). Our particular technical emphasis at this workshop is on incomplete supervision, inexact supervision, inaccurate supervision, cross-domain supervision, imperfect demonstration, and weak adversarial supervision. Meanwhile, this workshop will also focus on WSL for Science and Social Good, such as WSL for healthcare, WSL for climate change, WSL for remote sensing, and new public WSL datasets for scientific scenarios. With the emergence of foundation models, WSL has gained new momentum: foundation models can enhance WSL through their rich semantic knowledge and powerful representations, while WSL provides efficient solutions for adapting and aligning foundation models with minimal human supervision.
Topics of Interest
The WSL workshop includes (but is not limited to) the following topics:
- Algorithms and theories of incomplete supervision, e.g., semi-supervised learning, active learning, and positive-unlabeled learning;
- Algorithms and theories of inexact supervision, e.g., multi-instance learning, complementary learning, and open-set learning;
- Algorithms and theories of inaccurate supervision, e.g., crowdsourced learning and label-noise learning;
- Algorithms and theories of cross-domain supervision, e.g., zero-/one-/few-shot learning, transfer learning, and multi-task learning;
- Algorithms and theories of imperfect demonstration, e.g., inverse reinforcement learning and imitation learning with non-expert demonstrations;
- Algorithms and theories of adversarial weakly-supervised learning, e.g., adversarial semi-supervised learning and adversarial label-noise learning;
- Algorithms and theories of self-supervision, e.g., contrastive learning and autoencoder learning;
- Algorithms and theories of WSL in foundation models, e.g., weak-to-strong paradigm, weak supervision signals for model alignment, and fine-tuning with weak feedback;
- Foundation models for WSL, e.g., leveraging large language/vision models to provide weak supervision signals, and foundation model guided label denoising and sample selection;
- Broad applications of weakly supervised learning in the field of computer science, such as weakly supervised object detection (computer vision), weakly supervised sequence modeling (natural language processing), weakly supervised cross-media retrieval (information retrieval), and weakly supervised cooperation policy learning (multi-agent systems);
- WSL for science and social good, such as WSL for COVID-19, WSL for healthcare, WSL for climate change, and WSL for remote sensing, as well as new public datasets for the above WSL research directions (new focus).
Further Descriptions
The focus of this workshop is on six types of weak supervision, namely incomplete supervision, inexact supervision, inaccurate supervision, cross-domain supervision, imperfect demonstration, and weak adversarial supervision, together with self-supervision and the interplay between WSL and foundation models; each is briefly introduced below.
- Incomplete supervision considers the situation where only a subset of the training data is given with ground-truth labels while the remaining data stay unlabeled, as in semi-supervised learning and positive-unlabeled learning (a minimal positive-unlabeled risk-estimator sketch is given after this list).
- Inexact supervision considers the situation where some supervision information is given but is not as exact as desired, i.e., only coarse-grained labels are available. For example, if we consider classifying every pixel of an image rather than the image itself, then ImageNet becomes a benchmark with inexact supervision. Multi-instance learning also belongs to inexact supervision, where we do not know exactly which instance in a bag corresponds to the given ground-truth label.
- Inaccurate supervision considers the situation where the supervision information is not always the ground-truth, such as label-noise learning.
- Cross-domain supervision considers the situation where the supervision information is scarce or even non-existent in the current domain but can possibly be derived from other domains. Examples of cross-domain supervision appear in zero-/one-/few-shot learning, where external knowledge from other domains is usually used to overcome the problem of little or even no supervision in the original domain.
- Imperfect demonstration considers the situation for inverse reinforcement learning and imitation learning, where the agent learns with imperfect or non-expert demonstrations. For example, AlphaGo learns a policy from a sequence of states and actions (expert demonstration). Even if an expert player wins a game, it is not guaranteed that every action in the sequence is optimal.
- Weak adversarial supervision considers the situation where weak supervision meets adversarial robustness. As machine learning models are increasingly deployed in real-world applications, their security attracts more and more attention from both academia and industry. Therefore, many robust learning algorithms aim to defend against various evasion attacks, e.g., adversarial attacks, privacy attacks, and model stealing attacks. However, almost all of these robust algorithms (against evasion attacks) implicitly assume strong supervision signals (i.e., no noisy labels in the training data), an assumption that rarely holds in practice. It is therefore both practical and urgent to account for imperfect supervision signals when developing evasion-robust algorithms.
- Self-supervision considers the unsupervised setting: it pre-trains a generic feature representation by autonomously constructing pseudo supervision (e.g., similarity contrast and sample reconstruction) from the raw data, and the learned representation can then be applied to various downstream tasks such as classification, retrieval, and clustering (a minimal contrastive-loss sketch is given after this list).
- WSL in foundation models considers the critical challenge of efficiently adapting and aligning foundation models to specific tasks and requirements. While foundation models acquire broad knowledge through pre-training on massive unlabeled data, they still face challenges in task-specific alignment, safety constraints, and behavioral refinement. WSL offers theoretical frameworks and practical approaches to achieve these objectives with minimal human supervision, enabling weak-to-strong generalization and efficient fine-tuning through various forms of weak supervision signals (e.g., preferences, rankings, constraints). This paradigm is particularly crucial for developing more trustworthy AI systems.
- Foundation models for WSL leverage the rich semantic knowledge and powerful representations learned by foundation models to enhance WSL tasks. The broad knowledge captured by foundation models enables multiple key capabilities: generating diverse forms of weak supervision signals, providing semantic understanding for label reasoning, augmenting training data through knowledge transfer, and improving WSL algorithms through better feature representations and cross-modal correlations.
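To make the incomplete-supervision setting above concrete, the following is a minimal sketch (in PyTorch; not part of the workshop material) of a non-negative risk estimator of the kind used in positive-unlabeled learning. The function name nn_pu_risk, the logistic surrogate loss, and the assumption that the positive class prior pi_p is known are illustrative choices, not a prescribed implementation.

import torch
import torch.nn.functional as F

def nn_pu_risk(scores_p, scores_u, pi_p):
    """Illustrative non-negative PU risk. scores_p / scores_u are the model's
    real-valued outputs on positive and unlabeled examples; pi_p is the
    (assumed known) prior of the positive class."""
    # Logistic surrogate loss: softplus(-z) is the loss for the positive label,
    # softplus(z) is the loss for the negative label, where z is the model score.
    risk_p_pos = F.softplus(-scores_p).mean()  # positives treated with label +1
    risk_p_neg = F.softplus(scores_p).mean()   # positives treated with label -1
    risk_u_neg = F.softplus(scores_u).mean()   # unlabeled data treated with label -1
    # The negative-class risk is estimated from the unlabeled data and clamped at
    # zero, since a negative empirical risk is a sign of overfitting.
    risk_neg = torch.clamp(risk_u_neg - pi_p * risk_p_neg, min=0.0)
    return pi_p * risk_p_pos + risk_neg

During training, this scalar would simply replace the usual supervised loss and be back-propagated through whatever model produces the scores.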
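Likewise, the contrastive flavour of self-supervision mentioned above can be illustrated with a small InfoNCE-style loss over two augmented views of the same batch; the function name info_nce, the temperature value, and the simplification of using only cross-view pairs as negatives are assumptions for illustration.

import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss. z1 and z2 are [N, d]
    embeddings of two augmented views of the same N samples."""
    z1 = F.normalize(z1, dim=1)  # project embeddings onto the unit sphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature  # pairwise cosine similarities, temperature-scaled
    targets = torch.arange(z1.size(0), device=z1.device)  # matching pairs lie on the diagonal
    # Each row is a softmax classification: which view-2 embedding matches view-1 sample i?
    return F.cross_entropy(logits, targets)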
Meanwhile, this workshop will continue discussing broad applications of weakly supervised learning in the field of computer science, such as weakly supervised object detection (computer vision), weakly supervised sequence modeling (natural language processing), weakly supervised cross-media retrieval (information retrieval), and weakly supervised cooperation policy learning (multi-agent systems).
Organizers
General Chairs
Masashi Sugiyama, RIKEN / The University of Tokyo, Japan.
Xin Geng, Southeast University, China.
Program Chairs
Jiaqi Lv, Southeast University, China.
Lei Feng, Southeast University, China.
Bo Han, Hong Kong Baptist University, Hong Kong SAR, China.
Tongliang Liu, The University of Sydney, Australia.
Gang Niu, RIKEN, Japan.
Local Organization Committee (in alphabetical order)
Lei Qi, Southeast University, China.
Yiguo Qiao, Southeast University, China.
Jing Wang, Southeast University, China.
Ning Xu, Southeast University, China.
Organization
Organizing Institution
School of Computer Science and Engineering, Southeast University.
Supporting Organizations
School of Mathematics, Southeast University.
Jiangsu Association of Artificial Intelligence.
Previous Workshops
WSL 2024 Workshop, Brisbane, Australia.
WSL 2023 Workshop, Tokyo, Japan.
ACML 2022 WSL Workshop, Online.
ACML 2021 WSL Workshop, Online.
IJCAI 2021 WSRL Workshop, Online.
ACML 2020 WSRL Workshop, Online.
SDM 2020 WSUL Workshop, Ohio, United States.
ACML 2019 WSL Workshop, Nagoya, Japan.