Topic Summary

UC terminated its subscriptions with the world's largest scientific publisher in a push for open access to publicly funded research, since "knowledge should not be accessible only to those who can pay," said Robert May, chair of UC's faculty Academic Senate. Similarly, machine learning should not be accessible only to those who can pay. Modern machine learning is migrating to the era of complex models (e.g., deep neural networks), which place a strong emphasis on data representation. This learning paradigm is known as representation learning: via deep neural networks, learned representations often yield much better performance than hand-designed representations. However, representation learning normally requires a plethora of well-annotated data, which giant companies have the resources to collect. For startups or non-profit organizations, such data is barely acquirable due to the cost of labeling or the intrinsic scarcity of data in the given domain. These practical issues motivate us to research weakly-supervised representation learning (WSRL), which does not require such a huge amount of annotated data. We define WSRL as the collection of representation learning problem settings and algorithms that share the same goals as supervised representation learning but can access less supervision than supervised representation learning. In this workshop, we discuss both theoretical and applied aspects of WSRL.

Topics of Interest

The WSRL workshop includes, but is not limited to, the following topics:

Topic Description

The focus of this workshop is five types of weak supervision: incomplete supervision, inexact supervision, inaccurate supervision, cross-domain supervision, and imperfect demonstration. Incomplete supervision considers the case where only a subset of the training data is given with ground-truth labels while the remaining data are unlabeled, as in semi-supervised representation learning and positive-unlabeled representation learning. Inexact supervision considers the situation where some supervision information is given but is not as exact as desired, i.e., only coarse-grained labels are available. For example, if we want to classify every pixel of an image, rather than the image itself, then ImageNet becomes a benchmark with inexact supervision. Multi-instance representation learning also belongs to inexact supervision, where we do not know exactly which instance in a bag corresponds to the given ground-truth label. Inaccurate supervision considers the situation where the supervision information is not always the ground truth, as in label-noise representation learning.
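To make the first three settings concrete, here is a minimal NumPy sketch of how each kind of weak label can be derived from clean labels. All names, class counts, and noise rates below are illustrative assumptions, not taken from any workshop paper.

```python
# Illustrative sketch of incomplete, inexact, and inaccurate supervision.
# Class counts, labeling rate, and noise rate are made-up parameters.
import numpy as np

rng = np.random.default_rng(0)
n, num_classes = 1000, 4
y_true = rng.integers(0, num_classes, size=n)   # clean ground-truth labels

# Incomplete supervision: only 10% of examples keep their labels; the
# rest are marked unlabeled (-1), as in semi-supervised learning.
labeled_mask = rng.random(n) < 0.1
y_incomplete = np.where(labeled_mask, y_true, -1)

# Inexact supervision: only coarse-grained labels are observed,
# e.g. fine classes {0,1} -> super-class 0 and {2,3} -> super-class 1.
y_inexact = y_true // 2

# Inaccurate supervision: labels are corrupted through a noise
# transition matrix T, where T[i, j] = P(observed = j | true = i).
noise_rate = 0.2
T = np.full((num_classes, num_classes), noise_rate / (num_classes - 1))
np.fill_diagonal(T, 1.0 - noise_rate)
y_inaccurate = np.array([rng.choice(num_classes, p=T[y]) for y in y_true])
```

In label-noise representation learning, a transition matrix like T in this sketch is the object that many correction methods try to estimate from noisy data.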

Cross-domain supervision considers the situation where the supervision information is scarce or even non-existent in the current domain but can possibly be derived from other domains. Examples appear in zero-/one-/few-shot representation learning, where external knowledge from other domains is used to overcome the problem of having too little or even no supervision in the original domain. Imperfect demonstration considers the situation, arising in inverse reinforcement representation learning and imitation representation learning, where the agent learns from imperfect or non-expert demonstrations. For example, AlphaGo learns a policy from sequences of states and actions (expert demonstrations), yet even if an expert player wins a game, it is not guaranteed that every action in the sequence was optimal.
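The few-shot side of cross-domain supervision is typically studied via episodic sampling, where each episode contains only a handful of labeled examples per class. Below is a minimal NumPy sketch of N-way K-shot episode construction; the function name and parameters (sample_episode, n_way, k_shot, k_query) are hypothetical, not from any workshop material.

```python
# Hypothetical sketch of N-way K-shot episode sampling for few-shot learning.
import numpy as np

def sample_episode(features, labels, n_way=5, k_shot=1, k_query=15, rng=None):
    """Sample one N-way K-shot episode: a small labeled support set for
    adaptation and a disjoint query set for evaluation."""
    if rng is None:
        rng = np.random.default_rng()
    # Pick n_way classes at random from all classes present in `labels`.
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support_idx, query_idx = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(labels == c))
        support_idx.extend(idx[:k_shot])                # K labeled "shots"
        query_idx.extend(idx[k_shot:k_shot + k_query])  # held-out queries
    return (features[support_idx], labels[support_idx],
            features[query_idx], labels[query_idx])

# Illustrative usage with random data (100 examples, 10 classes, 16-dim features).
X = np.random.default_rng(0).normal(size=(100, 16))
y = np.repeat(np.arange(10), 10)
Xs, ys, Xq, yq = sample_episode(X, y, n_way=5, k_shot=1, k_query=5)
```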

This workshop will discuss the fundamental theory of weakly-supervised representation learning. Although theories of weakly-supervised statistical learning already exist, extending these results to weakly-supervised representation learning remains a challenge. The workshop will also cover broad applications of weakly-supervised representation learning, such as weakly-supervised object detection (computer vision), weakly-supervised sequence modeling (natural language processing), weakly-supervised cross-media retrieval (information retrieval), and weakly-supervised medical image segmentation (healthcare analysis).

Submission Guidelines

Workshop submissions and camera-ready versions will be handled by email. Please e-mail your paper to bhanml@comp.hkbu.edu.hk with the subject line ACML20-WSRL-{paper name}. Papers should be formatted according to the ACML20 formatting instructions for the Conference Track. Submissions of 2 pages will be considered for a poster, while submissions of at least 4 pages will be considered for an oral presentation.

ACML20-WSRL is a non-archival venue and there will be no published proceedings; the papers will be posted on the workshop website. It is therefore possible to submit to other conferences and journals both in parallel with and after ACML20-WSRL. We also welcome submissions to ACML20-WSRL that are currently under review at other conferences and workshops. At least one author of each accepted paper must register for the workshop. Please see the ACML 2020 website for information about registration.

List of Invited Speakers

Kun Zhang, Carnegie Mellon University (confirmed)

Aditya K. Menon, Google Research (confirmed)

Schedule and Zoom Recordings

The workshop combines invited talks, presentations of accepted papers, and informal discussions.

The workshop is entirely free and held online. Check our workshop recordings.

Time (GMT+7) Event
8:30-8:35 Opening Ceremony
  Host: Masashi Sugiyama
8:35-9:20 Keynote Talk 1
  Title: Learning and Using Causal Representations
  Speaker: Kun Zhang (Carnegie Mellon University)
9:20-9:30 Contributed Talk 1
  Title: Part-dependent Label Noise: Towards Instance-dependent Label Noise
  Speaker: Xiaobo Xia (The University of Sydney)
9:30-10:15 Keynote Talk 2
  Title: Training Deep Networks under Label Noise: New Views on Old Tricks
  Speaker: Aditya K. Menon (Google Research)
10:15-10:25 Contributed Talk 2
  Title: LTF: A Label Transformation Framework for Correcting Target Shift
  Speaker: Jiaxian Guo (The University of Sydney)
10:25-10:35 Contributed Talk 3
  Title: Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning
  Speaker: Yu Yao (The University of Sydney)
10:35-10:45 Contributed Talk 4
  Title: Learning from Aggregate Observations
  Speaker: Yivan Zhang (The University of Tokyo)
10:45-11:15 Panel Discussion & Concluding Remarks
  Host: Bo Han
  Guests: Masashi Sugiyama, Ivor W. Tsang, Kun Zhang, Aditya K. Menon, Gang Niu, Tongliang Liu, Mingming Gong, Quanming Yao

Important Dates

Submission Deadline: 05:00 PM (Pacific Time), Nov 5th, 2020 (1st Round)

Acceptance Notifications: Nov 10th, 2020

Organizers

Bo Han, Hong Kong Baptist University / RIKEN, Hong Kong SAR, China.

Tongliang Liu, The University of Sydney, Australia.

Mingming Gong, The University of Melbourne, Australia.

Quanming Yao, Tsinghua University / 4Paradigm Inc., China.

Gang Niu, RIKEN, Japan.

Ivor W. Tsang, University of Technology Sydney, Australia.

Masashi Sugiyama, RIKEN / University of Tokyo, Japan.

Sponsors

4Paradigm Inc.

Previous Workshops

ACML2019 WSL Workshop, Nagoya, Japan.

SDM2020 WSUL Workshop, Ohio, United States.