
Note that this workshop is part of the Asian Conference on Machine Learning (ACML) 2022, so to attend the workshop you must first register for the conference. The Airmeet link can then be found on the virtual site of ACML 2022 (if you attend the conference online).

Since ACML 2022 uses Airmeet to deliver the online conference, please check that you can successfully log in to the ACML 2022 Airmeet (after you register for ACML 2022, Airmeet Events will send you an email containing the “enter event” link). The email from Airmeet might be blocked by your spam filter, so please double-check. If your web browser has remembered your login cookie, you should be able to join ACML 2022 via this link.

Program

Schedule

The workshop combines invited keynote talks with contributed talks (including invited short talks).

  • Opening Ceremony
    Host: Masashi Sugiyama (he/him)
  • Invited Keynote Talk 1
    Title: Recent Progress and Challenges in Few-shot Learning
    Speaker: Eleni Triantafillou (she/her)
  • Contributed Talks (Part I)
    Title: MissDAG: Causal Discovery in the Presence of Missing Data with Continuous Additive Noise Models
    Speaker: Erdun Gao (he/him)
  • Bio Breaks and Social
  • Contributed Talks (Part II)
    Invited Short Talk: Robust Semi-Supervised Learning for Open Environments
    Speaker: Lan-Zhe Guo (he/him)
    Title: Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data
    Speaker: Harsh Rangwani (he/him)
    Invited Short Talk: Simplified Graph Learning for Inductive Short Text Classification
    Speaker: Yaqing Wang (she/her)
    Title: A Boosting Algorithm for Positive-Unlabeled Learning
    Speaker: Yawen Zhao (she/her)
  • Invited Keynote Talk 2
    Title: Causal Representation Learning: Successes and Challenges
    Speaker: Kun Zhang (he/him)
  • Panel Discussion and Concluding Remarks
    Host: Feng Liu (he/him)

Invited Keynote Speakers

Dr Eleni Triantafillou (she/her), Google Brain

Prof Kun Zhang (he/him), CMU and MBZUAI

Invited Short-talk Speakers

Dr Lan-Zhe Guo (he/him), LAMDA Group, Nanjing University

Dr Yaqing Wang (she/her), Baidu Research

Topics

Overview

Machine learning should not be accessible only to those who can pay for it. Modern machine learning is migrating to the era of complex models (e.g., deep neural networks), which require a plethora of well-annotated data. Giant companies have enough money to collect well-annotated data, but for startups or non-profit organizations such data is hardly obtainable, due to the cost of labeling or the intrinsic scarcity of data in the given domain. These practical issues motivate us to study weakly supervised learning (WSL), which does not require such a huge amount of annotated data. We define WSL as the collection of machine learning problem settings and algorithms that share the same goals as supervised learning but can access only weaker supervision information. In this workshop, we discuss both theoretical and applied aspects of WSL.

This workshop continues our series of previous workshops at ACML 2019, SDM 2020, ACML 2020, IJCAI 2021, and ACML 2021. Our particular technical emphasis this year is on incomplete supervision, inexact supervision, inaccurate supervision, cross-domain supervision, imperfect demonstration, and weak adversarial supervision. The workshop will also focus on WSL for science and social good, such as WSL for COVID-19, healthcare, climate change, and remote sensing, as well as new public WSL datasets for these scientific scenarios.

Topics of Interest

The WSL workshop includes (but is not limited to) the topics described below.

Further Descriptions

The focus of this workshop is six types of weak supervision: incomplete supervision, inexact supervision, inaccurate supervision, cross-domain supervision, imperfect demonstration, and weak adversarial supervision, which are briefly introduced below.

  • Incomplete supervision considers the situation where a subset of the training data is given with ground-truth labels while the remaining data stay unlabeled, as in semi-supervised learning and positive-unlabeled learning.
  • Inexact supervision considers the situation where some supervision information is given but is not as exact as desired, i.e., only coarse-grained labels are available. For example, if we want to classify every pixel of an image rather than the image itself, then ImageNet becomes a benchmark with inexact supervision. Multi-instance learning also belongs to inexact supervision, since we do not know exactly which instance in a bag corresponds to the given bag-level label.
  • Inaccurate supervision considers the situation where the supervision information is not always the ground-truth, such as label-noise learning.
  • Cross-domain supervision considers the situation where the supervision information is scarce or even non-existent in the current domain but can be possibly derived from other domains. Examples of cross-domain supervision appear in zero-/one-/few-shot learning, where external knowledge from other domains is usually used to overcome the problem of too few or even no supervision in the original domain.
  • Imperfect demonstration considers the situation in inverse reinforcement learning and imitation learning where the agent learns from imperfect or non-expert demonstrations. For example, AlphaGo learns a policy from a sequence of states and actions (an expert demonstration); yet even if an expert player wins a game, it is not guaranteed that every action in the sequence is optimal.
  • Weak adversarial supervision considers the situation where weak supervision meets adversarial robustness. As machine learning models are increasingly deployed in real-world applications, their security attracts growing attention from both academia and industry, and many robust learning algorithms aim to prevent various attacks, e.g., adversarial attacks, privacy attacks, and model-stealing attacks. However, almost all such robust algorithms implicitly assume strong supervision signals (no noisy labels in the training data), an assumption that rarely holds in practice. It is therefore pressing to account for imperfect supervision signals when developing evasion-robust algorithms.
  • Meanwhile, this workshop will continue discussing broad applications of weakly supervised learning in computer science, such as weakly supervised object detection (computer vision), weakly supervised sequence modeling (natural language processing), and weakly supervised cross-media retrieval (information retrieval).

    More importantly, this workshop will focus on how society can benefit from WSL methodologies, i.e., WSL for science and social good. In many scientific scenarios we face (1) data shortage, e.g., medical records at the very beginning of the COVID-19 outbreak; (2) inaccurate observations, e.g., noisy labels of medical images; and (3) non-stationary data, e.g., meteorology records. How to learn from such imperfect information is therefore critical in science and deserves attention in the field of WSL. Studying these scenarios will also drive the development of real-world WSL datasets, encouraging researchers to propose algorithms that contribute to natural science.
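As a concrete illustration of incomplete supervision, the sketch below estimates the classification risk from positive and unlabeled data only, using the non-negative PU risk estimator with a known class prior (Kiryo et al., 2017). This is a minimal NumPy sketch, not code from any workshop paper; the function and variable names are our own, and the class prior is assumed to be known.

```python
import numpy as np

def sigmoid_loss(z):
    # surrogate loss l(z) = 1 / (1 + exp(z)); small when the margin z is large
    return 1.0 / (1.0 + np.exp(z))

def pu_risk(scores_p, scores_u, prior):
    """Non-negative PU risk estimate for a binary classifier.

    scores_p : real-valued classifier outputs on positive samples
    scores_u : classifier outputs on unlabeled samples
    prior    : assumed class prior p(y = +1), treated as known here
    """
    # risk of classifying the labeled positives as positive
    risk_p_pos = np.mean(sigmoid_loss(scores_p))
    # negative-class risk rewritten with unlabeled data:
    # R_n = E_u[l(-g(x))] - prior * E_p[l(-g(x))]
    risk_n = np.mean(sigmoid_loss(-scores_u)) - prior * np.mean(sigmoid_loss(-scores_p))
    # non-negative correction: clip R_n at zero to curb overfitting
    return prior * risk_p_pos + max(risk_n, 0.0)

# toy usage with synthetic scores
rng = np.random.default_rng(0)
scores_p = rng.normal(1.0, 1.0, 500)   # positives should score high
scores_u = rng.normal(0.0, 1.5, 2000)  # unlabeled: mixture of both classes
print(round(pu_risk(scores_p, scores_u, prior=0.4), 4))
```

The key trick is that the negative-class risk, which would normally require labeled negatives, is rewritten as an expectation over unlabeled data minus the positive contribution weighted by the class prior; clipping this term at zero keeps the estimate from going negative when a flexible model overfits.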

Organizers

Program Co-chairs

Feng Liu, The University of Melbourne, Australia.

Jingfeng Zhang, RIKEN, Japan.

Nan Lu, The University of Tokyo, Japan.

Lei Feng, Chongqing University, China.

Tongliang Liu, The University of Sydney, Australia.

Bo Han, Hong Kong Baptist University, Hong Kong SAR, China.

Gang Niu, RIKEN, Japan.

Masashi Sugiyama, RIKEN / The University of Tokyo, Japan.

Advisory Board (alphabetical order by last name)

Chen Gong, Nanjing University of Science and Technology, China.

Mingming Gong, The University of Melbourne, Australia.

Yu-Feng Li, Nanjing University, China.

Yang Liu, University of California, Santa Cruz, US.

Ivor W. Tsang, A*STAR Centre for Frontier AI Research (CFAR), Singapore.

Miao Xu, The University of Queensland, Australia.

Quanming Yao, Tsinghua University / 4Paradigm Inc., China.

Min-Ling Zhang, Southeast University, China.

Previous Workshops

ACML 2021 WSL Workshop, Online.

IJCAI 2021 WSRL Workshop, Online.

ACML 2020 WSRL Workshop, Online.

SDM 2020 WSUL Workshop, Ohio, United States.

ACML 2019 WSL Workshop, Nagoya, Japan.