Definitions of Different Multi-X Learning Paradigms

0. Introduction

We discuss multi-X learning in two main categories: multi-output learning and multi-input learning. Their relationships can be pictured as an imperfect Venn diagram whose dotted boundaries indicate that special cases may cross from one multi-X learning paradigm into another.

1. Multi-output Level

Multi-class classification[1]

The label space consists of multiple (three or more) classes, but each sample in the dataset is assigned to exactly one of them. If we use a one-hot encoding to represent the classes, every label vector in the dataset contains exactly one 1, with all remaining entries set to 0.
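
As a minimal sketch of this constraint (using NumPy; the class indices are hypothetical), the snippet below builds a one-hot matrix and checks that each row contains exactly one 1:

```python
import numpy as np

labels = np.array([0, 2, 1, 2])   # hypothetical class indices, K = 3 classes
K = 3

one_hot = np.eye(K, dtype=int)[labels]
print(one_hot)
# [[1 0 0]
#  [0 0 1]
#  [0 1 0]
#  [0 0 1]]

# Multi-class constraint: every row sums to exactly 1.
assert (one_hot.sum(axis=1) == 1).all()
```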

Multi-label classification[2][3][4]

It can be understood as a generalization of the multi-class classification problem: in a multi-label problem, more than one class may be assigned to a sample. Thus, if a label is represented as a binary vector, there is no constraint on the number of 1s or 0s among its entries.
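
The sketch below illustrates this relaxed constraint with scikit-learn's MultiLabelBinarizer; the label names are made up for illustration:

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical label sets: any number of labels per sample, including none.
samples = [{"cat", "outdoor"}, {"dog"}, set(), {"cat", "dog", "outdoor"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(samples)
print(mlb.classes_)  # ['cat' 'dog' 'outdoor']
print(Y)
# [[1 0 1]
#  [0 1 0]
#  [0 0 0]
#  [1 1 1]]
```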

Multi-target Regression[5]

Multi-target regression aims to simultaneously learn multiple regression responses for each instance, where each response has its own distribution but all responses share the same instance set.
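
A minimal sketch with synthetic data: scikit-learn's LinearRegression fits a multi-column response matrix directly, one regression response per column. The shapes and noise level here are assumptions for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # shared instance set
W = rng.normal(size=(5, 3))              # 3 regression targets
Y = X @ W + 0.1 * rng.normal(size=(100, 3))

model = LinearRegression().fit(X, Y)     # one response fitted per column of Y
print(model.predict(X[:2]).shape)        # (2, 3): one prediction per target
```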

Multi-task Learning[6][7]

MTL aims to learn m related tasks simultaneously, improving the performance of a model by sharing information among the different tasks. These tasks are typically represented by datasets from related but different sources.
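
One common realization is hard parameter sharing, sketched below in plain NumPy: two synthetic regression tasks share a representation layer whose gradient accumulates contributions from both task losses. The shapes, learning rate, and squared-loss choice are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X1, y1 = rng.normal(size=(50, 8)), rng.normal(size=50)   # task 1 data
X2, y2 = rng.normal(size=(40, 8)), rng.normal(size=40)   # task 2 data

W_shared = rng.normal(size=(8, 4)) * 0.1   # representation shared by both tasks
w1 = rng.normal(size=4) * 0.1              # task-1 head
w2 = rng.normal(size=4) * 0.1              # task-2 head
lr = 0.01

for _ in range(100):
    # forward pass through the shared layer, then per-task heads
    H1, H2 = X1 @ W_shared, X2 @ W_shared
    e1, e2 = H1 @ w1 - y1, H2 @ w2 - y2
    # each head is updated only by its own task's squared loss ...
    w1 -= lr * H1.T @ e1 / len(y1)
    w2 -= lr * H2.T @ e2 / len(y2)
    # ... while the shared layer receives gradients from both tasks
    W_shared -= lr * (X1.T @ np.outer(e1, w1) / len(y1)
                      + X2.T @ np.outer(e2, w2) / len(y2))
```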

2. Multi-input Level

Multi-instance Learning[8]

In contrast to multi-label learning, where each instance carries multiple labels, multi-instance learning attaches a single label to a group of instances. It is a special supervised learning paradigm that targets bags of instances: a bag is labeled positive if at least one instance in it is positive, and negative if all of its instances are negative. However, no labels are attached to the individual instances themselves.
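
A minimal sketch of this bag-labeling rule with synthetic data, plus a deliberately crude baseline that represents each bag by the mean of its instances; the feature dimension and the rule generating the latent instance labels are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
bags, bag_labels = [], []
for _ in range(60):
    instances = rng.normal(size=(rng.integers(3, 8), 5))  # variable bag size
    hidden = (instances[:, 0] > 1.0)       # latent per-instance labels
    bags.append(instances)
    bag_labels.append(int(hidden.any()))   # bag positive iff any instance is

# The learner only ever sees bag-level labels, never `hidden`.
X = np.stack([bag.mean(axis=0) for bag in bags])  # mean-of-instances baseline
clf = LogisticRegression().fit(X, bag_labels)
```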

Multi-view Learning[9]

Multi-view learning uses different sets of features for single-source data in a single-task setting, such as color features and texture features extracted from the same images; the views may also be obtained from different sources, such as the URLs and the words contained in a web page. Multi-modality learning is a special case of multi-view learning in which the different feature sets come from data in different modalities.
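
A minimal sketch with two synthetic views of the same samples, aligned by canonical correlation analysis via scikit-learn; the "color" and "texture" names are purely illustrative:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))        # shared signal behind both views
view_color = latent @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(200, 6))
view_texture = latent @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(200, 4))

cca = CCA(n_components=2)
Z1, Z2 = cca.fit_transform(view_color, view_texture)  # aligned projections
print(np.corrcoef(Z1[:, 0], Z2[:, 0])[0, 1])          # close to 1 by design
```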

3. Transfer Learning[10]

Transfer learning learns representations on a target domain with the help of a source domain, which alleviates the dependence on labeled data in the target domain and can match the results of learning from a large-scale annotated dataset.
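
A minimal sketch of feature-based transfer with synthetic domains: a representation is fit on a plentiful source domain and reused on a small labeled target domain. PCA stands in here for whatever representation learner one would actually use; the domain shift and dataset sizes are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_source = rng.normal(size=(1000, 20))        # plentiful unlabeled source data
X_target = rng.normal(size=(30, 20)) + 0.5    # scarce, shifted target data
y_target = (X_target[:, 0] > 0.5).astype(int)

rep = PCA(n_components=5).fit(X_source)       # representation learned on source
clf = LogisticRegression().fit(rep.transform(X_target), y_target)
```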

4. Comparison and Connection[7]

Multi-label Learning & Multi-target Regression[5]

If all targets in a multi-target regression problem are binary variables (e.g., predicted with logistic regression), the problem can be understood as a special case of multi-label learning.
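
A minimal sketch of this correspondence with hypothetical per-target logistic outputs: thresholding the targets recovers a multi-label prediction:

```python
import numpy as np

probs = np.array([[0.9, 0.2, 0.7],     # per-target logistic outputs
                  [0.1, 0.8, 0.3]])
multi_label = (probs >= 0.5).astype(int)
print(multi_label)
# [[1 0 1]
#  [0 1 0]]
```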

Multi-label Learning & Multi-task Learning

Multi-label learning exploits a dataset in which multiple labels are associated with each data point, whereas multi-task learning typically targets several datasets from related but different sources. If each of the multiple labels is treated as a separate task while a shared model predicts all labels, a multi-task learning model can also solve the multi-label problem, as sketched below.
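
As a sketch of the label-per-task reading, scikit-learn's MultiOutputClassifier fits one classifier per label column on synthetic data. Note that this baseline fits the per-label models independently; a genuine multi-task approach would additionally share parameters across them, as in the earlier hard-sharing sketch:

```python
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = (X @ rng.normal(size=(5, 3)) > 0).astype(int)   # 3 binary labels

clf = MultiOutputClassifier(LogisticRegression()).fit(X, Y)
print(clf.predict(X[:2]))                           # one prediction per label
```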

Multi-view Learning & Multi-task Learning

Multi-view learning is single-task learning with several different sets of features, whereas multi-task learning aims to learn different tasks together. In this sense the intersection between them is empty: one concerns the input side, the other the output side.

Transfer Learning & Multi-task Learning

Transfer learning aims to improve a target task by using a source task, whereas in multi-task learning all tasks are treated equally and mutually learn representations from each other. Moreover, in transfer learning the target task is learned after the source task, while in multi-task learning all tasks are shared and learned together. From my point of view, multi-task learning can be seen as mutual transfer learning, which makes transfer learning a special case of it.

Reference

[1] https://en.wikipedia.org/wiki/Multiclass_classification

[2] https://en.wikipedia.org/wiki/Multi-label_classification

[3] Zhang, Min-Ling, and Zhi-Hua Zhou. “A review on multi-label learning algorithms.” IEEE Transactions on Knowledge and Data Engineering 26.8 (2013): 1819-1837.

[4] Zhou, Zhi-Hua, and Min-Ling Zhang. “Multi-label Learning.” Encyclopedia of Machine Learning and Data Mining (2017): 875-881.

[5] Xu, Donna, Yaxin Shi, Ivor W. Tsang, Yew-Soon Ong, Chen Gong, and Xiaobo Shen. “Survey on multi-output learning.” IEEE Transactions on Neural Networks and Learning Systems 31.7 (2019): 2409-2429.

[6] Caruana, Rich. “Multitask learning.” Machine Learning 28.1 (1997): 41-75.

[7] Zhang, Yu, and Qiang Yang. “A survey on multi-task learning.” arXiv preprint arXiv:1707.08114 (2017).

[8] https://www.cs.cmu.edu/~juny/MILL/review.htm

[9] Xu, Chang, Dacheng Tao, and Chao Xu. “A survey on multi-view learning.” arXiv preprint arXiv:1304.5634 (2013).

[10] Zhuang, Fuzhen, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. “A comprehensive survey on transfer learning.” Proceedings of the IEEE 109.1 (2020): 43-76.