Negative Sampling in Knowledge Representation Learning: A Mini-Review

Authors

Jing Qian¹,², Gangmin Li¹, Katie Atkinson² and Yong Yue¹
¹Xi'an Jiaotong-Liverpool University, China
²University of Liverpool, United Kingdom

Abstract

Knowledge representation learning (KRL) aims at encoding the components of a knowledge graph (KG) into a low-dimensional continuous space, an approach that has brought considerable success in applying deep learning to graph embedding. For space efficiency, most well-known KGs store only positive instances. Typical KRL techniques, especially translational distance-based models, are trained by discriminating between positive and negative samples, so negative sampling is unquestionably a non-trivial step in KG embedding. The quality of the generated negative samples directly influences the performance of the learned knowledge representations in downstream tasks such as link prediction and triple classification. This review summarizes current negative sampling methods in KRL and categorizes them into three groups: fixed distribution-based sampling, generative adversarial network (GAN)-based sampling, and cluster sampling. Based on this categorization, we discuss the most prevalent existing approaches and their characteristics.
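To make the negative sampling step concrete, the sketch below illustrates the simplest fixed distribution-based strategy, uniform sampling, which corrupts a positive (head, relation, tail) triple by replacing its head or tail with a uniformly drawn entity. This is a minimal illustrative example, not code from the reviewed paper; all names are assumptions, and the filtering of accidental positives is optional in practice.

```python
import random

def uniform_negative_sample(triple, entities, known_triples):
    """Corrupt a positive (head, relation, tail) triple by replacing
    its head or tail with a uniformly sampled entity. Resamples if the
    corrupted triple is itself a known positive (illustrative; many
    implementations skip this filtering step for speed)."""
    head, relation, tail = triple
    while True:
        candidate = random.choice(entities)
        # With probability 0.5 corrupt the head, otherwise the tail.
        if random.random() < 0.5:
            corrupted = (candidate, relation, tail)
        else:
            corrupted = (head, relation, candidate)
        if corrupted not in known_triples:
            return corrupted

# Example usage with a toy knowledge graph.
entities = ["Paris", "France", "Berlin", "Germany"]
positives = {("Paris", "capital_of", "France"),
             ("Berlin", "capital_of", "Germany")}
print(uniform_negative_sample(("Paris", "capital_of", "France"),
                              entities, positives))
```

GAN-based and cluster sampling methods replace the uniform draw above with a learned or structure-aware distribution, aiming to produce harder negatives than random corruption yields.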

Keywords

Knowledge Representation Learning, Negative Sampling, Generative Adversarial Nets.

Full Text: Volume 10, Number 15