Author | Shi Fangyuan
Editor | Chen Caixian
On the evening of October 12, 2022, the ACM Multimedia conference officially announced its awards, including the Best Paper Award. This year's ACM Multimedia is being held in Lisbon, Portugal, from October 10 to 14, 2022. The conference received 3,009 submissions, of which 13 were selected as high-scoring papers.
Five papers were shortlisted for the Best Paper Award. Among them, the team of Professor Nie Liqiang from Harbin Institute of Technology took home the highly anticipated prize.
Professor Nie Liqiang's award-winning paper, titled "Search-oriented Micro-video Captioning", was completed jointly by a team from Harbin Institute of Technology (Shenzhen), Shandong University, Kuaishou, Huawei, and the University of Florence.
Paper address: https://ift.tt/O9hFWgs
The award-winning paper is described as follows:
This paper studies how to automatically generate text descriptions for short videos that lack them. To automatically produce summary descriptions for the 38% of short videos that have no accompanying text, the researchers built a model that generates descriptive text from the perspective of users' search needs, so as to satisfy the diverse ways users search for videos.
Previous work focused on content-oriented video captioning, which generates relevant sentences from the creator's perspective to describe the visual content of a given video. This work is instead search-oriented: it generates keywords from the user's perspective to summarize a given video. Beyond relevance, diversity is also critical for capturing users' search intent from different perspectives.
To this end, the research team designed a large-scale multimodal pretraining network that enhances downstream video representations through five tasks, trained on 11 million micro-videos collected by the team. The team then proposed a flow-based diverse captioning model that generates different captions according to users' search needs; the model is optimized with a reconstruction loss plus the KL divergence between the prior and the posterior. The authors validated the model on a golden dataset of 690,000 <query, micro-video> pairs, and experimental results demonstrate its superiority.
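The paper's exact objective is not reproduced here, but the optimization described above, a reconstruction term plus a KL divergence between a prior and a posterior over a latent variable, is the standard conditional-VAE-style loss. A minimal sketch, assuming diagonal Gaussian prior and posterior (all function and parameter names here are illustrative, not from the paper):

```python
import math

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL(q || p) between two diagonal Gaussians,
    summed over latent dimensions. Each argument is a list of
    per-dimension means / log-variances."""
    kl = 0.0
    for mq, lq, mp, lp in zip(mu_q, logvar_q, mu_p, logvar_p):
        # Per-dimension KL between N(mq, exp(lq)) and N(mp, exp(lp)).
        kl += 0.5 * (lp - lq + (math.exp(lq) + (mq - mp) ** 2) / math.exp(lp) - 1.0)
    return kl

def caption_loss(recon_nll, mu_q, logvar_q, mu_p, logvar_p, beta=1.0):
    """Total objective: reconstruction negative log-likelihood of the
    generated caption plus a beta-weighted KL regularizer pulling the
    posterior toward the prior."""
    return recon_nll + beta * gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)
```

At inference time, sampling different latent codes from the prior is what yields diverse captions for the same video; the KL term keeps those samples consistent with what the posterior saw during training.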
It is understood that the “short video summary generation algorithm” developed by this work has been implemented in Kuaishou and has been running smoothly for half a year, processing about 30 million short videos every day.
Professor Nie Liqiang holds a bachelor's degree from Xi'an Jiaotong University and completed his Ph.D. and postdoctoral work at the National University of Singapore; he has twice been selected for national-level international talent programs. He is currently a second-level professor, doctoral supervisor, and executive dean of the School of Computer Science at Harbin Institute of Technology (Shenzhen). He serves on the editorial boards of journals including IEEE TKDE and ACM ToMM, and was an area chair of ACM MM 2018-2022. He won the ACM China Rising Star Award in 2019 and the DAMO Academy Green Orange Award in 2020, and in 2020 was selected for MIT Technology Review's Innovators Under 35 China list.
The ACM International Conference on Multimedia (ACM MM), established in 1993, is the world's leading event in the field of multimedia. It aims to showcase scientific research achievements and innovative industrial products in multimedia, and it is the only Class A international academic conference in the field recommended by the China Computer Federation.
Reference links: 1. https://2022.acmmm.org/ 2. https://ift.tt/GosHy9x
Leifeng.com
This article is reprinted from: https://www.leiphone.com/category/academic/sUQBg4VNuvmj4sFx.html