        From Multilingual to Multimodal Processing


Chenhui Chu received his B.S. in Software Engineering from Chongqing University in 2008, and his M.S. and Ph.D. in Informatics from Kyoto University in 2012 and 2015, respectively. He is currently a research assistant professor at Osaka University. His research won the MSRA Collaborative Research 2019 grant award, the 2018 AAMT Nagao award, and the CICLing 2014 best student paper award. He serves on the editorial boards of the Journal of Natural Language Processing and the Journal of Information Processing, and is a steering committee member of the Young Researcher Association for NLP Studies. His research interests center on natural language processing, particularly machine translation and language-and-vision understanding.


In this talk, I will introduce three of our recent works, spanning multilingual to multimodal processing. The first work shows how to exploit multilingualism for low-resource neural machine translation. The second work identifies visually grounded paraphrases from multimodal image and language data. The last work explores the use of knowledge for visual question answering in videos. Throughout the talk, I would like to discuss the research challenges and opportunities in multilingual and multimodal processing.