        From Multilingual to Multimodal Processing
Published: 2019-12-26

Lecture Topic

        From Multilingual to Multimodal Processing

Speaker and Biography

Chenhui Chu received his B.S. in Software Engineering from Chongqing University in 2008, and his M.S. and Ph.D. in Informatics from Kyoto University in 2012 and 2015, respectively. He is currently a research assistant professor at Osaka University. His research has won the MSRA Collaborative Research 2019 grant award, the 2018 AAMT Nagao award, and the CICLing 2014 best student paper award. He serves on the editorial boards of the Journal of Natural Language Processing and the Journal of Information Processing, and is a steering committee member of the Young Researcher Association for NLP Studies. His research interests center on natural language processing, particularly machine translation and language-and-vision understanding.

Abstract

In this talk, I will introduce three of our recent works, covering topics from multilingual to multimodal processing. The first work is about how to exploit multilingualism for low-resource neural machine translation. The second work identifies visually grounded paraphrases from image-and-language multimodal data. The last work explores the use of knowledge for visual question answering in videos. Throughout the talk, I would like to discuss the research challenges and opportunities in multilingual and multimodal processing.
