LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding
Paper: LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding
Link: https://arxiv.org/abs/2202.13669
Published: 2022.02
Venue: ACL
Authors: Jiapeng Wang, Lianwen Jin, and Kai Ding
Affiliations: South China University of Technology, Guangzhou, China / IntSig Information Co., Ltd, Shanghai, China / INTSIG-SCUT Joint Laboratory of Document Recognition and Understanding, China / Peng Cheng Laboratory, Shenzhen, China
Citations: 117
Code: https://github.com/jpWang/LiLT , https://huggingface.co/docs/transformers/main/model_doc/lilt

Abstract
Motivation: existing Structured Document Understanding (SDU) models are specialized for English → this paper contributes a multilingual SDU model.
The paper does not explicitly address the Document Layout Analysis (DLA) task; it limits its discussion to Semantic Entity Recognition (SER) and Relation Extraction (RE).
Paragraph-level SER appears to be equivalent to the DLA task.