ReS2TIM: Reconstruct Syntactic Structures from Table Images

Published in The 15th IAPR International Conference on Document Analysis and Recognition (ICDAR), 2019

Recommended citation: W. Xue, Q. Li and D. Tao, "ReS2TIM: Reconstruct Syntactic Structures from Table Images," The 15th IAPR International Conference on Document Analysis and Recognition (ICDAR), 2019. http://xuewenyuan.github.io/files/ICDAR2019-ReS2TIM-WenyuanXue.pdf

Tables often represent densely packed but structured data. Understanding table semantics is vital for effective information retrieval and data mining. Unlike web tables, whose semantics can be read directly from the markup and content, fully analyzing tables published as images requires converting discrete visual data into structured information. This paper presents a novel framework that converts a table image into a syntactic representation through the relationships between its cells. To reconstruct the syntactic structure of a table, we build a cell relationship network that predicts the neighbors of each cell in four directions. During training, a distance-based sample weight is proposed to handle the class imbalance problem. Based on the detected relationships, the table is represented as a weighted graph, which is then used to infer the basic syntactic table structure. Experimental evaluation of the proposed framework on two datasets demonstrates the effectiveness of our model for cell relationship detection and table structure inference.
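To make the graph-based inference step concrete, below is a minimal Python sketch of how pairwise four-direction neighbor predictions could be assembled into a grid of (row, column) indices. This is an illustrative assumption, not the paper's implementation: the `Cell` structure, the `infer_grid` function, and the BFS propagation are all hypothetical, and the actual method operates on a weighted graph rather than hard neighbor links.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Cell:
    """A detected table cell with neighbor links in four directions.

    Each neighbor field holds the id of the adjacent cell, or None at a
    table border. This hard-link structure is a simplifying assumption;
    the paper's relationship network outputs weighted edges.
    """
    cid: int
    left: int | None = None
    right: int | None = None
    up: int | None = None
    down: int | None = None


def infer_grid(cells: dict[int, Cell]) -> dict[int, tuple[int, int]]:
    """Assign (row, col) indices by propagating along neighbor edges.

    A hypothetical BFS-style inference: starting from an arbitrary cell
    at (0, 0), each right/down edge increments the column/row index and
    each left/up edge decrements it; indices are then shifted so the
    smallest row and column are 0.
    """
    steps = {"left": (0, -1), "right": (0, 1), "up": (-1, 0), "down": (1, 0)}
    start = next(iter(cells))
    coords = {start: (0, 0)}
    queue = deque([start])
    while queue:
        cid = queue.popleft()
        r, c = coords[cid]
        for direction, (dr, dc) in steps.items():
            nbr = getattr(cells[cid], direction)
            if nbr is not None and nbr not in coords:
                coords[nbr] = (r + dr, c + dc)
                queue.append(nbr)
    # Normalize so indices start at (0, 0).
    min_r = min(r for r, _ in coords.values())
    min_c = min(c for _, c in coords.values())
    return {cid: (r - min_r, c - min_c) for cid, (r, c) in coords.items()}


if __name__ == "__main__":
    # A 2x2 table: cells 0 and 1 in the first row, 2 and 3 in the second.
    cells = {
        0: Cell(0, right=1, down=2),
        1: Cell(1, left=0, down=3),
        2: Cell(2, right=3, up=0),
        3: Cell(3, left=2, up=1),
    }
    print(infer_grid(cells))  # {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}
```

Even this toy version shows why a graph view is convenient: a single traversal recovers a consistent grid from purely local neighbor relations, and spanning cells or noisy edges can then be handled by weighting and reconciling conflicting index assignments.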