Automatic Consistency Checking of Table and Text in Financial Documents

Authors

  • Syed Musharraf Ali, Hochschule Bonn-Rhein-Sieg & Fraunhofer IAIS
  • Tobias Deußer, University of Bonn & Fraunhofer IAIS
  • Sebastian Houben, Hochschule Bonn-Rhein-Sieg
  • Lars Hillebrand, University of Bonn & Fraunhofer IAIS
  • Tim Metzler, Hochschule Bonn-Rhein-Sieg
  • Rafet Sifa, Fraunhofer IAIS

DOI:

https://doi.org/10.7557/18.6816

Keywords:

text mining, natural language processing, deep learning

Abstract

A company's financial documents use tables alongside running text to report key performance indicators (KPIs), such as profit and loss, together with the financial quantities linked to them. The quantity linked to a KPI in a table may differ from the quantity reported for the same or similarly described KPI in the text. Auditors spend substantial time manually detecting such discrepancies, a process known as consistency checking. In contrast to existing work, this paper attempts to automate this task with transformer-based models. For consistency checking, it is essential that the embeddings of the table's KPIs encode both the semantic knowledge of the KPIs and the structural knowledge of the table. This paper therefore proposes a pipeline that uses a tabular model to obtain the table KPI embeddings. The pipeline takes table and text KPIs as input, generates their embeddings, and then checks whether the two KPIs are identical. The pipeline is evaluated on German-language financial documents, and a comparative analysis of the quality of the cell embeddings produced by three tabular models is presented. In this evaluation, the experiment that used English-translated text and table KPIs and the TABBIE model to generate the table KPI embeddings achieved an accuracy of 72.81% on the consistency checking task, outperforming the benchmark and the other tabular models.
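
To make the pipeline concrete, the following is a minimal sketch of the consistency check described above, written under stated assumptions: the two embedding helpers, the embedding dimension, and the decision threshold are hypothetical stand-ins (the paper obtains table-KPI embeddings from a pretrained tabular model such as TABBIE and compares them against text-KPI embeddings), so only the control flow of the pipeline is illustrated, not the authors' actual implementation.

    # Minimal sketch of the table-text KPI consistency check.
    # All embedding functions below are hypothetical stand-ins, stubbed with
    # deterministic random vectors so the sketch runs on its own.
    import numpy as np

    EMB_DIM = 768      # typical transformer hidden size (assumption, not from the paper)
    THRESHOLD = 0.5    # illustrative decision threshold (assumption, not from the paper)

    def embed_table_kpi(table: list[list[str]], row: int, col: int) -> np.ndarray:
        """Hypothetical stand-in for a tabular model (e.g. TABBIE) that embeds
        the KPI cell at (row, col) using the whole table as structural context."""
        seed = hash((tuple(map(tuple, table)), row, col)) % 2**32
        return np.random.default_rng(seed).standard_normal(EMB_DIM)

    def embed_text_kpi(sentence: str) -> np.ndarray:
        """Hypothetical stand-in for a transformer encoder that embeds the KPI
        mention found in the running text."""
        seed = hash(sentence) % 2**32
        return np.random.default_rng(seed).standard_normal(EMB_DIM)

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity between two 1-D embedding vectors.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def kpis_identical(table, row, col, sentence) -> bool:
        """Decide whether the table KPI and the text KPI describe the same quantity."""
        sim = cosine(embed_table_kpi(table, row, col), embed_text_kpi(sentence))
        return sim >= THRESHOLD

    # Toy usage: a German table KPI checked against an English-translated sentence.
    table = [["KPI", "2021"], ["Umsatzerlöse (revenue)", "1.234"]]
    print(kpis_identical(table, row=1, col=0,
                         sentence="Revenue amounted to EUR 1,234 thousand."))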

References

D. Biesner, R. Ramamurthy, R. Stenzel, M. Lübbering, L. Hillebrand, A. Ladi, M. Pielka, R. Loitz, C. Bauckhage, and R. Sifa. Anonymization of German financial documents using neural network-based language models with contextual word representations. Int. J. Data Sci. Anal., 2021. doi: 10.1007/s41060-021-00285-x.

Y. Cao, H. Li, P. Luo, and J. Yao. Towards automatic numerical cross-checking: Extracting formulas from text. In Proc. WWW, 2018. doi: 10.1145/3178876.3186166.

J. Chen, E. Jiménez-Ruiz, I. Horrocks, and C. Sutton. ColNet: Embedding the semantics of web tables for column type prediction. In Proc. AAAI, 2019. doi: 10.1609/aaai.v33i01.330129.

Z. Cheng, H. Dong, R. Jia, P. Wu, S. Han, F. Cheng, and D. Zhang. FORTAP: Using formulas for numerical-reasoning-aware table pre-training. In Proc. ACL, 2022. doi: 10.18653/v1/2022.acl-long.82.

K. Clark, M.-T. Luong, Q. V. Le, and C. D. Manning. ELECTRA: Pre-training text encoders as discriminators rather than generators. In Proc. ICLR, 2020. doi: 10.48550/arXiv.2003.10555.

X. Deng, H. Sun, A. Lees, Y. Wu, and C. Yu. TURL: Table understanding through representation learning. Proc. VLDB Endow., 2020. doi: 10.14778/3430915.3442430.

X. Deng, A. Hassan, C. Meek, O. Polozov, H. Sun, and M. Richardson. Structure-grounded pretraining for text-to-SQL. In Proc. NAACL-HLT, 2021. doi: 10.18653/v1/2021.naacl-main.105.

T. Deußer, S. M. Ali, L. Hillebrand, D. Nurchalifah, B. Jacob, C. Bauckhage, and R. Sifa. KPI-EDGAR: A novel dataset and accompanying metric for relation extraction from financial documents. In Proc. ICMLA, 2022. doi: 10.48550/arXiv.2210.09163.

J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. NAACL-HLT, 2019. doi: 10.18653/v1/n19-1423.

H. Dong, Z. Cheng, X. He, M. Zhou, A. Zhou, F. Zhou, A. Liu, S. Han, and D. Zhang. Table pre-training: A survey on model architectures, pre-training objectives, and downstream tasks. CoRR, 2022. doi: 10.48550/arXiv.2201.09745.

L. Du, F. Gao, X. Chen, R. Jia, J. Wang, J. Zhang, S. Han, and D. Zhang. TabularNet: A neural network architecture for understanding semantic structures of tabular data. In Proc. KDD, 2021. doi: 10.1145/3447548.3467228.

G. D’Atri. Logic-based consistency checking of XBRL instances. IJACT, 2014.

J. Eisenschlos, M. Gor, T. Mueller, and W. Cohen. MATE: multi-view attention for table transformer efficiency. In Proc. EMNLP, 2021. doi: 10.18653/v1/2021.emnlp-main.600.

A. L. Gentile, P. Ristoski, S. Eckel, D. Ritze, and H. Paulheim. Entity matching on web tables: a table embeddings approach for blocking. In Proc. EDBT, 2017. doi: 10.5441/002/edbt.2017.57.

M. Ghasemi-Gol, J. Pujara, and P. Szekely. Learning cell embeddings for understanding table layouts. Knowl. Inf. Syst., 2021. doi: 10.1007/s10115-020-01508-6.

J. Herzig, P. K. Nowak, T. Mueller, F. Piccinno, and J. Eisenschlos. TaPas: Weakly supervised table parsing via pre-training. In Proc. ACL, 2020. doi: 10.18653/v1/2020.acl-main.398.

L. Hillebrand, T. Deußer, T. Dilmaghani, B. Kliem, R. Loitz, C. Bauckhage, and R. Sifa. Towards automating numerical consistency checks in financial reports. In Proc. BigData, 2022. doi: 10.48550/arXiv.2211.06112.

L. Hillebrand, T. Deußer, T. Dilmaghani, B. Kliem, R. Loitz, C. Bauckhage, and R. Sifa. KPI-BERT: A joint named entity recognition and relation extraction model for financial reports. In Proc. ICPR, 2022. doi: 10.1109/ICPR56361.2022.9956191.

H. Iida, D. Thai, V. Manjunatha, and M. Iyyer. TABBIE: pretrained representations of tabular data. In Proc. NAACL-HLT, 2021. doi: 10.18653/v1/2021.naacl-main.270.

Q. Liu, B. Chen, J. Guo, M. Ziyadi, Z. Lin, W. Chen, and J.-G. Lou. TAPEX: table pretraining via learning a neural SQL executor. In Proc. ICLR, 2022. doi: 10.48550/arXiv.2107.07653.

Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, 2019. doi: 10.48550/arXiv.1907.11692.

K. Nishida, K. Sadamitsu, R. Higashinaka, and Y. Matsuo. Understanding the semantic structures of tables with a hybrid deep neural network architecture. In Proc. AAAI, 2017. doi: 10.1609/aaai.v31i1.10484.

R. Ramamurthy, M. Pielka, R. Stenzel, C. Bauckhage, R. Sifa, T. D. Khameneh, U. Warning, B. Kliem, and R. Loitz. ALiBERT: improved automated list inspection (ALI) with BERT. In Proc. DocEng, 2021. doi: 10.1145/3469096.3474928.

R. Sifa, A. Ladi, M. Pielka, R. Ramamurthy, L. Hillebrand, B. Kirsch, D. Biesner, R. Stenzel, T. Bell, M. Lübbering, et al. Towards automated auditing with machine learning. In Proc. DocEng, 2019. doi: 10.1145/3342558.3345421.

K. Sun, H. Rayudu, and J. Pujara. A hybrid probabilistic approach for table understanding. In Proc. AAAI, 2021. doi: 10.1609/aaai.v35i5.16562.

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Proc. NIPS, 2017. doi: 10.48550/arXiv.1706.03762.

Z. Wang, H. Dong, R. Jia, J. Li, Z. Fu, S. Han, and D. Zhang. TUTA: tree-based transformers for generally structured table pre-training. In Proc. KDD, 2021. doi: 10.1145/3447548.3467434.

J. Yang, A. Gupta, S. Upadhyay, L. He, R. Goel, and S. Paul. TableFormer: robust transformer modeling for table-text encoding. In Proc. ACL, 2022. doi: 10.18653/v1/2022.acl-long.40.

P. Yin, G. Neubig, W.-t. Yih, and S. Riedel. TaBERT: pretraining for joint understanding of textual and tabular data. In Proc. ACL, 2020. doi: 10.18653/v1/2020.acl-main.745.

T. Yu, C.-S. Wu, X. V. Lin, Y. C. Tan, X. Yang, D. Radev, C. Xiong, et al. GraPPa: grammar-augmented pre-training for table semantic parsing. In Proc. ICLR, 2021. doi: 10.48550/arXiv.2009.13845.

L. Zhang, S. Zhang, and K. Balog. Table2Vec: neural word and entity embeddings for table population and retrieval. In Proc. SIGIR, 2019. doi: 10.1145/3331184.3331333.

F. Zhu, D. Ning, Y. Wang, and S. Liu. A novel cost-sensitive capsule network for audit fraud detection. In Proc. IUCC, 2021. doi: 10.1109/IUCC-CIT-DSCI-SmartCNS55181.2021.00091.

A. Zisman and A. Athanasopoulou. Consistency management of financial XML documents. In Proc. CAiSE, 2001. doi: 10.1007/3-540-45341-5_15.

Published

2023-01-23