diff --git a/docling/datamodel/document.py b/docling/datamodel/document.py
index 5904df14..fe0507b8 100644
--- a/docling/datamodel/document.py
+++ b/docling/datamodel/document.py
@@ -388,7 +388,7 @@ def render_as_doctags(
add_page_index=add_page_index,
# table specific flags
add_table_cell_location=add_table_cell_location,
- add_table_cell_label=add_table_cell_labe
+ add_table_cell_label=add_table_cell_label,
add_table_cell_text=add_table_cell_text
)
diff --git a/tests/data/2203.01017v2.doctags.txt b/tests/data/2203.01017v2.doctags.txt
new file mode 100644
index 00000000..db8f9674
--- /dev/null
+++ b/tests/data/2203.01017v2.doctags.txt
@@ -0,0 +1,351 @@
+
+TableFormer: Table Structure Understanding with Transformers.
+Ahmed Nassar, Nikolaos Livathinos, Maksym Lysak, Peter Staar IBM Research
+{ ahn,nli,mly,taa } @zurich.ibm.com
+Abstract
+a. Picture of a table:
+1. Introduction
+The occurrence of tables in documents is ubiquitous. They often summarise quantitative or factual data, which is cumbersome to describe in verbose text but nevertheless extremely valuable. Unfortunately, this compact representation is often not easy to parse by machines. There are many implicit conventions used to obtain a compact table representation. For example, tables often have complex column- and row-headers in order to reduce duplicated cell content. Lines of different shapes and sizes are leveraged to separate content or indicate a tree structure. Additionally, tables can also have empty/missing table-entries or multi-row textual table-entries. Fig. 1 shows a table which presents all these issues.
+Tables organize valuable content in a concise and compact representation. This content is extremely valuable for systems such as search engines, Knowledge Graphs, etc., since they enhance their predictive capabilities. Unfortunately, tables come in a large variety of shapes and sizes. Furthermore, they can have complex column/row-header configurations, multiline rows, different variety of separation lines, missing entries, etc. As such, the correct identification of the table-structure from an image is a nontrivial task. In this paper, we present a new table-structure identification model. The latter improves the latest end-to-end deep learning model (i.e. encoder-dual-decoder from PubTabNet) in two significant ways. First, we introduce a new object detection decoder for table-cells. In this way, we can obtain the content of the table-cells from programmatic PDFs directly from the PDF source and avoid the training of the custom OCR decoders. This architectural change leads to more accurate table-content extraction and allows us to tackle non-English tables. Second, we replace the LSTM decoders with transformer based decoders. This upgrade improves significantly the previous state-of-the-art tree-editing-distance-score (TEDS) from 91% to 98.5% on simple tables and from 88.7% to 95% on complex tables.
+
+
+
+
+b. Red-annotation of bounding boxes, Blue-predictions by TableFormer
+
+c. Structure predicted by TableFormer:
+
+Figure 1: Picture of a table with subtle, complex features such as (1) multi-column headers, (2) cell with multi-row text and (3) cells with no content. Image from PubTabNet evaluation set, filename: 'PMC2944238 004 02'.
+
+
+
+Recently, significant progress has been made with vision based approaches to extract tables in documents. For the sake of completeness, the issue of table extraction from documents is typically decomposed into two separate challenges, i.e. (1) finding the location of the table(s) on a document-page and (2) finding the structure of a given table in the document.
+The first problem is called table-location and has been previously addressed [30, 38, 19, 21, 23, 26, 8] with state-of-the-art object-detection networks (e.g. YOLO and later on Mask-RCNN [9]). For all practical purposes, it can be
+considered as a solved problem, given enough ground-truth data to train on.
+The second problem is called table-structure decomposition. The latter is a long standing problem in the community of document understanding [6, 4, 14]. Contrary to the table-location problem, there are no commonly used approaches that can easily be re-purposed to solve this problem. Lately, a set of new model-architectures has been proposed by the community to address table-structure decomposition [37, 36, 18, 20]. All these models have some weaknesses (see Sec. 2). The common denominator here is the reliance on textual features and/or the inability to provide the bounding box of each table-cell in the original image.
+In this paper, we want to address these weaknesses and present a robust table-structure decomposition algorithm. The design criteria for our model are the following. First, we want our algorithm to be language agnostic. In this way, we can obtain the structure of any table, regardless of the language. Second, we want our algorithm to leverage as much data as possible from the original PDF document. For programmatic PDF documents, the text-cells can often be extracted much faster and with higher accuracy compared to OCR methods. Last but not least, we want to have a direct link between the table-cell and its bounding box in the image.
+To meet the design criteria listed above, we developed a new model called TableFormer and a synthetically generated table structure dataset called SynthTabNet $^{1}$. In particular, our contributions in this work can be summarised as follows:
+· We propose TableFormer , a transformer based model that predicts tables structure and bounding boxes for the table content simultaneously in an end-to-end approach.
+· Across all benchmark datasets TableFormer significantly outperforms existing state-of-the-art metrics, while being much more efficient in training and inference compared to existing works.
+· We present SynthTabNet a synthetically generated dataset, with various appearance styles and complexity.
+· An augmented dataset based on PubTabNet [37], FinTabNet [36], and TableBank [17] with generated ground-truth for reproducibility.
+The paper is structured as follows. In Sec. 2, we give a brief overview of the current state-of-the-art. In Sec. 3, we describe the datasets on which we train. In Sec. 4, we introduce the TableFormer model-architecture and describe
+its results & performance in Sec. 5. As a conclusion, we describe how this new model-architecture can be re-purposed for other tasks in the computer-vision community.
+2. Previous work and State of the Art
+Identifying the structure of a table has been an outstanding problem in the document-parsing community, that motivates many organised public challenges [6, 4, 14]. The difficulty of the problem can be attributed to a number of factors. First, there is a large variety in the shapes and sizes of tables. Such large variety requires a flexible method. This is especially true for complex column- and row headers, which can be extremely intricate and demanding. A second factor of complexity is the lack of data with regard to table-structure. Until the publication of PubTabNet [37], there were no large datasets (i.e. > 100 K tables) that provided structure information. This happens primarily due to the fact that tables are notoriously time-consuming to annotate by hand. However, this has definitely changed in recent years with the deliverance of PubTabNet [37], FinTabNet [36], TableBank [17] etc.
+Before the rising popularity of deep neural networks, the community relied heavily on heuristic and/or statistical methods to do table structure identification [3, 7, 11, 5, 13, 28]. Although such methods work well on constrained tables [12], a more data-driven approach can be applied due to the advent of convolutional neural networks (CNNs) and the availability of large datasets. To the best of our knowledge, there are currently two different types of network architecture that are being pursued for state-of-the-art table-structure identification.
+Image-to-Text networks : In this type of network, one predicts a sequence of tokens starting from an encoded image. Such sequences of tokens can be HTML table tags [37, 17] or LaTeX symbols[10]. The choice of symbols is ultimately not very important, since one can be transformed into the other. There are however subtle variations in the Image-to-Text networks. The easiest network architectures are "image-encoder → text-decoder" (IETD), similar to network architectures that try to provide captions to images [32]. In these IETD networks, one expects as output the LaTeX/HTML string of the entire table, i.e. the symbols necessary for creating the table with the content of the table. Another approach is the "image-encoder → dual decoder" (IEDD) networks. In these type of networks, one has two consecutive decoders with different purposes. The first decoder is the tag-decoder , i.e. it only produces the HTML/LaTeX tags which construct an empty table. The second content-decoder uses the encoding of the image in combination with the output encoding of each cell-tag (from the tag-decoder ) to generate the textual content of each table cell. The network architecture of IEDD is certainly more elaborate, but it has the advantage that one can pre-train the
+tag-decoder which is constrained to the table-tags.
+In practice, both network architectures (IETD and IEDD) require an implicit, custom trained optical character recognition (OCR) to obtain the content of the table-cells. In the case of IETD, this OCR engine is implicit in the decoder similar to [24]. For the IEDD, the OCR is solely embedded in the content-decoder. This reliance on a custom, implicit OCR decoder is of course problematic. OCR is a well known and extremely tough problem, that often needs custom training for each individual language. However, the limited availability of non-English content in the current datasets, makes it impractical to apply the IETD and IEDD methods on tables with other languages. Additionally, OCR can be completely omitted if the tables originate from programmatic PDF documents with known positions of each cell. The latter was the inspiration for the work of this paper.
+Graph Neural networks : Graph Neural networks (GNN's) take a radically different approach to table-structure extraction. Note that one table cell can consist of multiple text-cells. To obtain the table-structure, one creates an initial graph, where each of the text-cells becomes a node in the graph similar to [33, 34, 2]. Each node is then associated with an embedding vector coming from the encoded image, its coordinates and the encoded text. Furthermore, nodes that represent adjacent text-cells are linked. Graph Convolutional Networks (GCN's) based methods take the image as an input, but also the position of the text-cells and their content [18]. The purpose of a GCN is to transform the input graph into a new graph, which replaces the old links with new ones. The new links then represent the table-structure. With this approach, one can avoid the need to build custom OCR decoders. However, the quality of the reconstructed structure is not comparable to the current state-of-the-art [18].
+Hybrid Deep Learning-Rule-Based approach : A popular current model for table-structure identification is the use of a hybrid Deep Learning-Rule-Based approach similar to [27, 29]. In this approach, one first detects the position of the table-cells with object detection (e.g. YoloVx or MaskRCNN), then classifies the table into different types (from its images) and finally uses different rule-sets to obtain its table-structure. Currently, this approach achieves state-of-the-art results, but is not an end-to-end deep-learning method. As such, new rules need to be written if different types of tables are encountered.
+3. Datasets
+We rely on large-scale datasets such as PubTabNet [37], FinTabNet [36], and TableBank [17] datasets to train and evaluate our models. These datasets span over various appearance styles and content. We also introduce our own synthetically generated SynthTabNet dataset to fix an im-
+
+Figure 2: Distribution of the tables across different table dimensions in PubTabNet + FinTabNet datasets
+
+balance in the previous datasets.
+The PubTabNet dataset contains 509k tables delivered as annotated PNG images. The annotations consist of the table structure represented in HTML format, the tokenized text and its bounding boxes per table cell. Fig. 1 shows the appearance style of PubTabNet. Depending on its complexity, a table is characterized as "simple" when it does not contain row spans or column spans, otherwise it is "complex". The dataset is divided into Train and Val splits (roughly 98% and 2%). The Train split consists of 54% simple and 46% complex tables and the Val split of 51% and 49% respectively. The FinTabNet dataset contains 112k tables delivered as single-page PDF documents with mixed table structures and text content. Similarly to the PubTabNet, the annotations of FinTabNet include the table structure in HTML, the tokenized text and the bounding boxes on a table cell basis. The dataset is divided into Train, Test and Val splits (81%, 9.5%, 9.5%), and each one is almost equally divided into simple and complex tables (Train: 48% simple, 52% complex, Test: 48% simple, 52% complex, Val: 53% simple, 47% complex). Finally the TableBank dataset consists of 145k tables provided as JPEG images. The latter has annotations for the table structure, but only a few with bounding boxes of the table cells. The entire dataset consists of simple tables and it is divided into 90% Train, 3% Test and 7% Val splits.
+Due to the heterogeneity across the dataset formats, it was necessary to combine all available data into one homogenized dataset before we could train our models for practical purposes. Given the size of PubTabNet, we adopted its annotation format and we extracted and converted all tables as PNG images with a resolution of 72 dpi. Additionally, we have filtered out tables with extreme sizes due to small
+amount of such tables, and kept only those ones ranging between 1*1 and 20*10 (rows/columns).
+The availability of the bounding boxes for all table cells is essential to train our models. In order to distinguish between empty and non-empty bounding boxes, we have introduced a binary class in the annotation. Unfortunately, the original datasets either omit the bounding boxes for whole tables (e.g. TableBank) or they narrow their scope only to non-empty cells. Therefore, it was imperative to introduce a data pre-processing procedure that generates the missing bounding boxes out of the annotation information. This procedure first parses the provided table structure and calculates the dimensions of the most fine-grained grid that covers the table structure. Notice that each table cell may occupy multiple grid squares due to row or column spans. In case of PubTabNet we had to compute missing bounding boxes for 48% of the simple and 69% of the complex tables. Regarding FinTabNet, 68% of the simple and 98% of the complex tables require the generation of bounding boxes.
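+A minimal sketch of the grid computation just described, assuming each cell annotation carries its row span and column span (this is an illustration of the idea, not the actual pre-processing code):
+def grid_dimensions(rows):
+    """rows: list of table rows, each row a list of (rowspan, colspan) tuples."""
+    occupied = {}                    # (row, col) -> True for squares taken by spans above
+    n_rows, n_cols = len(rows), 0
+    for r, row in enumerate(rows):
+        c = 0
+        for rowspan, colspan in row:
+            while occupied.get((r, c)):      # skip squares covered by a cell spanning down
+                c += 1
+            for dr in range(rowspan):        # mark every grid square of this cell
+                for dc in range(colspan):
+                    occupied[(r + dr, c + dc)] = True
+            c += colspan
+        n_cols = max(n_cols, c)
+        n_rows = max(n_rows, r + max((rs for rs, _ in row), default=1))
+    return n_rows, n_cols
+
+# e.g. a header cell spanning two columns over one ordinary row of two cells:
+# grid_dimensions([[(1, 2)], [(1, 1), (1, 1)]]) == (2, 2)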
+As it is illustrated in Fig. 2, the table distributions from all datasets are skewed towards simpler structures with fewer number of rows/columns. Additionally, there is very limited variance in the table styles, which in case of PubTabNet and FinTabNet means one styling format for the majority of the tables. Similar limitations appear also in the type of table content, which in some cases (e.g. FinTabNet) is restricted to a certain domain. Ultimately, the lack of diversity in the training dataset damages the ability of the models to generalize well on unseen data.
+Motivated by those observations we aimed at generating a synthetic table dataset named SynthTabNet . This approach offers control over: 1) the size of the dataset, 2) the table structure, 3) the table style and 4) the type of content. The complexity of the table structure is described by the size of the table header and the table body, as well as the percentage of the table cells covered by row spans and column spans. A set of carefully designed styling templates provides the basis to build a wide range of table appearances. Lastly, the table content is generated out of a curated collection of text corpora. By controlling the size and scope of the synthetic datasets we are able to train and evaluate our models in a variety of different conditions. For example, we can first generate a highly diverse dataset to train our models and then evaluate their performance on other synthetic datasets which are focused on a specific domain.
+In this regard, we have prepared four synthetic datasets, each one containing 150k examples. The corpora to generate the table text consists of the most frequent terms appearing in PubTabNet and FinTabNet together with randomly generated text. The first two synthetic datasets have been fine-tuned to mimic the appearance of the original datasets but encompass more complicated table structures. The third
+
+Table 1: Both "Combined-Tabnet" and "CombinedTabnet" are variations of the following: (*) The CombinedTabnet dataset is the processed combination of PubTabNet and Fintabnet. (**) The combined dataset is the processed combination of PubTabNet, Fintabnet and TableBank.
+
+
+
+one adopts a colorful appearance with high contrast and the last one contains tables with sparse content. Lastly, we have combined all synthetic datasets into one big unified synthetic dataset of 600k examples.
+Tab. 1 summarizes the various attributes of the datasets.
+4. The TableFormer model
+Given the image of a table, TableFormer is able to predict: 1) a sequence of tokens that represent the structure of a table, and 2) a bounding box coupled to a subset of those tokens. The conversion of an image into a sequence of tokens is a well-known task [35, 16]. While attention is often used as an implicit method to associate each token of the sequence with a position in the original image, an explicit association between the individual table-cells and the image bounding boxes is also required.
+4.1. Model architecture.
+We now describe in detail the proposed method, which is composed of three main components, see Fig. 4. Our CNN Backbone Network encodes the input as a feature vector of predefined length. The input feature vector of the encoded image is passed to the Structure Decoder to produce a sequence of HTML tags that represent the structure of the table. With each prediction of an HTML standard data cell (' < td > ') the hidden state of that cell is passed to the Cell BBox Decoder. As for spanning cells, such as row or column span, the tag is broken down to ' < ', 'rowspan=' or 'colspan=', with the number of spanning cells (attribute), and ' > '. The hidden state attached to ' < ' is passed to the Cell BBox Decoder. A shared feed forward network (FFN) receives the hidden states from the Structure Decoder, to provide the final detection predictions of the bounding box coordinates and their classification.
+CNN Backbone Network. A ResNet-18 CNN is the backbone that receives the table image and encodes it as a vector of predefined length. The network has been modified by removing the linear and pooling layer, as we are not per-
+
+Figure 3: TableFormer takes in an image of the PDF and creates bounding box and HTML structure predictions that are synchronized. The bounding boxes grab the content from the PDF and insert it in the structure.
+
+
+Figure 4: Given an input image of a table, the Encoder produces fixed-length features that represent the input image. The features are then passed to both the Structure Decoder and Cell BBox Decoder . During training, the Structure Decoder receives 'tokenized tags' of the HTML code that represent the table structure. Afterwards, a transformer encoder and decoder architecture is employed to produce features that are received by a linear layer, and the Cell BBox Decoder. The linear layer is applied to the features to predict the tags. Simultaneously, the Cell BBox Decoder selects features referring to the data cells (' < td > ', ' < ') and passes them through an attention network, an MLP, and a linear layer to predict the bounding boxes.
+
+forming classification, and adding an adaptive pooling layer of size 28*28. ResNet by default downsamples the image resolution by 32 and then the encoded image is provided to both the Structure Decoder , and Cell BBox Decoder .
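+The backbone modification described above can be sketched with torchvision's ResNet-18 as follows (a hedged illustration of the stated changes, not the released TableFormer code; the network is left untrained here):
+import torch
+import torch.nn as nn
+import torchvision
+
+resnet = torchvision.models.resnet18(weights=None)
+backbone = nn.Sequential(
+    *list(resnet.children())[:-2],       # keep the conv stages, drop avgpool and fc
+    nn.AdaptiveAvgPool2d((28, 28)),      # fixed 28*28 feature map, 512 channels
+)
+
+features = backbone(torch.randn(1, 3, 448, 448))   # -> shape (1, 512, 28, 28)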
+Structure Decoder. The transformer architecture of this component is based on the work proposed in [31]. After extensive experimentation, the Structure Decoder is modeled as a transformer encoder with two encoder layers and a transformer decoder made from a stack of 4 decoder layers that comprise mainly of multi-head attention and feed forward layers. This configuration uses fewer layers and heads in comparison to networks applied to other problems (e.g. "Scene Understanding", "Image Captioning"), something which we relate to the simplicity of table images.
+The transformer encoder receives an encoded image from the CNN Backbone Network and refines it through a multi-head dot-product attention layer, followed by a Feed Forward Network. During training, the transformer decoder receives as input the output feature produced by the transformer encoder, and the tokenized input of the HTML ground-truth tags. Using a stack of multi-head attention layers, different aspects of the tag sequence could be inferred. This is achieved by each attention head on a layer operating in a different subspace, and then combining altogether their attention score.
+Cell BBox Decoder. Our architecture allows us to simultaneously predict HTML tags and bounding boxes for each table cell without the need of a separate object detector end to end. This approach is inspired by DETR [1] which employs a Transformer Encoder, and Decoder that looks for a specific number of object queries (potential object detections). As our model utilizes a transformer architecture, the hidden states of the ' < td > ' and ' < ' HTML structure tags become the object queries.
+The encoding generated by the CNN Backbone Network along with the features acquired for every data cell from the Transformer Decoder are then passed to the attention network. The attention network takes both inputs and learns to provide an attention weighted encoding. This weighted at-
+tention encoding is then multiplied to the encoded image to produce a feature for each table cell. Notice that this is different than the typical object detection problem where imbalances between the number of detections and the amount of objects may exist. In our case, we know up front that the produced detections always match with the table cells in number and correspondence.
+The output features for each table cell are then fed into the feed-forward network (FFN). The FFN consists of a Multi-Layer Perceptron (3 layers with ReLU activation function) that predicts the normalized coordinates for the bounding box of each table cell. Finally, the predicted bounding boxes are classified based on whether they are empty or not using a linear layer.
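+The bounding box and empty/non-empty heads described in this paragraph can be sketched as follows (the hidden size of 512 is taken from Sec. 5.1; the module name CellBBoxHead and the sigmoid on the coordinates are illustrative assumptions, not the authors' code):
+import torch
+import torch.nn as nn
+
+class CellBBoxHead(nn.Module):
+    def __init__(self, hidden_dim: int = 512):
+        super().__init__()
+        self.bbox_mlp = nn.Sequential(            # 3-layer MLP with ReLU activations
+            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
+            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
+            nn.Linear(hidden_dim, 4),              # normalized cell coordinates
+        )
+        self.empty_cls = nn.Linear(hidden_dim, 2)  # empty vs. non-empty cell
+
+    def forward(self, cell_features):              # (num_cells, hidden_dim)
+        boxes = self.bbox_mlp(cell_features).sigmoid()
+        classes = self.empty_cls(cell_features)
+        return boxes, classes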
+Loss Functions. We formulate a multi-task loss Eq. 2 to train our network. The Cross-Entropy loss (denoted as l$_{s}$ ) is used to train the Structure Decoder which predicts the structure tokens. As for the Cell BBox Decoder it is trained with a combination of losses denoted as l$_{box}$ . l$_{box}$ consists of the generally used l$_{1}$ loss for object detection and the IoU loss ( l$_{iou}$ ) to be scale invariant as explained in [25]. In comparison to DETR, we do not use the Hungarian algorithm [15] to match the predicted bounding boxes with the ground-truth boxes, as we have already achieved a one-toone match through two steps: 1) Our token input sequence is naturally ordered, therefore the hidden states of the table data cells are also in order when they are provided as input to the Cell BBox Decoder , and 2) Our bounding boxes generation mechanism (see Sec. 3) ensures a one-to-one mapping between the cell content and its bounding box for all post-processed datasets.
+The loss used to train the TableFormer can be defined as following:
+where λ ∈ [0, 1], and λ$_{iou}$, λ$_{l1}$ ∈ R are hyper-parameters.
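+The multi-task combination implied by the text can be sketched as below, assuming l = λ l$_{s}$ + (1 - λ) l$_{box}$ with l$_{box}$ = λ$_{iou}$ l$_{iou}$ + λ$_{l1}$ l$_{1}$; the default weights, the plain (non-generalized) IoU and the function name are placeholders rather than the authors' released values:
+import torch
+import torch.nn.functional as F
+from torchvision.ops import box_iou
+
+def tableformer_loss(tag_logits, tag_targets, pred_boxes, gt_boxes,
+                     lam=0.5, lam_iou=2.0, lam_l1=5.0):
+    # structure loss: cross-entropy over the predicted HTML tag sequence
+    l_s = F.cross_entropy(tag_logits.flatten(0, 1), tag_targets.flatten())
+    # box losses: predictions are already ordered one-to-one with the ground truth,
+    # so no Hungarian matching is needed (boxes here in xyxy format)
+    l_1 = F.l1_loss(pred_boxes, gt_boxes)
+    l_iou = (1.0 - box_iou(pred_boxes, gt_boxes).diag()).mean()
+    l_box = lam_iou * l_iou + lam_l1 * l_1
+    return lam * l_s + (1.0 - lam) * l_box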
+5. Experimental Results
+5.1. Implementation Details
+TableFormer uses ResNet-18 as the CNN Backbone Network . The input images are resized to 448*448 pixels and the feature map has a dimension of 28*28. Additionally, we enforce the following input constraints:
+Although input constraints are used also by other methods, such as EDD, ours are less restrictive due to the improved
+runtime performance and lower memory footprint of TableFormer. This allows to utilize input samples with longer sequences and images with larger dimensions.
+The Transformer Encoder consists of two "Transformer Encoder Layers", with an input feature size of 512, feed forward network of 1024, and 4 attention heads. As for the Transformer Decoder it is composed of four "Transformer Decoder Layers" with similar input and output dimensions as the "Transformer Encoder Layers". Even though our model uses fewer layers and heads than the default implementation parameters, our extensive experimentation has proved this setup to be more suitable for table images. We attribute this finding to the inherent design of table images, which contain mostly lines and text, unlike the more elaborate content present in other scopes (e.g. the COCO dataset). Moreover, we have added ResNet blocks to the inputs of the Structure Decoder and Cell BBox Decoder. This prevents a decoder having a stronger influence over the learned weights which would damage the other prediction task (structure vs bounding boxes), but learn task specific weights instead. Lastly our dropout layers are set to 0.5.
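+The stated configuration maps directly onto the stock PyTorch transformer layers; the sketch below reproduces only the quoted hyper-parameters (positional encodings, tag embeddings and the extra ResNet blocks mentioned above are omitted):
+import torch.nn as nn
+
+d_model, n_heads, d_ff, dropout = 512, 4, 1024, 0.5
+
+structure_encoder = nn.TransformerEncoder(
+    nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=d_ff, dropout=dropout),
+    num_layers=2,
+)
+structure_decoder = nn.TransformerDecoder(
+    nn.TransformerDecoderLayer(d_model, n_heads, dim_feedforward=d_ff, dropout=dropout),
+    num_layers=4,
+)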
+TableFormer is trained with 3 Adam optimizers, one each for the CNN Backbone Network , Structure Decoder , and Cell BBox Decoder . Taking the PubTabNet as an example for our parameter set up, the initial learning rate is 0.001 for 12 epochs with a batch size of 24, and λ set to 0.5. Afterwards, we reduce the learning rate to 0.0001, the batch size to 18 and train for 12 more epochs or until convergence.
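+A sketch of this optimizer set-up, with trivial stand-in modules in place of the real components (the real modules and any schedulers actually used are not shown in the text):
+import torch
+import torch.nn as nn
+
+# stand-ins for the real CNN Backbone Network, Structure Decoder and Cell BBox Decoder
+backbone, structure_decoder, bbox_decoder = nn.Linear(8, 8), nn.Linear(8, 8), nn.Linear(8, 8)
+
+optimizers = [torch.optim.Adam(m.parameters(), lr=1e-3)
+              for m in (backbone, structure_decoder, bbox_decoder)]
+
+def adjust_learning_rate(epoch):
+    # after the first 12 epochs, continue training with lr = 1e-4
+    if epoch >= 12:
+        for opt in optimizers:
+            for group in opt.param_groups:
+                group["lr"] = 1e-4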
+TableFormer is implemented with PyTorch and Torchvision libraries [22]. To speed up the inference, the image undergoes a single forward pass through the CNN Backbone Network and transformer encoder. This eliminates the overhead of generating the same features for each decoding step. Similarly, we employ a 'caching' technique to perform faster autoregressive decoding. This is achieved by storing the features of decoded tokens so we can reuse them for each time step. Therefore, we only compute the attention for each new tag.
+5.2. Generalization
+TableFormer is evaluated on three major publicly available datasets of different nature to prove the generalization and effectiveness of our model. The datasets used for evaluation are the PubTabNet, FinTabNet and TableBank which stem from the scientific, financial and general domains respectively.
+We also share our baseline results on the challenging SynthTabNet dataset. Throughout our experiments, the same parameters stated in Sec. 5.1 are utilized.
+5.3. Datasets and Metrics
+The Tree-Edit-Distance-Based Similarity (TEDS) metric was introduced in [37]. It represents the prediction and the ground-truth as a tree structure of HTML tags. This similarity is calculated as:
+TEDS(T$_{a}$, T$_{b}$) = 1 - EditDist(T$_{a}$, T$_{b}$) / max(|T$_{a}$|, |T$_{b}$|)
+where T$_{a}$ and T$_{b}$ represent tables in tree structure HTML format. EditDist denotes the tree-edit distance, and | T | represents the number of nodes in T .
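+A hedged sketch of the TEDS computation, assuming the third-party zss package for the tree-edit distance; the original metric of [37] additionally weighs cell-text differences, which is omitted here:
+from zss import Node, simple_distance
+
+def tree_size(node):
+    return 1 + sum(tree_size(child) for child in node.children)
+
+def teds(tree_a, tree_b):
+    dist = simple_distance(tree_a, tree_b)          # tree-edit distance, unit costs
+    return 1.0 - dist / max(tree_size(tree_a), tree_size(tree_b))
+
+# tiny example: a 1x2 table vs. a 1x1 table, encoded as trees of HTML tags
+a = Node("table", [Node("tr", [Node("td"), Node("td")])])
+b = Node("table", [Node("tr", [Node("td")])])
+print(teds(a, b))   # 1 - 1/4 = 0.75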
+5.4. Quantitative Analysis
+Structure. As shown in Tab. 2, TableFormer outperforms all SOTA methods across different datasets by a large margin for predicting the table structure from an image. All the more, our model outperforms pre-trained methods. During the evaluation we do not apply any table filtering. We also provide our baseline results on the SynthTabNet dataset. It has been observed that large tables (e.g. tables that occupy half of the page or more) yield poor predictions. We attribute this issue to the image resizing during the preprocessing step, that produces downsampled images with indistinguishable features. This problem can be addressed by treating such big tables with a separate model which accepts a large input image size.
+
+Table 2: Structure results on PubTabNet (PTN), FinTabNet (FTN), TableBank (TB) and SynthTabNet (STN).
+
+
+
+FT: Model was trained on PubTabNet then finetuned.
+Cell Detection. Like any object detector, our Cell BBox Detector provides bounding boxes that can be improved with post-processing during inference. We make use of the grid-like structure of tables to refine the predictions. A detailed explanation on the post-processing is available in the supplementary material. As shown in Tab. 3, we evaluate
+our Cell BBox Decoder accuracy for cells with a class label of 'content' only using the PASCAL VOC mAP metric for pre-processing and post-processing. Note that we do not have post-processing results for SynthTabNet as images are only provided. To compare the performance of our proposed approach, we've integrated TableFormer's Cell BBox Decoder into EDD architecture. As mentioned previously, the Structure Decoder provides the Cell BBox Decoder with the features needed to predict the bounding box predictions. Therefore, the accuracy of the Structure Decoder directly influences the accuracy of the Cell BBox Decoder . If the Structure Decoder predicts an extra column, this will result in an extra column of predicted bounding boxes.
+
+Table 3: Cell Bounding Box detection results on PubTabNet, and FinTabNet. PP: Post-processing.
+
+
+
+Cell Content. In this section, we evaluate the entire pipeline of recovering a table with content. Here we put our approach to the test by capitalizing on extracting content from the PDF cells rather than decoding from images. Tab. 4 shows the TEDS score of HTML code representing the structure of the table along with the content inserted in the data cell and compared with the ground-truth. Our method achieved a 5.3% increase over the state-of-the-art and commercial solutions. We believe our scores would be higher if the HTML ground-truth matched the extracted PDF cell content. Unfortunately, there are small discrepancies such as spacings around words or special characters with various unicode representations.
+
+Table 4: Results of structure with content retrieved using cell detection on PubTabNet. In all cases the input is PDF documents with cropped tables.
+
+
+
+a. Red - PDF cells, Green - predicted bounding boxes, Blue - post-processed predictions matched to PDF cells
+Japanese language (previously unseen by TableFormer):
+Example table from FinTabNet:
+
+
+b. Structure predicted by TableFormer, with superimposed matched PDF cell text:
+
+Text is aligned to match original for ease of viewing
+
+
+
+ | Shares (in millions) | Shares (in millions) | Weighted Average Grant Date Fair Value | Weighted Average Grant Date Fair Value
+ | RSUs | PSUs | RSUs | PSUs
+Nonvested on January 1 | 1.1 | 0.3 | $ 90.10 | $ 91.19
+Granted | 0.5 | 0.1 | 117.44 | 122.41
+Vested | (0.5) | (0.1) | 87.08 | 81.14
+Canceled or forfeited | (0.1) | - | 102.01 | 92.18
+Nonvested on December 31 | 1.0 | 0.3 | $ 104.85 | $ 104.51
+
+
+Figure 5: One of the benefits of TableFormer is that it is language agnostic; as an example, the left part of the illustration demonstrates TableFormer predictions on a previously unseen language (Japanese). Additionally, we see that TableFormer is robust to variability in style and content; the right side of the illustration shows an example of a TableFormer prediction from the FinTabNet dataset.
+
+
+
+Figure 6: An example of TableFormer predictions (bounding boxes and structure) from generated SynthTabNet table.
+
+5.5. Qualitative Analysis
+We showcase several visualizations for the different components of our network on various "complex" tables within datasets presented in this work in Fig. 5 and Fig. 6. As shown, our model is able to predict bounding boxes for all table cells, even for the empty ones. Additionally, our post-processing techniques can extract the cell content by matching the predicted bounding boxes to the PDF cells based on their overlap and spatial proximity. The left part of Fig. 5 also demonstrates the adaptability of our method to any language, as it can successfully extract Japanese text, although the training set contains only English content. We provide more visualizations including the intermediate steps in the supplementary material. Overall these illustrations justify the versatility of our method across a diverse range of table appearances and content types.
+6. Future Work & Conclusion
+In this paper, we presented TableFormer, an end-to-end transformer-based approach to predict table structures and bounding boxes of cells from an image. This approach enables us to recreate the table structure, and extract the cell content from PDF or OCR by using bounding boxes. Additionally, it provides the versatility required in real-world scenarios when dealing with various types of PDF documents, and languages. Furthermore, our method outperforms all state-of-the-art methods by a wide margin. Finally, we introduce "SynthTabNet", a challenging synthetically generated dataset that reinforces missing characteristics from other datasets.
+References
+[1] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-
+end object detection with transformers. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision - ECCV 2020 , pages 213-229, Cham, 2020. Springer International Publishing. 5
+[2] Zewen Chi, Heyan Huang, Heng-Da Xu, Houjin Yu, Wanxuan Yin, and Xian-Ling Mao. Complicated table structure recognition. arXiv preprint arXiv:1908.04729 , 2019. 3
+[3] Bertrand Couasnon and Aurelie Lemaitre. Recognition of Tables and Forms , pages 647-677. Springer London, London, 2014. 2
+[4] Hervé Déjean, Jean-Luc Meunier, Liangcai Gao, Yilun Huang, Yu Fang, Florian Kleber, and Eva-Maria Lang. ICDAR 2019 Competition on Table Detection and Recognition (cTDaR), Apr. 2019. http://sac.founderit.com/. 2
+[5] Basilios Gatos, Dimitrios Danatsas, Ioannis Pratikakis, and Stavros J Perantonis. Automatic table detection in document images. In International Conference on Pattern Recognition and Image Analysis , pages 609-618. Springer, 2005. 2
+[6] Max Gobel, Tamir Hassan, Ermelinda Oro, and Giorgio Orsi. Icdar 2013 table competition. In 2013 12th International Conference on Document Analysis and Recognition , pages 1449-1453, 2013. 2
+[7] EA Green and M Krishnamoorthy. Recognition of tables using table grammars. procs. In Symposium on Document Analysis and Recognition (SDAIR'95) , pages 261-277. 2
+[8] Khurram Azeem Hashmi, Alain Pagani, Marcus Liwicki, Didier Stricker, and Muhammad Zeshan Afzal. Castabdetectors: Cascade network for table detection in document images with recursive feature pyramid and switchable atrous convolution. Journal of Imaging , 7(10), 2021. 1
+[9] Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) , Oct 2017. 1
+[10] Yelin He, X. Qi, Jiaquan Ye, Peng Gao, Yihao Chen, Bingcong Li, Xin Tang, and Rong Xiao. Pingan-vcgroup's solution for icdar 2021 competition on scientific table image recognition to latex. ArXiv , abs/2105.01846, 2021. 2
+[11] Jianying Hu, Ramanujan S Kashi, Daniel P Lopresti, and Gordon Wilfong. Medium-independent table detection. In Document Recognition and Retrieval VII , volume 3967, pages 291-302. International Society for Optics and Photonics, 1999. 2
+[12] Matthew Hurst. A constraint-based approach to table structure derivation. In Proceedings of the Seventh International Conference on Document Analysis and Recognition - Volume 2 , ICDAR '03, page 911, USA, 2003. IEEE Computer Society. 2
+[13] Thotreingam Kasar, Philippine Barlas, Sebastien Adam, Clément Chatelain, and Thierry Paquet. Learning to detect tables in scanned document images using line information. In 2013 12th International Conference on Document Analysis and Recognition , pages 1185-1189. IEEE, 2013. 2
+[14] Pratik Kayal, Mrinal Anand, Harsh Desai, and Mayank Singh. Icdar 2021 competition on scientific table image recognition to latex, 2021. 2
+[15] Harold W Kuhn. The hungarian method for the assignment problem. Naval research logistics quarterly , 2(1-2):83-97, 1955. 6
+[16] Girish Kulkarni, Visruth Premraj, Vicente Ordonez, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C. Berg, and Tamara L. Berg. Babytalk: Understanding and generating simple image descriptions. IEEE Transactions on Pattern Analysis and Machine Intelligence , 35(12):2891-2903, 2013. 4
+[17] Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, and Zhoujun Li. Tablebank: A benchmark dataset for table detection and recognition, 2019. 2, 3
+[18] Yiren Li, Zheng Huang, Junchi Yan, Yi Zhou, Fan Ye, and Xianhui Liu. Gfte: Graph-based financial table extraction. In Alberto Del Bimbo, Rita Cucchiara, Stan Sclaroff, Giovanni Maria Farinella, Tao Mei, Marco Bertini, Hugo Jair Escalante, and Roberto Vezzani, editors, Pattern Recognition. ICPR International Workshops and Challenges , pages 644-658, Cham, 2021. Springer International Publishing. 2, 3
+[19] Nikolaos Livathinos, Cesar Berrospi, Maksym Lysak, Viktor Kuropiatnyk, Ahmed Nassar, Andre Carvalho, Michele Dolfi, Christoph Auer, Kasper Dinkla, and Peter Staar. Robust pdf document conversion using recurrent neural networks. Proceedings of the AAAI Conference on Artificial Intelligence , 35(17):15137-15145, May 2021. 1
+[20] Rujiao Long, Wen Wang, Nan Xue, Feiyu Gao, Zhibo Yang, Yongpan Wang, and Gui-Song Xia. Parsing table structures in the wild. In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages 944-952, 2021. 2
+[21] Shubham Singh Paliwal, D Vishwanath, Rohit Rahul, Monika Sharma, and Lovekesh Vig. Tablenet: Deep learning model for end-to-end table detection and tabular data extraction from scanned document images. In 2019 International Conference on Document Analysis and Recognition (ICDAR) , pages 128-133. IEEE, 2019. 1
+[22] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32 , pages 8024-8035. Curran Associates, Inc., 2019. 6
+[23] Devashish Prasad, Ayan Gadpal, Kshitij Kapadni, Manish Visave, and Kavita Sultanpure. Cascadetabnet: An approach for end to end table detection and structure recognition from image-based documents. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops , pages 572-573, 2020. 1
+[24] Shah Rukh Qasim, Hassan Mahmood, and Faisal Shafait. Rethinking table recognition using graph neural networks. In 2019 International Conference on Document Analysis and Recognition (ICDAR) , pages 142-147. IEEE, 2019. 3
+[25] Hamid Rezatofighi, Nathan Tsoi, JunYoung Gwak, Amir Sadeghian, Ian Reid, and Silvio Savarese. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF Conference on
+Computer Vision and Pattern Recognition , pages 658-666, 2019. 6
+[26] Sebastian Schreiber, Stefan Agne, Ivo Wolf, Andreas Dengel, and Sheraz Ahmed. Deepdesrt: Deep learning for detection and structure recognition of tables in document images. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR) , volume 01, pages 1162-1167, 2017. 1
+[27] Sebastian Schreiber, Stefan Agne, Ivo Wolf, Andreas Dengel, and Sheraz Ahmed. Deepdesrt: Deep learning for detection and structure recognition of tables in document images. In 2017 14th IAPR international conference on document analysis and recognition (ICDAR) , volume 1, pages 1162-1167. IEEE, 2017. 3
+[28] Faisal Shafait and Ray Smith. Table detection in heterogeneous documents. In Proceedings of the 9th IAPR International Workshop on Document Analysis Systems , pages 65-72, 2010. 2
+[29] Shoaib Ahmed Siddiqui, Imran Ali Fateh, Syed Tahseen Raza Rizvi, Andreas Dengel, and Sheraz Ahmed. Deeptabstr: Deep learning based table structure recognition. In 2019 International Conference on Document Analysis and Recognition (ICDAR) , pages 1403-1409. IEEE, 2019. 3
+[30] Peter W J Staar, Michele Dolfi, Christoph Auer, and Costas Bekas. Corpus conversion service: A machine learning platform to ingest documents at scale. In Proceedings of the 24th ACM SIGKDD , KDD '18, pages 774-782, New York, NY, USA, 2018. ACM. 1
+[31] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30 , pages 5998-6008. Curran Associates, Inc., 2017. 5
+[32] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , June 2015. 2
+[33] Wenyuan Xue, Qingyong Li, and Dacheng Tao. Res2tim: reconstruct syntactic structures from table images. In 2019 International Conference on Document Analysis and Recognition (ICDAR) , pages 749-755. IEEE, 2019. 3
+[34] Wenyuan Xue, Baosheng Yu, Wen Wang, Dacheng Tao, and Qingyong Li. Tgrnet: A table graph reconstruction network for table structure recognition. arXiv preprint arXiv:2106.10598 , 2021. 3
+[35] Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. Image captioning with semantic attention. In Proceedings of the IEEE conference on computer vision and pattern recognition , pages 4651-4659, 2016. 4
+[36] Xinyi Zheng, Doug Burdick, Lucian Popa, Peter Zhong, and Nancy Xin Ru Wang. Global table extractor (gte): A framework for joint table identification and cell structure recognition using visual context. Winter Conference for Applications in Computer Vision (WACV) , 2021. 2, 3
+[37] Xu Zhong, Elaheh ShafieiBavani, and Antonio Jimeno Yepes. Image-based table recognition: Data, model,
+and evaluation. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision ECCV 2020 , pages 564-580, Cham, 2020. Springer International Publishing. 2, 3, 7
+[38] Xu Zhong, Jianbin Tang, and Antonio Jimeno Yepes. Publaynet: Largest dataset ever for document layout analysis. In 2019 International Conference on Document Analysis and Recognition (ICDAR) , pages 1015-1022, 2019. 1
+TableFormer: Table Structure Understanding with Transformers
+Supplementary Material
+1. Details on the datasets
+1.1. Data preparation
+As a first step of our data preparation process, we have calculated statistics over the datasets across the following dimensions: (1) table size measured in the number of rows and columns, (2) complexity of the table, (3) strictness of the provided HTML structure and (4) completeness (i.e. no omitted bounding boxes). A table is considered to be simple if it does not contain row spans or column spans. Additionally, a table has a strict HTML structure if every row has the same number of columns after taking into account any row or column spans. Therefore a strict HTML structure looks always rectangular. However, HTML is a lenient encoding format, i.e. tables with rows of different sizes might still be regarded as correct due to implicit display rules. These implicit rules leave room for ambiguity, which we want to avoid. As such, we prefer to have "strict" tables, i.e. tables where every row has exactly the same length.
+We have developed a technique that tries to derive a missing bounding box out of its neighbors. As a first step, we use the annotation data to generate the most fine-grained grid that covers the table structure. In case of strict HTML tables, all grid squares are associated with some table cell and in the presence of table spans a cell extends across multiple grid squares. When enough bounding boxes are known for a rectangular table, it is possible to compute the geometrical border lines between the grid rows and columns. Eventually this information is used to generate the missing bounding boxes. Additionally, the existence of unused grid squares indicates that the table rows have unequal number of columns and the overall structure is non-strict. The generation of missing bounding boxes for non-strict HTML tables is ambiguous and therefore quite challenging. Thus, we have decided to simply discard those tables. In case of PubTabNet we have computed missing bounding boxes for 48% of the simple and 69% of the complex tables. Regarding FinTabNet, 68% of the simple and 98% of the complex tables require the generation of bounding boxes.
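+A simplified sketch of this reconstruction, assuming a strict table, boxes given as (x0, y0, x1, y1) and at least one known box in every grid row and column touched by the missing cell:
+from statistics import median
+
+def reconstruct_bbox(known, cell_rows, cell_cols):
+    """known: dict {(row, col): (x0, y0, x1, y1)} for cells with annotated boxes.
+    cell_rows / cell_cols: the grid rows and columns spanned by the missing cell."""
+    col_x0 = {}; col_x1 = {}; row_y0 = {}; row_y1 = {}
+    # known boxes vote for the border lines of their grid rows and columns
+    for (r, c), (x0, y0, x1, y1) in known.items():
+        col_x0.setdefault(c, []).append(x0); col_x1.setdefault(c, []).append(x1)
+        row_y0.setdefault(r, []).append(y0); row_y1.setdefault(r, []).append(y1)
+    # the missing box spans from the borders of its first to its last grid square
+    return (median(col_x0[min(cell_cols)]), median(row_y0[min(cell_rows)]),
+            median(col_x1[max(cell_cols)]), median(row_y1[max(cell_rows)]))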
+Figure 7 illustrates the distribution of the tables across different dimensions per dataset.
+1.2. Synthetic datasets
+Aiming to train and evaluate our models in a broader spectrum of table data we have synthesized four types of datasets. Each one contains tables with different appear-
+ances in regard to their size, structure, style and content. Every synthetic dataset contains 150k examples, summing up to 600k synthetic examples. All datasets are divided into Train, Test and Val splits (80%, 10%, 10%).
+The process of generating a synthetic dataset can be decomposed into the following steps (a toy sketch of steps 2-4 is given after the list):
+1. Prepare styling and content templates: The styling templates have been manually designed and organized into groups of scope specific appearances (e.g. financial data, marketing data, etc.) Additionally, we have prepared curated collections of content templates by extracting the most frequently used terms out of non-synthetic datasets (e.g. PubTabNet, FinTabNet, etc.).
+2. Generate table structures: The structure of each synthetic dataset assumes a horizontal table header which potentially spans over multiple rows and a table body that may contain a combination of row spans and column spans. However, spans are not allowed to cross the header - body boundary. The table structure is described by the parameters: Total number of table rows and columns, number of header rows, type of spans (header only spans, row only spans, column only spans, both row and column spans), maximum span size and the ratio of the table area covered by spans.
+3. Generate content: Based on the dataset theme , a set of suitable content templates is chosen first. Then, this content can be combined with purely random text to produce the synthetic content.
+4. Apply styling templates: Depending on the domain of the synthetic dataset, a set of styling templates is first manually selected. Then, a style is randomly selected to format the appearance of the synthesized table.
+5. Render the complete tables: The synthetic table is finally rendered by a web browser engine to generate the bounding boxes for each table cell. A batching technique is utilized to optimize the runtime overhead of the rendering process.
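+A toy sketch of steps 2-4, not the generator used to build SynthTabNet: it emits a small HTML table with an optional header column span and template-based content, which a browser engine could then render to obtain cell bounding boxes (step 5).
+import random
+
+def synth_table_html(n_rows=4, n_cols=3, terms=("alpha", "beta", "42", "3.14")):
+    rows = []
+    # header row, possibly with one column span (spans never cross into the body)
+    span = random.randint(1, 2)
+    header = [f'<td colspan="{span}">{random.choice(terms)}</td>']
+    header += [f"<td>{random.choice(terms)}</td>" for _ in range(n_cols - span)]
+    rows.append("<tr>" + "".join(header) + "</tr>")
+    # body rows with plain cells filled from the content template
+    for _ in range(n_rows - 1):
+        cells = "".join(f"<td>{random.choice(terms)}</td>" for _ in range(n_cols))
+        rows.append("<tr>" + cells + "</tr>")
+    return "<table>" + "".join(rows) + "</table>"
+
+print(synth_table_html())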
+2. Prediction post-processing for PDF documents
+Although TableFormer can predict the table structure and the bounding boxes for tables recognized inside PDF documents, this is not enough when a full reconstruction of the original table is required. This happens mainly due to the following reasons:
+
+Figure 7: Distribution of the tables across different dimensions per dataset. Simple vs complex tables per dataset and split, strict vs non strict html structures per dataset and table complexity, missing bboxes per dataset and table complexity.
+
+· TableFormer output does not include the table cell content.
+· There are occasional inaccuracies in the predictions of the bounding boxes.
+However, it is possible to mitigate those limitations by combining the TableFormer predictions with the information already present inside a programmatic PDF document. More specifically, PDF documents can be seen as a sequence of PDF cells where each cell is described by its content and bounding box. If we are able to associate the PDF cells with the predicted table cells, we can directly link the PDF cell content to the table cell structure and use the PDF bounding boxes to correct misalignments in the predicted table cell bounding boxes.
+Here is a step-by-step description of the prediction post-processing (a minimal sketch of the core matching steps is given after step 9f below):
+1. Get the minimal grid dimensions - number of rows and columns for the predicted table structure. This represents the most granular grid for the underlying table structure.
+2. Generate pair-wise matches between the bounding boxes of the PDF cells and the predicted cells. The Intersection Over Union (IOU) metric is used to evaluate the quality of the matches.
+3. Use a carefully selected IOU threshold to designate the matches as "good" ones and "bad" ones.
+3.a. If all IOU scores in a column are below the threshold, discard all predictions (structure and bounding boxes) for that column.
+4. Find the best-fitting content alignment for the predicted cells with good IOU per each column. The alignment of the column can be identified by the following formula:
+where c is one of { left, centroid, right } and x$_{c}$ is the x-coordinate for the corresponding point.
+5. Use the alignment computed in step 4, to compute the median x -coordinate for all table columns and the me-
+dian cell size for all table cells. The usage of median during the computations, helps to eliminate outliers caused by occasional column spans which are usually wider than the normal.
+6. Snap all cells with bad IOU to their corresponding median x -coordinates and cell sizes.
+7. Generate a new set of pair-wise matches between the corrected bounding boxes and PDF cells. This time use a modified version of the IOU metric, where the area of the intersection between the predicted and PDF cells is divided by the PDF cell area. In case there are multiple matches for the same PDF cell, the prediction with the higher score is preferred. This covers the cases where the PDF cells are smaller than the area of predicted or corrected prediction cells.
+8. In some rare occasions, we have noticed that TableFormer can confuse a single column as two. When the post-processing steps are applied, this results in two predicted columns pointing to the same PDF column. In such a case we must de-duplicate the columns according to the highest total column intersection score.
+9. Pick up the remaining orphan cells. There could be cases, when after applying all the previous post-processing steps, some PDF cells could still remain without any match to predicted cells. However, it is still possible to deduce the correct matching for an orphan PDF cell by mapping its bounding box on the geometry of the grid. This mapping decides if the content of the orphan cell will be appended to an already matched table cell, or a new table cell should be created to match with the orphan.
+9a. Compute the top and bottom boundary of the horizontal band for each grid row (min/max y coordinates per row).
+9b. Intersect the orphan's bounding box with the row bands, and map the cell to the closest grid row.
+9c. Compute the left and right boundary of the vertical band for each grid column (min/max x coordinates per column).
+9d. Intersect the orphan's bounding box with the column bands, and map the cell to the closest grid column.
+9e. If the table cell under the identified row and column is not empty, extend its content with the content of the or-
+phan cell.
+9f. Otherwise create a new structural cell and match it with the orphan cell.
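+The sketch below illustrates the matching used in steps 2, 3 and 7 above (an illustration only, not docling's implementation; the 0.5 threshold is a placeholder for the "carefully selected" value mentioned in step 3):
+def area(box):
+    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])
+
+def intersection(a, b):
+    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
+    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
+    return (x1 - x0) * (y1 - y0) if x1 > x0 and y1 > y0 else 0.0
+
+def iou(a, b):
+    inter = intersection(a, b)
+    union = area(a) + area(b) - inter
+    return inter / union if union > 0 else 0.0
+
+def match_cells(pdf_cells, pred_cells, threshold=0.5, second_pass=False):
+    """pdf_cells / pred_cells: lists of (x0, y0, x1, y1) boxes.
+    Returns {pdf_index: (pred_index, score)} for the "good" matches."""
+    matches = {}
+    for i, pdf in enumerate(pdf_cells):
+        best, best_score = None, 0.0
+        for j, pred in enumerate(pred_cells):
+            if second_pass:
+                # step 7: intersection area divided by the PDF cell area
+                score = intersection(pred, pdf) / max(area(pdf), 1e-9)
+            else:
+                # step 2: plain Intersection Over Union
+                score = iou(pdf, pred)
+            if score > best_score:
+                best, best_score = j, score
+        # steps 3 and 7: keep only the best-scoring, sufficiently good match
+        if best is not None and best_score >= threshold:
+            matches[i] = (best, best_score)
+    return matches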
+Additional images with examples of TableFormer predictions and post-processing can be found below.
+Figure 8: Example of a table with multi-line header.
+
+Figure 9: Example of a table with big empty distance between cells.
+
+
+Figure 10: Example of a complex table with empty cells.
+
+
+
+Figure 11: Simple table with different style and empty cells.
+
+
+Figure 12: Simple table predictions and post-processing.
+
+
+Figure 13: Table predictions example on colorful table.
+
+Figure 14: Example with multi-line text.
+
+
+Figure 16: Example of how post-processing helps to restore mis-aligned bounding boxes prediction artifact.
+
+
+
+Figure 15: Example with triangular table.
+
+
+Figure 17: Example of long table. End-to-end example from initial PDF cells to prediction of bounding boxes, post-processing and prediction of structure.
+Figure 1: Four examples of complex page layouts across different document categories
+
+KEYWORDS
+PDF document conversion, layout segmentation, object-detection, data set, Machine Learning
+ACM Reference Format:
+Birgit Pfitzmann, Christoph Auer, Michele Dolfi, Ahmed S. Nassar, and Peter Staar. 2022. DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22), August 14-18, 2022, Washington, DC, USA. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3534678.3539043
+1 INTRODUCTION
+Despite the substantial improvements achieved with machine-learning (ML) approaches and deep neural networks in recent years, document conversion remains a challenging problem, as demonstrated by the numerous public competitions held on this topic [1-4]. The challenge originates from the huge variability in PDF documents regarding layout, language and formats (scanned, programmatic or a combination of both). Engineering a single ML model that can be applied on all types of documents and provides high-quality layout segmentation remains to this day extremely challenging [5]. To highlight the variability in document layouts, we show a few example documents from the DocLayNet dataset in Figure 1.
+A key problem in the process of document conversion is to understand the structure of a single document page, i.e. which segments of text should be grouped together in a unit. To train models for this task, there are currently two large datasets available to the community, PubLayNet [6] and DocBank [7]. They were introduced in 2019 and 2020 respectively and significantly accelerated the implementation of layout detection and segmentation models due to their sizes of 300K and 500K ground-truth pages. These sizes were achieved by leveraging an automation approach. The benefit of automated ground-truth generation is obvious: one can generate large ground-truth datasets at virtually no cost. However, the automation introduces a constraint on the variability in the dataset, because corresponding structured source data must be available. PubLayNet and DocBank were both generated from scientific document repositories (PubMed and arXiv), which provide XML or LaTeX sources. Those scientific documents present a limited variability in their layouts, because they are typeset in uniform templates provided by the publishers. Obviously, documents such as technical manuals, annual company reports, legal text, government tenders, etc. have very different and partially unique layouts. As a consequence, the layout predictions obtained from models trained on PubLayNet or DocBank are very reasonable when applied on scientific documents. However, for more artistic or free-style layouts, we see sub-par prediction quality from these models, which we demonstrate in Section 5.
+In this paper, we present the DocLayNet dataset. It provides page-by-page layout annotation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique document pages, of which a fraction carry double- or triple-annotations. DocLayNet is similar in spirit to PubLayNet and DocBank and will likewise be made available to the public$^{1}$ in order to stimulate the document-layout analysis community. It distinguishes itself in the following aspects:
+(1) Human Annotation : In contrast to PubLayNet and DocBank, we relied on human annotation instead of automation approaches to generate the data set.
+(2) Large Layout Variability : We include diverse and complex layouts from a large variety of public sources.
+(3) Detailed Label Set : We define 11 class labels to distinguish layout features in high detail. PubLayNet provides 5 labels; DocBank provides 13, although not a superset of ours.
+(4) Redundant Annotations : A fraction of the pages in the DocLayNet data set carry more than one human annotation.
+This enables experimentation with annotation uncertainty and quality control analysis.
+(5) Pre-defined Train-, Test- & Validation-set : Like DocBank, we provide fixed train-, test- & validation-sets to ensure proportional representation of the class-labels. Further, we prevent leakage of unique layouts across sets, which has a large effect on model accuracy scores.
+All aspects outlined above are detailed in Section 3. In Section 4, we will elaborate on how we designed and executed this large-scale human annotation campaign. We will also share key insights and lessons learned that might prove helpful for other parties planning to set up annotation campaigns.
+In Section 5, we will present baseline accuracy numbers for a variety of object detection methods (Faster R-CNN, Mask R-CNN and YOLOv5) trained on DocLayNet. We further show how the model performance is impacted by varying the DocLayNet dataset size, reducing the label set and modifying the train/test-split. Last but not least, we compare the performance of models trained on PubLayNet, DocBank and DocLayNet and demonstrate that a model trained on DocLayNet provides overall more robust layout recovery.
+2 RELATED WORK
+While early approaches in document-layout analysis used rule-based algorithms and heuristics [8], the problem has lately been addressed with deep learning methods. The most common approach is to leverage object detection models [9-15]. In the last decade, the accuracy and speed of these models have increased dramatically. Furthermore, most state-of-the-art object detection methods can be trained and applied with very little work, thanks to a standardisation effort of the ground-truth data format [16] and common deep-learning frameworks [17]. Reference data sets such as PubLayNet [6] and DocBank provide their data in the commonly accepted COCO format [16].
+Lately, new types of ML models for document-layout analysis have emerged in the community [18-21]. These models do not approach the problem of layout analysis purely based on an image representation of the page, as computer vision methods do. Instead, they combine the text tokens and image representation of a page in order to obtain a segmentation. While the reported accuracies appear to be promising, a broadly accepted data format which links geometric and textual features has yet to establish itself.
+3 THE DOCLAYNET DATASET
+DocLayNet contains 80863 PDF pages. Among these, 7059 carry two instances of human annotations, and 1591 carry three. This amounts to 91104 total annotation instances. The annotations provide layout information in the shape of labeled, rectangular bounding-boxes. We define 11 distinct labels for layout features, namely Caption , Footnote , Formula , List-item , Page-footer , Page-header , Picture , Section-header , Table , Text , and Title . Our reasoning for picking this particular label set is detailed in Section 4.
+In addition to open intellectual property constraints for the source documents, we required that the documents in DocLayNet adhere to a few conditions. Firstly, we kept scanned documents
+
Figure 2: Distribution of DocLayNet pages across document categories.
+
+to a minimum, since they introduce difficulties in annotation (see Section 4). As a second condition, we focussed on medium to large documents ( > 10 pages) with technical content, dense in complex tables, figures, plots and captions. Such documents carry a lot of information value, but are often hard to analyse with high accuracy due to their challenging layouts. Counterexamples of documents not included in the dataset are receipts, invoices, hand-written documents or photographs showing "text in the wild".
+The pages in DocLayNet can be grouped into six distinct categories, namely Financial Reports , Manuals , Scientific Articles , Laws & Regulations , Patents and Government Tenders . Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports 2 which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories ( Financial Reports and Manuals ) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes.
+We did not control the document selection with regard to language. The vast majority of documents contained in DocLayNet (close to 95%) are published in English language. However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features.
+To ensure that future benchmarks in the document-layout analysis community can be easily compared, we have split up DocLayNet into pre-defined train-, test- and validation-sets. In this way, we can avoid spurious variations in the evaluation scores due to random splitting in train-, test- and validation-sets. We also ensured that less frequent labels are represented in train and test sets in equal proportions.
+Table 1 shows the overall frequency and distribution of the labels among the different sets. Importantly, we ensure that subsets are only split on full-document boundaries. This prevents pages of the same document from being spread over the train, test and validation sets, which can give an undesired evaluation advantage to models and lead to overestimation of their prediction accuracy. We will show the impact of this decision in Section 5.
+In order to accommodate the different types of models currently in use by the community, we provide DocLayNet in an augmented COCO format [16]. This entails the standard COCO ground-truth file (in JSON format) with the associated page images (in PNG format, 1025 × 1025 pixels). Furthermore, custom fields have been added to each COCO record to specify document category, original document filename and page number. In addition, we also provide the original PDF pages, as well as sidecar files containing parsed PDF text and text-cell coordinates (in JSON). All additional files are linked to the primary page images by their matching filenames.
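The augmented COCO layout described above can be read with standard JSON tooling. The following is only a minimal sketch; the custom field names ("doc_category", "original_filename", "page_no"), the sidecar key and the directory layout are illustrative assumptions, not the dataset's documented schema.

```python
import json
from pathlib import Path

# Sketch of reading the augmented COCO ground-truth and its sidecar files.
coco = json.loads(Path("COCO/train.json").read_text())

image_rec = coco["images"][0]
print(image_rec["file_name"])                      # page image (PNG, 1025 x 1025 pixels)
print(image_rec.get("doc_category"))               # assumed custom field: document category
print(image_rec.get("original_filename"), image_rec.get("page_no"))  # assumed custom fields

# Sidecar files share the page image's filename stem, so parsed PDF text and
# text-cell coordinates can be joined to the image by matching filenames.
cells_path = Path("JSON") / (Path(image_rec["file_name"]).stem + ".json")
text_cells = json.loads(cells_path.read_text())
```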
+Despite being cost-intense and far less scalable than automation, human annotation has several benefits over automated ground-truth generation. The first and most obvious reason to leverage human annotations is the freedom to annotate any type of document without requiring a programmatic source. For most PDF documents, the original source document is not available. The latter is not a hard constraint with human annotation, but it is for automated methods. A second reason to use human annotations is that the latter usually provide a more natural interpretation of the page layout. The human-interpreted layout can significantly deviate from the programmatic layout used in typesetting. For example, "invisible" tables might be used solely for aligning text paragraphs on columns. Such typesetting tricks might be interpreted by automated methods incorrectly as an actual table, while the human annotation will interpret it correctly as Text or other styles. The same applies to multi-line text elements, when authors decided to space them as "invisible" list elements without bullet symbols. A third reason to gather ground-truth through human annotation is to estimate a "natural" upper bound on the segmentation accuracy. As we will show in Section 4, certain documents featuring complex layouts can have different but equally acceptable layout interpretations. This natural upper bound for segmentation accuracy can be found by annotating the same pages multiple times by different people and evaluating the inter-annotator agreement. Such a baseline consistency evaluation is very useful to define expectations for a good target accuracy in trained deep neural network models and avoid overfitting (see Table 1). On the flip side, achieving high annotation consistency proved to be a key challenge in human annotation, as we outline in Section 4.
+4 ANNOTATION CAMPAIGN
+The annotation campaign was carried out in four phases. In phase one, we identified and prepared the data sources for annotation. In phase two, we determined the class labels and how annotations should be done on the documents in order to obtain maximum consistency. The latter was guided by a detailed requirement analysis and exhaustive experiments. In phase three, we trained the annotation staff and performed exams for quality assurance. In phase four,
+
Table 1: DocLayNet dataset overview. Along with the frequency of each class label, we present the relative occurrence (as % of row "Total") in the train, test and validation sets. The inter-annotator agreement is computed as the mAP@0.5-0.95 metric between pairwise annotations from the triple-annotated pages, from which we obtain accuracy ranges.
+
+
+
Figure 3: Corpus Conversion Service annotation user interface. The PDF page is shown in the background, with overlaid text-cells (in darker shades). The annotation boxes can be drawn by dragging a rectangle over each segment with the respective label from the palette on the right.
+
+we distributed the annotation workload and performed continuous quality controls. Phase one and two required a small team of experts only. For phases three and four, a group of 40 dedicated annotators were assembled and supervised.
+Phase 1: Data selection and preparation. Our inclusion criteria for documents were described in Section 3. A large effort went into ensuring that all documents are free to use. The data sources
+include publication repositories such as arXiv$^{3}$, government offices, company websites as well as data directory services for financial reports and patents. Scanned documents were excluded wherever possible because they can be rotated or skewed. This would not allow us to perform annotation with rectangular bounding-boxes and therefore complicate the annotation process.
+Preparation work included uploading and parsing the sourced PDF documents in the Corpus Conversion Service (CCS) [22], a cloud-native platform which provides a visual annotation interface and allows for dataset inspection and analysis. The annotation interface of CCS is shown in Figure 3. The desired balance of pages between the different document categories was achieved by selective subsampling of pages with certain desired properties. For example, we made sure to include the title page of each document and bias the remaining page selection to those with figures or tables. The latter was achieved by leveraging pre-trained object detection models from PubLayNet, which helped us estimate how many figures and tables a given page contains.
+Phase 2: Label selection and guideline. We reviewed the collected documents and identified the most common structural features they exhibit. This was achieved by identifying recurrent layout elements and led us to the definition of 11 distinct class labels. These 11 class labels are Caption , Footnote , Formula , List-item , Page-footer , Page-header , Picture , Section-header , Table , Text , and Title . Critical factors that were considered for the choice of these class labels were (1) the overall occurrence of the label, (2) the specificity of the label, (3) recognisability on a single page (i.e. no need for context from previous or next page) and (4) overall coverage of the page. Specificity ensures that the choice of label is not ambiguous, while coverage ensures that all meaningful items on a page can be annotated. We refrained from class labels that are very specific to a document category, such as Abstract in the Scientific Articles category. We also avoided class labels that are tightly linked to the semantics of the text. Labels such as Author and Affiliation , as seen in DocBank, are often only distinguishable by discriminating on
+the textual content of an element, which goes beyond visual layout recognition, in particular outside the Scientific Articles category.
+At first sight, the task of visual document-layout interpretation appears intuitive enough to obtain plausible annotations in most cases. However, during early trial-runs in the core team, we observed many cases in which annotators use different annotation styles, especially for documents with challenging layouts. For example, if a figure is presented with subfigures, one annotator might draw a single figure bounding-box, while another might annotate each subfigure separately. The same applies for lists, where one might annotate all list items in one block or each list item separately. In essence, we observed that challenging layouts would be annotated in different but plausible ways. To illustrate this, we show in Figure 4 multiple examples of plausible but inconsistent annotations on the same pages.
+Obviously, this inconsistency in annotations is not desirable for datasets which are intended to be used for model training. To minimise these inconsistencies, we created a detailed annotation guideline. While perfect consistency across 40 annotation staff members is clearly not possible to achieve, we saw a huge improvement in annotation consistency after the introduction of our annotation guideline. A few selected, non-trivial highlights of the guideline are:
+(1) Every list-item is an individual object instance with class label List-item . This definition is different from PubLayNet and DocBank, where all list-items are grouped together into one List object.
+(2) A List-item is a paragraph with hanging indentation. Single-line elements can qualify as List-item if the neighbour elements expose hanging indentation. Bullet or enumeration symbols are not a requirement.
+(3) For every Caption , there must be exactly one corresponding Picture or Table .
+(4) Connected sub-pictures are grouped together in one Picture object.
+(5) Formula numbers are included in a Formula object.
+(6) Emphasised text (e.g. in italic or bold) at the beginning of a paragraph is not considered a Section-header , unless it appears exclusively on its own line.
+The complete annotation guideline is over 100 pages long and a detailed description is obviously out of scope for this paper. Nevertheless, it will be made publicly available alongside DocLayNet for future reference.
+Phase 3: Training. After a first trial with a small group of people, we realised that providing the annotation guideline and a set of random practice pages did not yield the desired quality level for layout annotation. Therefore we prepared a subset of pages with two different complexity levels, each with a practice and an exam part. 974 pages were reference-annotated by one proficient core team member. Annotation staff were then given the task to annotate the same subsets (blinded from the reference). By comparing the annotations of each staff member with the reference annotations, we could quantify how closely their annotations matched the reference. Only after passing two exam levels with high annotation quality, staff were admitted into the production phase. Practice iterations
+
Figure 4: Examples of plausible annotation alternatives for the same page. Criteria in our annotation guideline can resolve cases A to C, while the case D remains ambiguous.
+
+were carried out over a timeframe of 12 weeks, after which 8 of the 40 initially allocated annotators did not pass the bar.
+Phase 4: Production annotation. The previously selected 80K pages were annotated with the defined 11 class labels by 32 annotators. This production phase took around three months to complete. All annotations were created online through CCS, which visualises the programmatic PDF text-cells as an overlay on the page. The page annotations are obtained by drawing rectangular bounding-boxes, as shown in Figure 3. With regard to the annotation practices, we implemented a few constraints and capabilities on the tooling level. First, we only allow non-overlapping, vertically oriented, rectangular boxes. For the large majority of documents, this constraint was sufficient and it sped up the annotation considerably in comparison with arbitrary segmentation shapes. Second, annotator staff were not able to see each other's annotations. This was enforced by design to avoid any bias in the annotation, which could skew the numbers of the inter-annotator agreement (see Table 1). We wanted
+Table 2: Prediction performance (mAP@0.5-0.95) of object detection networks on DocLayNet test set. The MRCNN (Mask R-CNN) and FRCNN (Faster R-CNN) models with ResNet-50 or ResNet-101 backbone were trained based on the network architectures from the detectron2 model zoo (Mask R-CNN R50, R101-FPN 3x, Faster R-CNN R101-FPN 3x), with default configurations. The YOLO implementation utilized was YOLOv5x6 [13]. All models were initialised using pre-trained weights from the COCO 2017 dataset.
+
+
+
+to avoid this at any cost in order to have clear, unbiased baseline numbers for human document-layout annotation. Third, we introduced the feature of snapping boxes around text segments to obtain a pixel-accurate annotation and again reduce time and effort. The CCS annotation tool automatically shrinks every user-drawn box to the minimum bounding-box around the enclosed text-cells for all purely text-based segments, which excludes only Table and Picture . For the latter, we instructed annotation staff to minimise inclusion of surrounding whitespace while including all graphical lines. A downside of snapping boxes to enclosed text cells is that some wrongly parsed PDF pages cannot be annotated correctly and need to be skipped. Fourth, we established a way to flag pages as rejected for cases where no valid annotation according to the label guidelines could be achieved. Example cases for this would be PDF pages that render incorrectly or contain layouts that are impossible to capture with non-overlapping rectangles. Such rejected pages are not contained in the final dataset. With all these measures in place, experienced annotation staff managed to annotate a single page in a typical timeframe of 20s to 60s, depending on its complexity.
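The box-snapping behaviour described above can be pictured with a small helper. This is an illustrative sketch of the idea only, not the CCS implementation, and it assumes axis-aligned (x0, y0, x1, y1) boxes.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x0, y0, x1, y1), axis-aligned

def snap_to_text_cells(drawn: Box, cells: List[Box]) -> Box:
    """Shrink a user-drawn box to the minimal bounding-box of the text-cells it fully encloses."""
    x0, y0, x1, y1 = drawn
    inside = [c for c in cells if c[0] >= x0 and c[1] >= y0 and c[2] <= x1 and c[3] <= y1]
    if not inside:
        # Nothing to snap to (e.g. Picture or Table segments): keep the drawn box.
        return drawn
    return (min(c[0] for c in inside), min(c[1] for c in inside),
            max(c[2] for c in inside), max(c[3] for c in inside))
```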
+5 EXPERIMENTS
+The primary goal of DocLayNet is to obtain high-quality ML models capable of accurate document-layout analysis on a wide variety of challenging layouts. As discussed in Section 2, object detection models are currently the easiest to use, due to the standardisation of ground-truth data in COCO format [16] and the availability of general frameworks such as detectron2 [17]. Furthermore, baseline numbers in PubLayNet and DocBank were obtained using standard object detection models such as Mask R-CNN and Faster R-CNN. As such, we will relate to these object detection methods in this
+
Figure 5: Prediction performance (mAP@0.5-0.95) of a Mask R-CNN network with ResNet50 backbone trained on increasing fractions of the DocLayNet dataset. The learning curve flattens around the 80% mark, indicating that increasing the size of the DocLayNet dataset with similar data will not yield significantly better predictions.
+
+paper and leave the detailed evaluation of more recent methods mentioned in Section 2 for future work.
+In this section, we will present several aspects related to the performance of object detection models on DocLayNet. Similarly to PubLayNet, we will evaluate the quality of their predictions using mean average precision (mAP) averaged over 10 IoU overlap thresholds ranging from 0.5 to 0.95 in steps of 0.05 (mAP@0.5-0.95). These scores are computed by leveraging the evaluation code provided by the COCO API [16].
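As a concrete illustration, mAP@0.5-0.95 can be obtained from ground-truth and predictions in COCO format via pycocotools; the file names below are placeholders.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("doclaynet_test.json")                 # ground-truth in COCO format
coco_dt = coco_gt.loadRes("model_predictions.json")   # detections in COCO results format

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()
map_50_95 = evaluator.stats[0]  # AP averaged over IoU thresholds 0.50:0.05:0.95
```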
+Baselines for Object Detection
+In Table 2, we present baseline experiments (given in mAP) on Mask R-CNN [12], Faster R-CNN [11], and YOLOv5 [13]. Both training and evaluation were performed on RGB images with dimensions of 1025 × 1025 pixels. For training, we only used one annotation in case of redundantly annotated pages. As one can observe, the variation in mAP between the models is rather low, but overall between 6 and 10% lower than the mAP computed from the pairwise human annotations on triple-annotated pages. This gives a good indication that the DocLayNet dataset poses a worthwhile challenge for the research community to close the gap between human recognition and ML approaches. It is interesting to see that Mask R-CNN and Faster R-CNN produce very comparable mAP scores, indicating that pixel-based image segmentation derived from bounding-boxes does not help to obtain better predictions. On the other hand, the more recent Yolov5x model does very well and even out-performs humans on selected labels such as Text , Table and Picture . This is not entirely surprising, as Text , Table and Picture are abundant and the most visually distinctive in a document.
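A rough sketch of such a baseline run with detectron2 defaults (Mask R-CNN R50-FPN 3x from the model zoo, COCO-pretrained weights, 11 classes) might look as follows; dataset names and paths are placeholders, not the authors' training setup.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Register the (assumed) augmented COCO files and page images.
register_coco_instances("doclaynet_train", {}, "COCO/train.json", "PNG/")
register_coco_instances("doclaynet_test", {}, "COCO/test.json", "PNG/")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("doclaynet_train",)
cfg.DATASETS.TEST = ("doclaynet_test",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 11  # the 11 DocLayNet class labels

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
# trainer.train()
```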
+
Table 3: Performance of a Mask R-CNN R50 network in mAP@0.5-0.95 scores trained on DocLayNet with different class label sets. The reduced label sets were obtained by either down-mapping or dropping labels.
+
+
+
+Learning Curve
+One of the fundamental questions related to any dataset is whether it is "large enough". To answer this question for DocLayNet, we performed a data ablation study in which we evaluated a Mask R-CNN model trained on increasing fractions of the DocLayNet dataset. As can be seen in Figure 5, the mAP score rises sharply in the beginning and eventually levels out. To estimate the error-bar on the metrics, we ran the training five times on the entire data-set. This resulted in a 1% error-bar, depicted by the shaded area in Figure 5. In the inset of Figure 5, we show the exact same data-points, but with a logarithmic scale on the x-axis. As is expected, the mAP score increases linearly as a function of the data-size in the inset. The curve ultimately flattens out between the 80% and 100% mark, with the 80% mark falling within the error-bars of the 100% mark. This provides a good indication that the model would not improve significantly by further increasing the data size. Rather, it would probably benefit more from improved data consistency (as discussed in Section 3), data augmentation methods [23], or the addition of more document categories and styles.
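The dataset fractions used for such a learning curve can be built by subsampling the COCO ground-truth. The sketch below is not the authors' ablation code, just one plausible way to produce the fractional training sets.

```python
import json
import random

def subsample_coco(gt_path: str, fraction: float, seed: int = 0) -> dict:
    """Keep a random fraction of the images (and their annotations) from a COCO ground-truth file."""
    coco = json.load(open(gt_path))
    rng = random.Random(seed)
    n_keep = int(len(coco["images"]) * fraction)
    keep_ids = {img["id"] for img in rng.sample(coco["images"], n_keep)}
    return {
        **coco,
        "images": [img for img in coco["images"] if img["id"] in keep_ids],
        "annotations": [a for a in coco["annotations"] if a["image_id"] in keep_ids],
    }

train_25pct = subsample_coco("COCO/train.json", 0.25)  # e.g. the 25% data-point of the curve
```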
+Impact of Class Labels
+The choice and number of labels can have a significant effect on the overall model performance. Since PubLayNet, DocBank and DocLayNet all have different label sets, it is of particular interest to understand and quantify this influence of the label set on the model performance. We investigate this by either down-mapping labels into more common ones (e.g. Caption → Text ) or excluding them from the annotations entirely. Furthermore, it must be stressed that all mappings and exclusions were performed on the data before model training. In Table 3, we present the mAP scores for a Mask R-CNN R50 network on different label sets. Where a label is down-mapped, we show its corresponding label, otherwise it was excluded. We present three different label sets, with 6, 5 and 4 different labels respectively. The set of 5 labels contains the same labels as PubLayNet. However, due to the different definition of
+
Table 4: Performance of a Mask R-CNN R50 network with document-wise and page-wise split for different label sets. Naive page-wise split will result in a ~10% point improvement.
+
+
+
+lists in PubLayNet (grouped list-items) versus DocLayNet (separate list-items), the label set of size 4 is the closest to PubLayNet, under the assumption that the List is down-mapped to Text in PubLayNet. The results in Table 3 show that the prediction accuracy on the remaining class labels does not change significantly when other classes are merged into them. The overall macro-average improves by around 5%, in particular when Page-footer and Page-header are excluded.
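The down-mapping and exclusion of labels is a pure preprocessing step on the COCO annotations; a sketch of that step is shown below. The concrete mapping and exclusion sets here are only examples, the label sets actually evaluated are those listed in Table 3.

```python
# Example label down-mapping / exclusion applied to the COCO annotations before training.
DOWN_MAP = {"Caption": "Text", "Footnote": "Text", "Formula": "Text"}  # illustrative mapping
EXCLUDE = {"Page-header", "Page-footer"}                               # illustrative exclusions

def remap_labels(coco: dict) -> dict:
    kept = [c for c in coco["categories"] if c["name"] not in DOWN_MAP and c["name"] not in EXCLUDE]
    id_by_name = {c["name"]: c["id"] for c in kept}
    name_by_id = {c["id"]: c["name"] for c in coco["categories"]}
    annotations = []
    for ann in coco["annotations"]:
        name = DOWN_MAP.get(name_by_id[ann["category_id"]], name_by_id[ann["category_id"]])
        if name in EXCLUDE:
            continue  # dropped label
        annotations.append({**ann, "category_id": id_by_name[name]})
    return {**coco, "categories": kept, "annotations": annotations}
```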
+Impact of Document Split in Train and Test Set
+Many documents in DocLayNet have a unique styling. In order to avoid overfitting on a particular style, we have split the train-, test- and validation-sets of DocLayNet on document boundaries, i.e. every document contributes pages to only one set. To the best of our knowledge, this was not considered in PubLayNet or DocBank. To quantify how this affects model performance, we trained and evaluated a Mask R-CNN R50 model on a modified dataset version. Here, the train-, test- and validation-sets were obtained by a randomised draw over the individual pages. As can be seen in Table 4, the difference in model performance is surprisingly large: page-wise splitting gains ~10% in mAP over the document-wise splitting. Thus, random page-wise splitting of DocLayNet can easily lead to accidental overestimation of model performance and should be avoided.
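A document-wise split as described above can be derived from the page-to-document mapping. The sketch below assumes the custom "original_filename" field mentioned in Section 3 identifies the source document of each page image; the field name and split ratio are illustrative.

```python
import random
from collections import defaultdict

def split_by_document(images: list, test_fraction: float = 0.1, seed: int = 0):
    """Split page images so that every source document ends up in exactly one subset."""
    pages_by_doc = defaultdict(list)
    for img in images:
        pages_by_doc[img["original_filename"]].append(img)  # assumed custom COCO field
    documents = sorted(pages_by_doc)
    random.Random(seed).shuffle(documents)
    n_test = int(len(documents) * test_fraction)
    test_docs, train_docs = documents[:n_test], documents[n_test:]
    train = [img for doc in train_docs for img in pages_by_doc[doc]]
    test = [img for doc in test_docs for img in pages_by_doc[doc]]
    return train, test
```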
+Dataset Comparison
+Throughout this paper, we claim that DocLayNet's wider variety of document layouts leads to more robust layout detection models. In Table 5, we provide evidence for that. We trained models on each of the available datasets (PubLayNet, DocBank and DocLayNet) and evaluated them on the test sets of the other datasets. Due to the different label sets and annotation styles, a direct comparison is not possible. Hence, we focussed on the common labels among the datasets. Between PubLayNet and DocLayNet, these are Picture ,
+Table 5: Prediction Performance (mAP@0.5-0.95) of a Mask R-CNN R50 network across the PubLayNet, DocBank & DocLayNet data-sets. By evaluating on common label classes of each dataset, we observe that the DocLayNet-trained model has much less pronounced variations in performance across all datasets.
+
+
+
+Section-header , Table and Text . Before training, we either mapped or excluded DocLayNet's other labels as specified in Table 3, and also mapped PubLayNet's List to Text . Note that the different clustering of lists (by list-element vs. whole list objects) naturally decreases the mAP score for Text .
+For comparison of DocBank with DocLayNet, we trained only on Picture and Table clusters of each dataset. We had to exclude Text because successive paragraphs are often grouped together into a single object in DocBank. This paragraph grouping is incompatible with the individual paragraphs of DocLayNet. As can be seen in Table 5, DocLayNet-trained models yield better performance than models trained on the previous datasets. It is noteworthy that the models trained on PubLayNet and DocBank perform very well on their own test set, but have a much lower performance on the foreign datasets. While this also applies to DocLayNet, the difference is far less pronounced. Thus we conclude that DocLayNet-trained models are overall more robust and will produce better results for challenging, unseen layouts.
+Example Predictions
+To conclude this section, we illustrate the quality of layout predictions one can expect from DocLayNet-trained models by providing a selection of examples without any further post-processing applied. Figure 6 shows selected layout predictions on pages from the test-set of DocLayNet. Results look decent in general across document categories; however, one can also observe mistakes such as overlapping clusters of different classes, or entirely missing boxes due to low confidence.
+6 CONCLUSION
+In this paper, we presented the DocLayNet dataset. It provides the document conversion and layout analysis research community with a new and challenging dataset to improve and fine-tune novel ML methods on. In contrast to many other datasets, DocLayNet was created by human annotation in order to obtain reliable layout ground-truth on a wide variety of publication- and typesetting-styles. Including a large proportion of documents outside the scientific publishing domain adds significant value in this respect.
+From the dataset, we have derived on the one hand reference metrics for human performance on document-layout annotation (through double and triple annotations) and on the other hand evaluated the baseline performance of commonly used object detection methods. We also illustrated the impact of various dataset-related aspects on model performance through data-ablation experiments, both from a size and class-label perspective. Last but not least, we compared the accuracy of models trained on other public datasets and showed that DocLayNet trained models are more robust.
+To date, there is still a significant gap between human and ML accuracy on the layout interpretation task, and we hope that this work will inspire the research community to close that gap.
+REFERENCES
+[1] Max Göbel, Tamir Hassan, Ermelinda Oro, and Giorgio Orsi. ICDAR 2013 table competition. In 2013 12th International Conference on Document Analysis and Recognition , pages 1449-1453, 2013.
+[2] Christian Clausner, Apostolos Antonacopoulos, and Stefan Pletschacher. ICDAR2017 competition on recognition of documents with complex layouts - RDCL2017. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR) , volume 01, pages 1404-1410, 2017.
+[3] Hervé Déjean, Jean-Luc Meunier, Liangcai Gao, Yilun Huang, Yu Fang, Florian Kleber, and Eva-Maria Lang. ICDAR 2019 Competition on Table Detection and Recognition (cTDaR), April 2019. http://sac.founderit.com/.
+[4] Antonio Jimeno Yepes, Peter Zhong, and Douglas Burdick. Competition on scientific literature parsing. In Proceedings of the International Conference on Document Analysis and Recognition , ICDAR, pages 605-617. LNCS 12824, Springer-Verlag, sep 2021.
+[5] Logan Markewich, Hao Zhang, Yubin Xing, Navid Lambert-Shirzad, Jiang Zhexin, Roy Lee, Zhi Li, and Seok-Bum Ko. Segmentation for document layout analysis: not dead yet. International Journal on Document Analysis and Recognition (IJDAR) , pages 1-11, 01 2022.
+[6] Xu Zhong, Jianbin Tang, and Antonio Jimeno-Yepes. PubLayNet: Largest dataset ever for document layout analysis. In Proceedings of the International Conference on Document Analysis and Recognition , ICDAR, pages 1015-1022, sep 2019.
+[7] Minghao Li, Yiheng Xu, Lei Cui, Shaohan Huang, Furu Wei, Zhoujun Li, and Ming Zhou. DocBank: A benchmark dataset for document layout analysis. In Proceedings of the 28th International Conference on Computational Linguistics , COLING, pages 949-960. International Committee on Computational Linguistics, dec 2020.
+[8] Riaz Ahmad, Muhammad Tanvir Afzal, and M. Qadir. Information extraction from PDF sources based on rule-based system using integrated formats. In SemWebEval@ESWC , 2016.
+[9] Ross B. Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition , CVPR, pages 580-587. IEEE Computer Society, jun 2014.
+[10] Ross B. Girshick. Fast R-CNN. In 2015 IEEE International Conference on Computer Vision , ICCV, pages 1440-1448. IEEE Computer Society, dec 2015.
+[11] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence , 39(6):1137-1149, 2017.
+[12] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B. Girshick. Mask R-CNN. In IEEE International Conference on Computer Vision , ICCV, pages 2980-2988. IEEE Computer Society, Oct 2017.
+[13] Glenn Jocher, Alex Stoken, Ayush Chaurasia, Jirka Borovec, NanoCode012, TaoXie, Yonghye Kwon, Kalen Michael, Liu Changyu, Jiacong Fang, Abhiram V, Laughing, tkianai, yxNONG, Piotr Skalski, Adam Hogan, Jebastin Nadar, imyhxy, Lorenzo Mammana, Alex Wang, Cristi Fati, Diego Montes, Jan Hajek, Laurentiu
+
Figure 6: Example layout predictions on selected pages from the DocLayNet test-set. (A, D) exhibit favourable results on coloured backgrounds. (B, C) show accurate list-item and paragraph differentiation despite densely-spaced lines. (E) demonstrates good table and figure distinction. (F) shows predictions on a Chinese patent with multiple overlaps, label confusion and missing boxes.
+
+Diaconu, Mai Thanh Minh, Marc, albinxavi, fatih, oleg, and wanghao yang. ultralytics/yolov5: v6.0 - yolov5n nano models, roboflow integration, tensorflow export, opencv dnn support, October 2021.
+[14] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. CoRR , abs/2005.12872, 2020.
+[15] Mingxing Tan, Ruoming Pang, and Quoc V. Le. Efficientdet: Scalable and efficient object detection. CoRR , abs/1911.09070, 2019.
+[16] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: common objects in context, 2014.
+[17] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2, 2019.
+[18] Nikolaos Livathinos, Cesar Berrospi, Maksym Lysak, Viktor Kuropiatnyk, Ahmed Nassar, Andre Carvalho, Michele Dolfi, Christoph Auer, Kasper Dinkla, and Peter W. J. Staar. Robust PDF document conversion using recurrent neural networks. In Proceedings of the 35th Conference on Artificial Intelligence , AAAI, pages 15137-15145, feb 2021.
+[19] Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. Layoutlm: Pre-training of text and layout for document image understanding. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , KDD, pages 1192-1200, New York, USA, 2020. Association for Computing Machinery.
+[20] Shoubin Li, Xuyan Ma, Shuaiqun Pan, Jun Hu, Lin Shi, and Qing Wang. VTLayout: Fusion of visual and text features for document layout analysis, 2021.
+[21] Peng Zhang, Can Li, Liang Qiao, Zhanzhan Cheng, Shiliang Pu, Yi Niu, and Fei Wu. VSR: A unified framework for document layout analysis combining vision, semantics and relations, 2021.
+[22] Peter W J Staar, Michele Dolfi, Christoph Auer, and Costas Bekas. Corpus conversion service: A machine learning platform to ingest documents at scale. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , KDD, pages 774-782. ACM, 2018.
+[23] Connor Shorten and Taghi M. Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data , 6(1):60, 2019.
+
\ No newline at end of file
diff --git a/tests/data/2206.01062.json b/tests/data/2206.01062.json
index 0786a34d..32114c4d 100644
--- a/tests/data/2206.01062.json
+++ b/tests/data/2206.01062.json
@@ -1 +1 @@
-{"_name": "", "type": "pdf-document", "description": {"title": null, "abstract": null, "authors": null, "affiliations": null, "subjects": null, "keywords": null, "publication_date": null, "languages": null, "license": null, "publishers": null, "url_refs": null, "references": null, "publication": null, "reference_count": null, "citation_count": null, "citation_date": null, "advanced": null, "analytics": null, "logs": [], "collection": null, "acquisition": null}, "file-info": {"filename": "2206.01062.pdf", "filename-prov": null, "document-hash": "5dfbd8c115a15fd3396b68409124cfee29fc8efac7b5c846634ff924e635e0dc", "#-pages": 9, "collection-name": null, "description": null, "page-hashes": [{"hash": "3c76b6d3fd82865e42c51d5cbd7d1a9996dba7902643b919acc581e866b92716", "model": "default", "page": 1}, {"hash": "5ccfaddd314d3712cbabc857c8c0f33d1268341ce37b27089857cbf09f0522d4", "model": "default", "page": 2}, {"hash": "d2dc51ad0a01ee9486ffe248649ee1cd10ce35773de8e4b21abf30d310f4fc26", "model": "default", "page": 3}, {"hash": "310121977375f8f1106412189943bd70f121629b2b4d35394077233dedbfb041", "model": "default", "page": 4}, {"hash": "09fa72b602eb0640669844acabc17ef494802a4a9188aeaaf0e0131c496e6951", "model": "default", "page": 5}, {"hash": "ec3fa60f136f3d9f5fa790ab27f5d1c14e5622573c52377b909b591d0be0ea44", "model": "default", "page": 6}, {"hash": "ec1bc56fe581ce95615b1fab11c3ba8fc89662acf2f53446decd380a155b06dd", "model": "default", "page": 7}, {"hash": "fbd2b06876dddc19ee08e0a9751d978c03e6943b74bedf1d83d6528cd4f8954d", "model": "default", "page": 8}, {"hash": "6cfa4eb4410fa9972da289dbf8d8cc585d317a192e1214c778ddd7768e98f311", "model": "default", "page": 9}]}, "main-text": [{"text": "DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [107.30000305175781, 672.3833618164062, 505.1857604980469, 709.082275390625], "page": 1, "span": [0, 71], "__ref_s3_data": null}]}, {"text": "Birgit Pfitzmann IBM Research Rueschlikon, Switzerland bpf@zurich.ibm.com", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [90.94670867919922, 611.2825317382812, 193.91998291015625, 658.7803344726562], "page": 1, "span": [0, 73], "__ref_s3_data": null}]}, {"text": "Christoph Auer IBM Research Rueschlikon, Switzerland cau@zurich.ibm.com", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [254.97935485839844, 611.7597045898438, 357.8802490234375, 658.7174072265625], "page": 1, "span": [0, 71], "__ref_s3_data": null}]}, {"text": "Michele Dolfi IBM Research Rueschlikon, Switzerland dol@zurich.ibm.com", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [419.0672302246094, 611.7597045898438, 522.0595703125, 658.9878540039062], "page": 1, "span": [0, 70], "__ref_s3_data": null}]}, {"text": "Ahmed S. 
Nassar IBM Research Rueschlikon, Switzerland ahn@zurich.ibm.com", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [171.90907287597656, 553.3746948242188, 275.3072509765625, 600.1580200195312], "page": 1, "span": [0, 72], "__ref_s3_data": null}]}, {"text": "Peter Staar IBM Research Rueschlikon, Switzerland taa@zurich.ibm.com", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [336.5292053222656, 553.3746948242188, 439.84405517578125, 599.942626953125], "page": 1, "span": [0, 68], "__ref_s3_data": null}]}, {"text": "ABSTRACT", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [53.33011245727539, 533.9879760742188, 112.2127456665039, 544.47509765625], "page": 1, "span": [0, 8], "__ref_s3_data": null}]}, {"text": "Accurate document layout analysis is a key requirement for highquality PDF document conversion. With the recent availability of public, large ground-truth datasets such as PubLayNet and DocBank, deep-learning models have proven to be very effective at layout detection and segmentation. While these datasets are of adequate size to train such models, they severely lack in layout variability since they are sourced from scientific article repositories such as PubMed and arXiv only. Consequently, the accuracy of the layout segmentation drops significantly when these models are applied on more challenging and diverse layouts. In this paper, we present DocLayNet , a new, publicly available, document-layout annotation dataset in COCO format. It contains 80863 manually annotated pages from diverse data sources to represent a wide variability in layouts. For each PDF page, the layout annotations provide labelled bounding-boxes with a choice of 11 distinct classes. DocLayNet also provides a subset of double- and triple-annotated pages to determine the inter-annotator agreement. In multiple experiments, we provide baseline accuracy scores (in mAP) for a set of popular object detection models. We also demonstrate that these models fall approximately 10% behind the inter-annotator agreement. Furthermore, we provide evidence that DocLayNet is of sufficient size. Lastly, we compare models trained on PubLayNet, DocBank and DocLayNet, showing that layout predictions of the DocLayNettrained models are more robust and thus the preferred choice for general-purpose document-layout analysis.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [52.857933044433594, 257.10565185546875, 295.5601806640625, 529.5941162109375], "page": 1, "span": [0, 1595], "__ref_s3_data": null}]}, {"text": "CCS CONCEPTS", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [53.36912155151367, 230.69398498535156, 134.81988525390625, 241.21551513671875], "page": 1, "span": [0, 12], "__ref_s3_data": null}]}, {"text": "\u00b7 Information systems \u2192 Document structure ; \u00b7 Applied computing \u2192 Document analysis ; \u00b7 Computing methodologies \u2192 Machine learning ; Computer vision ; Object detection ;", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.02470016479492, 194.8704071044922, 297.8529357910156, 226.241455078125], "page": 1, "span": [0, 170], "__ref_s3_data": null}]}, {"text": "Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. 
Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.33460235595703, 117.82738494873047, 295.11798095703125, 158.33511352539062], "page": 1, "span": [0, 397], "__ref_s3_data": null}]}, {"text": "KDD '22, August 14-18, 2022, Washington, DC, USA \u00a9 2022 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-9385-0/22/08. https://doi.org/10.1145/3534678.3539043", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.31700134277344, 85.73310852050781, 197.8627471923828, 116.91976928710938], "page": 1, "span": [0, 168], "__ref_s3_data": null}]}, {"text": "Figure 1: Four examples of complex page layouts across different document categories", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [317.2291564941406, 232.3291473388672, 559.8057861328125, 252.12974548339844], "page": 1, "span": [0, 84], "__ref_s3_data": null}]}, {"name": "Picture", "type": "figure", "$ref": "#/figures/0"}, {"text": "KEYWORDS", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [317.11431884765625, 189.22499084472656, 379.82049560546875, 199.97215270996094], "page": 1, "span": [0, 8], "__ref_s3_data": null}]}, {"text": "PDF document conversion, layout segmentation, object-detection, data set, Machine Learning", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.2037658691406, 164.9988250732422, 559.2164306640625, 184.67845153808594], "page": 1, "span": [0, 90], "__ref_s3_data": null}]}, {"text": "ACM Reference Format:", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [317.3434753417969, 144.41390991210938, 404.6536560058594, 152.36439514160156], "page": 1, "span": [0, 21], "__ref_s3_data": null}]}, {"text": "Birgit Pfitzmann, Christoph Auer, Michele Dolfi, Ahmed S. Nassar, and Peter Staar. 2022. DocLayNet: A Large Human-Annotated Dataset for DocumentLayout Analysis. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22), August 14-18, 2022, Washington, DC, USA. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/ 3534678.3539043", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.1117248535156, 84.62297058105469, 559.5494995117188, 142.41151428222656], "page": 1, "span": [0, 374], "__ref_s3_data": null}]}, {"text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA Birgit Pfitzmann, Christoph Auer, Michele Dolfi, Ahmed S. Nassar, and Peter Staar", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [53.19501876831055, 722.7692260742188, 558.4357299804688, 732.1524047851562], "page": 2, "span": [0, 130], "__ref_s3_data": null}]}, {"text": "1 INTRODUCTION", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [53.79800033569336, 695.8309936523438, 156.52899169921875, 706.4523315429688], "page": 2, "span": [0, 14], "__ref_s3_data": null}]}, {"text": "Despite the substantial improvements achieved with machine-learning (ML) approaches and deep neural networks in recent years, document conversion remains a challenging problem, as demonstrated by the numerous public competitions held on this topic [1-4]. The challenge originates from the huge variability in PDF documents regarding layout, language and formats (scanned, programmatic or a combination of both). 
Engineering a single ML model that can be applied on all types of documents and provides high-quality layout segmentation remains to this day extremely challenging [5]. To highlight the variability in document layouts, we show a few example documents from the DocLayNet dataset in Figure 1.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [52.80397415161133, 562.986572265625, 303.1766357421875, 681.3472290039062], "page": 2, "span": [0, 702], "__ref_s3_data": null}]}, {"text": "A key problem in the process of document conversion is to understand the structure of a single document page, i.e. which segments of text should be grouped together in a unit. To train models for this task, there are currently two large datasets available to the community, PubLayNet [6] and DocBank [7]. They were introduced in 2019 and 2020 respectively and significantly accelerated the implementation of layout detection and segmentation models due to their sizes of 300K and 500K ground-truth pages. These sizes were achieved by leveraging an automation approach. The benefit of automated ground-truth generation is obvious: one can generate large ground-truth datasets at virtually no cost. However, the automation introduces a constraint on the variability in the dataset, because corresponding structured source data must be available. PubLayNet and DocBank were both generated from scientific document repositories (PubMed and arXiv), which provide XML or L A T E X sources. Those scientific documents present a limited variability in their layouts, because they are typeset in uniform templates provided by the publishers. Obviously, documents such as technical manuals, annual company reports, legal text, government tenders, etc. have very different and partially unique layouts. As a consequence, the layout predictions obtained from models trained on PubLayNet or DocBank is very reasonable when applied on scientific documents. However, for more artistic or free-style layouts, we see sub-par prediction quality from these models, which we demonstrate in Section 5.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [52.89326477050781, 289.0808410644531, 295.5641174316406, 561.2902221679688], "page": 2, "span": [0, 1580], "__ref_s3_data": null}]}, {"text": "In this paper, we present the DocLayNet dataset. It provides pageby-page layout annotation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique document pages, of which a fraction carry double- or triple-annotations. DocLayNet is similar in spirit to PubLayNet and DocBank and will likewise be made available to the public 1 in order to stimulate the document-layout analysis community. 
It distinguishes itself in the following aspects:", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.12458419799805, 212.36782836914062, 295.56396484375, 287.0208740234375], "page": 2, "span": [0, 462], "__ref_s3_data": null}]}, {"text": "(1) Human Annotation : In contrast to PubLayNet and DocBank, we relied on human annotation instead of automation approaches to generate the data set.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [64.64593505859375, 176.96405029296875, 295.5616455078125, 208.28524780273438], "page": 2, "span": [0, 149], "__ref_s3_data": null}]}, {"text": "(2) Large Layout Variability : We include diverse and complex layouts from a large variety of public sources.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [64.50244140625, 154.92233276367188, 294.3029479980469, 174.95782470703125], "page": 2, "span": [0, 109], "__ref_s3_data": null}]}, {"text": "(3) Detailed Label Set : We define 11 class labels to distinguish layout features in high detail. PubLayNet provides 5 labels; DocBank provides 13, although not a superset of ours.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [64.18266296386719, 121.99307250976562, 294.6838073730469, 153.57122802734375], "page": 2, "span": [0, 180], "__ref_s3_data": null}]}, {"text": "(4) Redundant Annotations : A fraction of the pages in the DocLayNet data set carry more than one human annotation.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [64.30329132080078, 99.92230987548828, 295.56439208984375, 120.3491439819336], "page": 2, "span": [0, 115], "__ref_s3_data": null}]}, {"text": "$^{1}$https://developer.ibm.com/exchanges/data/all/doclaynet", "type": "footnote", "name": "Footnote", "font": null, "prov": [{"bbox": [53.60314178466797, 82.76702880859375, 216.05824279785156, 90.63584899902344], "page": 2, "span": [0, 60], "__ref_s3_data": null}]}, {"text": "This enables experimentation with annotation uncertainty and quality control analysis.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [341.2403564453125, 685.3028564453125, 558.5009765625, 705.5034790039062], "page": 2, "span": [0, 86], "__ref_s3_data": null}]}, {"text": "(5) Pre-defined Train-, Test- & Validation-set : Like DocBank, we provide fixed train-, test- & validation-sets to ensure proportional representation of the class-labels. Further, we prevent leakage of unique layouts across sets, which has a large effect on model accuracy scores.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [328.06146240234375, 630.4351806640625, 559.7210083007812, 683.4995727539062], "page": 2, "span": [0, 280], "__ref_s3_data": null}]}, {"text": "All aspects outlined above are detailed in Section 3. In Section 4, we will elaborate on how we designed and executed this large-scale human annotation campaign. We will also share key insights and lessons learned that might prove helpful for other parties planning to set up annotation campaigns.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.0706787109375, 571.292724609375, 559.1903076171875, 624.9239501953125], "page": 2, "span": [0, 297], "__ref_s3_data": null}]}, {"text": "In Section 5, we will present baseline accuracy numbers for a variety of object detection methods (Faster R-CNN, Mask R-CNN and YOLOv5) trained on DocLayNet. 
We further show how the model performance is impacted by varying the DocLayNet dataset size, reducing the label set and modifying the train/test-split. Last but not least, we compare the performance of models trained on PubLayNet, DocBank and DocLayNet and demonstrate that a model trained on DocLayNet provides overall more robust layout recovery.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [316.9918518066406, 483.6390686035156, 559.5819702148438, 569.6455078125], "page": 2, "span": [0, 506], "__ref_s3_data": null}]}, {"text": "2 RELATED WORK", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [317.33935546875, 460.4820251464844, 422.0046081542969, 471.2471923828125], "page": 2, "span": [0, 14], "__ref_s3_data": null}]}, {"text": "While early approaches in document-layout analysis used rulebased algorithms and heuristics [8], the problem is lately addressed with deep learning methods. The most common approach is to leverage object detection models [9-15]. In the last decade, the accuracy and speed of these models has increased dramatically. Furthermore, most state-of-the-art object detection methods can be trained and applied with very little work, thanks to a standardisation effort of the ground-truth data format [16] and common deep-learning frameworks [17]. Reference data sets such as PubLayNet [6] and DocBank provide their data in the commonly accepted COCO format [16].", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [316.9687805175781, 327.7038269042969, 559.7161254882812, 446.38397216796875], "page": 2, "span": [0, 655], "__ref_s3_data": null}]}, {"text": "Lately, new types of ML models for document-layout analysis have emerged in the community [18-21]. These models do not approach the problem of layout analysis purely based on an image representation of the page, as computer vision methods do. Instead, they combine the text tokens and image representation of a page in order to obtain a segmentation. While the reported accuracies appear to be promising, a broadly accepted data format which links geometric and textual features has yet to establish.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.156982421875, 239.59246826171875, 559.1864624023438, 325.6906433105469], "page": 2, "span": [0, 500], "__ref_s3_data": null}]}, {"text": "3 THE DOCLAYNET DATASET", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [317.58740234375, 216.37100219726562, 477.8531799316406, 226.6800994873047], "page": 2, "span": [0, 23], "__ref_s3_data": null}]}, {"text": "DocLayNet contains 80863 PDF pages. Among these, 7059 carry two instances of human annotations, and 1591 carry three. This amounts to 91104 total annotation instances. The annotations provide layout information in the shape of labeled, rectangular boundingboxes. We define 11 distinct labels for layout features, namely Caption , Footnote , Formula , List-item , Page-footer , Page-header , Picture , Section-header , Table , Text , and Title . Our reasoning for picking this particular label set is detailed in Section 4.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.11236572265625, 116.19312286376953, 559.7131958007812, 202.27523803710938], "page": 2, "span": [0, 522], "__ref_s3_data": null}]}, {"text": "In addition to open intellectual property constraints for the source documents, we required that the documents in DocLayNet adhere to a few conditions. 
Firstly, we kept scanned documents", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.34619140625, 83.59282684326172, 558.5303344726562, 114.41421508789062], "page": 2, "span": [0, 186], "__ref_s3_data": null}]}, {"text": "DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [53.4626579284668, 722.95458984375, 347.0511779785156, 732.11474609375], "page": 3, "span": [0, 71], "__ref_s3_data": null}]}, {"text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [365.31488037109375, 723.0569458007812, 558.807861328125, 731.9796142578125], "page": 3, "span": [0, 48], "__ref_s3_data": null}]}, {"text": "Figure 2: Distribution of DocLayNet pages across document categories.", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [53.28777313232422, 536.294677734375, 294.0437316894531, 556.148193359375], "page": 3, "span": [0, 69], "__ref_s3_data": null}]}, {"name": "Picture", "type": "figure", "$ref": "#/figures/1"}, {"text": "to a minimum, since they introduce difficulties in annotation (see Section 4). As a second condition, we focussed on medium to large documents ( > 10 pages) with technical content, dense in complex tables, figures, plots and captions. Such documents carry a lot of information value, but are often hard to analyse with high accuracy due to their challenging layouts. Counterexamples of documents not included in the dataset are receipts, invoices, hand-written documents or photographs showing \"text in the wild\".", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.244232177734375, 424.931396484375, 294.5379943847656, 510.7526550292969], "page": 3, "span": [0, 513], "__ref_s3_data": null}]}, {"text": "The pages in DocLayNet can be grouped into six distinct categories, namely Financial Reports , Manuals , Scientific Articles , Laws & Regulations , Patents and Government Tenders . Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports 2 which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories ( Financial Reports and Manuals ) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.10974884033203, 282.6438293457031, 295.5604553222656, 423.1407775878906], "page": 3, "span": [0, 810], "__ref_s3_data": null}]}, {"text": "We did not control the document selection with regard to language. The vast majority of documents contained in DocLayNet (close to 95%) are published in English language. However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). 
While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [52.8973388671875, 183.77932739257812, 295.5615539550781, 281.3227233886719], "page": 3, "span": [0, 535], "__ref_s3_data": null}]}, {"text": "To ensure that future benchmarks in the document-layout analysis community can be easily compared, we have split up DocLayNet into pre-defined train-, test- and validation-sets. In this way, we can avoid spurious variations in the evaluation scores due to random splitting in train-, test- and validation-sets. We also ensured that less frequent labels are represented in train and test sets in equal proportions.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.209388732910156, 106.8985824584961, 295.56396484375, 182.471923828125], "page": 3, "span": [0, 413], "__ref_s3_data": null}]}, {"text": "$^{2}$e.g. AAPL from https://www.annualreports.com/", "type": "footnote", "name": "Footnote", "font": null, "prov": [{"bbox": [53.352603912353516, 83.35768127441406, 195.78997802734375, 91.47167205810547], "page": 3, "span": [0, 51], "__ref_s3_data": null}]}, {"text": "Table 1 shows the overall frequency and distribution of the labels among the different sets. Importantly, we ensure that subsets are only split on full-document boundaries. This avoids that pages of the same document are spread over train, test and validation set, which can give an undesired evaluation advantage to models and lead to overestimation of their prediction accuracy. We will show the impact of this decision in Section 5.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.0691833496094, 630.5088500976562, 559.1918334960938, 705.8527221679688], "page": 3, "span": [0, 435], "__ref_s3_data": null}]}, {"text": "In order to accommodate the different types of models currently in use by the community, we provide DocLayNet in an augmented COCO format [16]. This entails the standard COCO ground-truth file (in JSON format) with the associated page images (in PNG format, 1025 \u00d7 1025 pixels). Furthermore, custom fields have been added to each COCO record to specify document category, original document filename and page number. In addition, we also provide the original PDF pages, as well as sidecar files containing parsed PDF text and text-cell coordinates (in JSON). All additional files are linked to the primary page images by their matching filenames.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.05938720703125, 520.8086547851562, 558.862060546875, 628.44580078125], "page": 3, "span": [0, 645], "__ref_s3_data": null}]}, {"text": "Despite being cost-intense and far less scalable than automation, human annotation has several benefits over automated groundtruth generation. The first and most obvious reason to leverage human annotations is the freedom to annotate any type of document without requiring a programmatic source. For most PDF documents, the original source document is not available. The latter is not a hard constraint with human annotation, but it is for automated methods. A second reason to use human annotations is that the latter usually provide a more natural interpretation of the page layout. The human-interpreted layout can significantly deviate from the programmatic layout used in typesetting. 
For example, \"invisible\" tables might be used solely for aligning text paragraphs on columns. Such typesetting tricks might be interpreted by automated methods incorrectly as an actual table, while the human annotation will interpret it correctly as Text or other styles. The same applies to multi-line text elements, when authors decided to space them as \"invisible\" list elements without bullet symbols. A third reason to gather ground-truth through human annotation is to estimate a \"natural\" upper bound on the segmentation accuracy. As we will show in Section 4, certain documents featuring complex layouts can have different but equally acceptable layout interpretations. This natural upper bound for segmentation accuracy can be found by annotating the same pages multiple times by different people and evaluating the inter-annotator agreement. Such a baseline consistency evaluation is very useful to define expectations for a good target accuracy in trained deep neural network models and avoid overfitting (see Table 1). On the flip side, achieving high annotation consistency proved to be a key challenge in human annotation, as we outline in Section 4.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [316.88604736328125, 203.11082458496094, 559.7215576171875, 518.6715087890625], "page": 3, "span": [0, 1854], "__ref_s3_data": null}]}, {"text": "4 ANNOTATION CAMPAIGN", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [317.66510009765625, 174.8409881591797, 470.2132568359375, 185.15008544921875], "page": 3, "span": [0, 21], "__ref_s3_data": null}]}, {"text": "The annotation campaign was carried out in four phases. In phase one, we identified and prepared the data sources for annotation. In phase two, we determined the class labels and how annotations should be done on the documents in order to obtain maximum consistency. The latter was guided by a detailed requirement analysis and exhaustive experiments. In phase three, we trained the annotation staff and performed exams for quality assurance. In phase four,", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.0245056152344, 85.38961791992188, 559.7138061523438, 160.93588256835938], "page": 3, "span": [0, 457], "__ref_s3_data": null}]}, {"text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA Birgit Pfitzmann, Christoph Auer, Michele Dolfi, Ahmed S. Nassar, and Peter Staar", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [53.345272064208984, 723.0101318359375, 558.5491943359375, 732.1525268554688], "page": 4, "span": [0, 130], "__ref_s3_data": null}]}, {"text": "Table 1: DocLayNet dataset overview. Along with the frequency of each class label, we present the relative occurrence (as % of row \"Total\") in the train, test and validation sets. The inter-annotator agreement is computed as the mAP@0.5-0.95 metric between pairwise annotations from the triple-annotated pages, from which we obtain accuracy ranges.", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [52.74671936035156, 676.2418212890625, 558.5100708007812, 707.6976928710938], "page": 4, "span": [0, 348], "__ref_s3_data": null}]}, {"name": "Table", "type": "table", "$ref": "#/tables/0"}, {"text": "Figure 3: Corpus Conversion Service annotation user interface. The PDF page is shown in the background, with overlaid text-cells (in darker shades). 
The annotation boxes can be drawn by dragging a rectangle over each segment with the respective label from the palette on the right.", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [53.28383255004883, 185.58580017089844, 295.64874267578125, 237.99000549316406], "page": 4, "span": [0, 281], "__ref_s3_data": null}]}, {"name": "Picture", "type": "figure", "$ref": "#/figures/2"}, {"text": "we distributed the annotation workload and performed continuous quality controls. Phase one and two required a small team of experts only. For phases three and four, a group of 40 dedicated annotators were assembled and supervised.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [52.954681396484375, 116.45683288574219, 294.3648681640625, 158.3203887939453], "page": 4, "span": [0, 231], "__ref_s3_data": null}]}, {"text": "Phase 1: Data selection and preparation. Our inclusion criteria for documents were described in Section 3. A large effort went into ensuring that all documents are free to use. The data sources", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.368797302246094, 83.57982635498047, 295.5584411621094, 114.14925384521484], "page": 4, "span": [0, 193], "__ref_s3_data": null}]}, {"text": "include publication repositories such as arXiv$^{3}$, government offices, company websites as well as data directory services for financial reports and patents. Scanned documents were excluded wherever possible because they can be rotated or skewed. This would not allow us to perform annotation with rectangular bounding-boxes and therefore complicate the annotation process.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.2582702636719, 416.48919677734375, 559.1853637695312, 481.0997619628906], "page": 4, "span": [0, 376], "__ref_s3_data": null}]}, {"text": "Preparation work included uploading and parsing the sourced PDF documents in the Corpus Conversion Service (CCS) [22], a cloud-native platform which provides a visual annotation interface and allows for dataset inspection and analysis. The annotation interface of CCS is shown in Figure 3. The desired balance of pages between the different document categories was achieved by selective subsampling of pages with certain desired properties. For example, we made sure to include the title page of each document and bias the remaining page selection to those with figures or tables. The latter was achieved by leveraging pre-trained object detection models from PubLayNet, which helped us estimate how many figures and tables a given page contains.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.0777587890625, 284.9187316894531, 559.7130737304688, 415.02398681640625], "page": 4, "span": [0, 746], "__ref_s3_data": null}]}, {"text": "Phase 2: Label selection and guideline. We reviewed the collected documents and identified the most common structural features they exhibit. This was achieved by identifying recurrent layout elements and lead us to the definition of 11 distinct class labels. These 11 class labels are Caption , Footnote , Formula , List-item , Pagefooter , Page-header , Picture , Section-header , Table , Text , and Title . Critical factors that were considered for the choice of these class labels were (1) the overall occurrence of the label, (2) the specificity of the label, (3) recognisability on a single page (i.e. no need for context from previous or next page) and (4) overall coverage of the page. 
Specificity ensures that the choice of label is not ambiguous, while coverage ensures that all meaningful items on a page can be annotated. We refrained from class labels that are very specific to a document category, such as Abstract in the Scientific Articles category. We also avoided class labels that are tightly linked to the semantics of the text. Labels such as Author and Affiliation , as seen in DocBank, are often only distinguishable by discriminating on", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [316.9024963378906, 98.9438247680664, 559.7176513671875, 283.8972473144531], "page": 4, "span": [0, 1159], "__ref_s3_data": null}]}, {"text": "$^{3}$https://arxiv.org/", "type": "footnote", "name": "Footnote", "font": null, "prov": [{"bbox": [317.7030029296875, 82.5821304321289, 369.40142822265625, 90.54422760009766], "page": 4, "span": [0, 24], "__ref_s3_data": null}]}, {"text": "DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [53.456207275390625, 723.0143432617188, 347.07373046875, 732.0245361328125], "page": 5, "span": [0, 71], "__ref_s3_data": null}]}, {"text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [365.2621765136719, 723.0404663085938, 558.9374389648438, 731.9317626953125], "page": 5, "span": [0, 48], "__ref_s3_data": null}]}, {"text": "the textual content of an element, which goes beyond visual layout recognition, in particular outside the Scientific Articles category.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.24338912963867, 684.8170166015625, 294.04541015625, 705.5283813476562], "page": 5, "span": [0, 135], "__ref_s3_data": null}]}, {"text": "At first sight, the task of visual document-layout interpretation appears intuitive enough to obtain plausible annotations in most cases. However, during early trial-runs in the core team, we observed many cases in which annotators use different annotation styles, especially for documents with challenging layouts. For example, if a figure is presented with subfigures, one annotator might draw a single figure bounding-box, while another might annotate each subfigure separately. The same applies for lists, where one might annotate all list items in one block or each list item separately. In essence, we observed that challenging layouts would be annotated in different but plausible ways. To illustrate this, we show in Figure 4 multiple examples of plausible but inconsistent annotations on the same pages.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.124725341796875, 542.8159790039062, 295.5592346191406, 683.8748168945312], "page": 5, "span": [0, 812], "__ref_s3_data": null}]}, {"text": "Obviously, this inconsistency in annotations is not desirable for datasets which are intended to be used for model training. To minimise these inconsistencies, we created a detailed annotation guideline. While perfect consistency across 40 annotation staff members is clearly not possible to achieve, we saw a huge improvement in annotation consistency after the introduction of our annotation guideline. 
A few selected, non-trivial highlights of the guideline are:", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.339271545410156, 455.16583251953125, 295.56005859375, 541.1383666992188], "page": 5, "span": [0, 465], "__ref_s3_data": null}]}, {"text": "(1) Every list-item is an individual object instance with class label List-item . This definition is different from PubLayNet and DocBank, where all list-items are grouped together into one List object.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [64.39098358154297, 402.13092041015625, 294.42474365234375, 444.29510498046875], "page": 5, "span": [0, 202], "__ref_s3_data": null}]}, {"text": "(2) A List-item is a paragraph with hanging indentation. Singleline elements can qualify as List-item if the neighbour elements expose hanging indentation. Bullet or enumeration symbols are not a requirement.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [64.31100463867188, 358.39984130859375, 295.563720703125, 400.2758483886719], "page": 5, "span": [0, 208], "__ref_s3_data": null}]}, {"text": "(3) For every Caption , there must be exactly one corresponding Picture or Table .", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [64.26787567138672, 336.4728698730469, 294.60943603515625, 356.2404479980469], "page": 5, "span": [0, 82], "__ref_s3_data": null}]}, {"text": "(4) Connected sub-pictures are grouped together in one Picture object.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [64.2632064819336, 314.5648193359375, 294.7487487792969, 334.179443359375], "page": 5, "span": [0, 70], "__ref_s3_data": null}]}, {"text": "(5) Formula numbers are included in a Formula object.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [63.9930305480957, 303.59686279296875, 264.5057067871094, 312.8252868652344], "page": 5, "span": [0, 53], "__ref_s3_data": null}]}, {"text": "(6) Emphasised text (e.g. in italic or bold) at the beginning of a paragraph is not considered a Section-header , unless it appears exclusively on its own line.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [64.07823181152344, 270.048095703125, 295.0240783691406, 301.5160827636719], "page": 5, "span": [0, 160], "__ref_s3_data": null}]}, {"text": "The complete annotation guideline is over 100 pages long and a detailed description is obviously out of scope for this paper. Nevertheless, it will be made publicly available alongside with DocLayNet for future reference.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [52.994422912597656, 217.798828125, 295.5625305175781, 259.6097106933594], "page": 5, "span": [0, 221], "__ref_s3_data": null}]}, {"text": "Phase 3: Training. After a first trial with a small group of people, we realised that providing the annotation guideline and a set of random practice pages did not yield the desired quality level for layout annotation. Therefore we prepared a subset of pages with two different complexity levels, each with a practice and an exam part. 974 pages were reference-annotated by one proficient core team member. Annotation staff were then given the task to annotate the same subsets (blinded from the reference). By comparing the annotations of each staff member with the reference annotations, we could quantify how closely their annotations matched the reference. 
Only after passing two exam levels with high annotation quality, staff were admitted into the production phase. Practice iterations", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.26631546020508, 86.24749755859375, 295.562255859375, 215.95584106445312], "page": 5, "span": [0, 792], "__ref_s3_data": null}]}, {"text": "Figure 4: Examples of plausible annotation alternatives for the same page. Criteria in our annotation guideline can resolve cases A to C, while the case D remains ambiguous.", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [316.9992980957031, 287.86785888671875, 559.8057861328125, 318.7776794433594], "page": 5, "span": [0, 173], "__ref_s3_data": null}]}, {"name": "Picture", "type": "figure", "$ref": "#/figures/3"}, {"text": "were carried out over a timeframe of 12 weeks, after which 8 of the 40 initially allocated annotators did not pass the bar.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [316.8349914550781, 247.1688232421875, 558.204345703125, 266.81207275390625], "page": 5, "span": [0, 123], "__ref_s3_data": null}]}, {"text": "Phase 4: Production annotation. The previously selected 80K pages were annotated with the defined 11 class labels by 32 annotators. This production phase took around three months to complete. All annotations were created online through CCS, which visualises the programmatic PDF text-cells as an overlay on the page. The page annotation are obtained by drawing rectangular bounding-boxes, as shown in Figure 3. With regard to the annotation practices, we implemented a few constraints and capabilities on the tooling level. First, we only allow non-overlapping, vertically oriented, rectangular boxes. For the large majority of documents, this constraint was sufficient and it speeds up the annotation considerably in comparison with arbitrary segmentation shapes. Second, annotator staff were not able to see each other's annotations. This was enforced by design to avoid any bias in the annotation, which could skew the numbers of the inter-annotator agreement (see Table 1). We wanted", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.00592041015625, 82.7375717163086, 559.7149047851562, 245.28392028808594], "page": 5, "span": [0, 987], "__ref_s3_data": null}]}, {"text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA Birgit Pfitzmann, Christoph Auer, Michele Dolfi, Ahmed S. Nassar, and Peter Staar", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [53.30706024169922, 722.92333984375, 558.4274291992188, 732.1127319335938], "page": 6, "span": [0, 130], "__ref_s3_data": null}]}, {"text": "Table 2: Prediction performance (mAP@0.5-0.95) of object detection networks on DocLayNet test set. The MRCNN (Mask R-CNN) and FRCNN (Faster R-CNN) models with ResNet-50 or ResNet-101 backbone were trained based on the network architectures from the detectron2 model zoo (Mask R-CNN R50, R101-FPN 3x, Faster R-CNN R101-FPN 3x), with default configurations. The YOLO implementation utilized was YOLOv5x6 [13]. 
All models were initialised using pre-trained weights from the COCO 2017 dataset.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [52.78031539916992, 608.98291015625, 295.64874267578125, 705.8385620117188], "page": 6, "span": [0, 489], "__ref_s3_data": null}]}, {"name": "Table", "type": "table", "$ref": "#/tables/1"}, {"text": "to avoid this at any cost in order to have clear, unbiased baseline numbers for human document-layout annotation. Third, we introduced the feature of snapping boxes around text segments to obtain a pixel-accurate annotation and again reduce time and effort. The CCS annotation tool automatically shrinks every user-drawn box to the minimum bounding-box around the enclosed text-cells for all purely text-based segments, which excludes only Table and Picture . For the latter, we instructed annotation staff to minimise inclusion of surrounding whitespace while including all graphical lines. A downside of snapping boxes to enclosed text cells is that some wrongly parsed PDF pages cannot be annotated correctly and need to be skipped. Fourth, we established a way to flag pages as rejected for cases where no valid annotation according to the label guidelines could be achieved. Example cases for this would be PDF pages that render incorrectly or contain layouts that are impossible to capture with non-overlapping rectangles. Such rejected pages are not contained in the final dataset. With all these measures in place, experienced annotation staff managed to annotate a single page in a typical timeframe of 20s to 60s, depending on its complexity.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.25688552856445, 214.2948760986328, 295.5561218261719, 421.4337158203125], "page": 6, "span": [0, 1252], "__ref_s3_data": null}]}, {"text": "5 EXPERIMENTS", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [53.62337875366211, 193.5609893798828, 147.4853515625, 203.87008666992188], "page": 6, "span": [0, 13], "__ref_s3_data": null}]}, {"text": "The primary goal of DocLayNet is to obtain high-quality ML models capable of accurate document-layout analysis on a wide variety of challenging layouts. As discussed in Section 2, object detection models are currently the easiest to use, due to the standardisation of ground-truth data in COCO format [16] and the availability of general frameworks such as detectron2 [17]. Furthermore, baseline numbers in PubLayNet and DocBank were obtained using standard object detection models such as Mask R-CNN and Faster R-CNN. As such, we will relate to these object detection methods in this", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.076290130615234, 82.4822006225586, 295.4281005859375, 179.65382385253906], "page": 6, "span": [0, 584], "__ref_s3_data": null}]}, {"text": "Figure 5: Prediction performance (mAP@0.5-0.95) of a Mask R-CNN network with ResNet50 backbone trained on increasing fractions of the DocLayNet dataset. 
The learning curve flattens around the 80% mark, indicating that increasing the size of the DocLayNet dataset with similar data will not yield significantly better predictions.", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [317.10931396484375, 449.6510009765625, 559.8057861328125, 513.7953491210938], "page": 6, "span": [0, 329], "__ref_s3_data": null}]}, {"name": "Picture", "type": "figure", "$ref": "#/figures/4"}, {"text": "paper and leave the detailed evaluation of more recent methods mentioned in Section 2 for future work.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.2011413574219, 388.6548156738281, 558.2041625976562, 408.8042297363281], "page": 6, "span": [0, 102], "__ref_s3_data": null}]}, {"text": "In this section, we will present several aspects related to the performance of object detection models on DocLayNet. Similarly as in PubLayNet, we will evaluate the quality of their predictions using mean average precision (mAP) with 10 overlaps that range from 0.5 to 0.95 in steps of 0.05 (mAP@0.5-0.95). These scores are computed by leveraging the evaluation code provided by the COCO API [16].", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.0830078125, 311.45587158203125, 558.4364013671875, 386.632568359375], "page": 6, "span": [0, 397], "__ref_s3_data": null}]}, {"text": "Baselines for Object Detection", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [317.1941223144531, 284.5037841796875, 466.8532409667969, 295.42913818359375], "page": 6, "span": [0, 30], "__ref_s3_data": null}]}, {"text": "In Table 2, we present baseline experiments (given in mAP) on Mask R-CNN [12], Faster R-CNN [11], and YOLOv5 [13]. Both training and evaluation were performed on RGB images with dimensions of 1025 \u00d7 1025 pixels. For training, we only used one annotation in case of redundantly annotated pages. As one can observe, the variation in mAP between the models is rather low, but overall between 6 and 10% lower than the mAP computed from the pairwise human annotations on triple-annotated pages. This gives a good indication that the DocLayNet dataset poses a worthwhile challenge for the research community to close the gap between human recognition and ML approaches. It is interesting to see that Mask R-CNN and Faster R-CNN produce very comparable mAP scores, indicating that pixel-based image segmentation derived from bounding-boxes does not help to obtain better predictions. On the other hand, the more recent Yolov5x model does very well and even out-performs humans on selected labels such as Text , Table and Picture . 
This is not entirely surprising, as Text , Table and Picture are abundant and the most visually distinctive in a document.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.0144348144531, 85.2998275756836, 558.7822875976562, 280.8944396972656], "page": 6, "span": [0, 1146], "__ref_s3_data": null}]}, {"text": "DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [53.35094451904297, 722.9555053710938, 347.0172424316406, 732.038818359375], "page": 7, "span": [0, 71], "__ref_s3_data": null}]}, {"text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [365.1936950683594, 723.0802001953125, 558.7797241210938, 731.8773803710938], "page": 7, "span": [0, 48], "__ref_s3_data": null}]}, {"text": "Table 3: Performance of a Mask R-CNN R50 network in mAP@0.5-0.95 scores trained on DocLayNet with different class label sets. The reduced label sets were obtained by either down-mapping or dropping labels.", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [52.8690299987793, 663.3739624023438, 295.6486511230469, 705.8510131835938], "page": 7, "span": [0, 205], "__ref_s3_data": null}]}, {"name": "Table", "type": "table", "$ref": "#/tables/2"}, {"text": "Learning Curve", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [53.446834564208984, 461.592041015625, 131.05624389648438, 472.6955871582031], "page": 7, "span": [0, 14], "__ref_s3_data": null}]}, {"text": "One of the fundamental questions related to any dataset is if it is \"large enough\". To answer this question for DocLayNet, we performed a data ablation study in which we evaluated a Mask R-CNN model trained on increasing fractions of the DocLayNet dataset. As can be seen in Figure 5, the mAP score rises sharply in the beginning and eventually levels out. To estimate the error-bar on the metrics, we ran the training five times on the entire data-set. This resulted in a 1% error-bar, depicted by the shaded area in Figure 5. In the inset of Figure 5, we show the exact same data-points, but with a logarithmic scale on the x-axis. As is expected, the mAP score increases linearly as a function of the data-size in the inset. The curve ultimately flattens out between the 80% and 100% mark, with the 80% mark falling within the error-bars of the 100% mark. This provides a good indication that the model would not improve significantly by yet increasing the data size. Rather, it would probably benefit more from improved data consistency (as discussed in Section 3), data augmentation methods [23], or the addition of more document categories and styles.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [52.78499984741211, 262.38037109375, 295.558349609375, 457.72955322265625], "page": 7, "span": [0, 1157], "__ref_s3_data": null}]}, {"text": "Impact of Class Labels", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [53.37664794921875, 239.1809844970703, 164.3289794921875, 250.044677734375], "page": 7, "span": [0, 22], "__ref_s3_data": null}]}, {"text": "The choice and number of labels can have a significant effect on the overall model performance. Since PubLayNet, DocBank and DocLayNet all have different label sets, it is of particular interest to understand and quantify this influence of the label set on the model performance. 
We investigate this by either down-mapping labels into more common ones (e.g. Caption \u2192 Text ) or excluding them from the annotations entirely. Furthermore, it must be stressed that all mappings and exclusions were performed on the data before model training. In Table 3, we present the mAP scores for a Mask R-CNN R50 network on different label sets. Where a label is down-mapped, we show its corresponding label, otherwise it was excluded. We present three different label sets, with 6, 5 and 4 different labels respectively. The set of 5 labels contains the same labels as PubLayNet. However, due to the different definition of", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.06760787963867, 83.39567565917969, 295.5567932128906, 235.12689208984375], "page": 7, "span": [0, 910], "__ref_s3_data": null}]}, {"text": "Table 4: Performance of a Mask R-CNN R50 network with document-wise and page-wise split for different label sets. Naive page-wise split will result in GLYPH 10% point improvement.", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [316.9989929199219, 663.7767944335938, 559.8068237304688, 705.6134643554688], "page": 7, "span": [0, 189], "__ref_s3_data": null}]}, {"name": "Table", "type": "table", "$ref": "#/tables/3"}, {"text": "lists in PubLayNet (grouped list-items) versus DocLayNet (separate list-items), the label set of size 4 is the closest to PubLayNet, in the assumption that the List is down-mapped to Text in PubLayNet. The results in Table 3 show that the prediction accuracy on the remaining class labels does not change significantly when other classes are merged into them. The overall macro-average improves by around 5%, in particular when Page-footer and Page-header are excluded.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.03326416015625, 375.50982666015625, 559.5849609375, 460.6855163574219], "page": 7, "span": [0, 469], "__ref_s3_data": null}]}, {"text": "Impact of Document Split in Train and Test Set", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [317.4661865234375, 351.4896545410156, 549.860595703125, 362.8900451660156], "page": 7, "span": [0, 46], "__ref_s3_data": null}]}, {"text": "Many documents in DocLayNet have a unique styling. In order to avoid overfitting on a particular style, we have split the train-, test- and validation-sets of DocLayNet on document boundaries, i.e. every document contributes pages to only one set. To the best of our knowledge, this was not considered in PubLayNet or DocBank. To quantify how this affects model performance, we trained and evaluated a Mask R-CNN R50 model on a modified dataset version. Here, the train-, test- and validation-sets were obtained by a randomised draw over the individual pages. As can be seen in Table 4, the difference in model performance is surprisingly large: pagewise splitting gains \u02dc 10% in mAP over the document-wise splitting. 
Thus, random page-wise splitting of DocLayNet can easily lead to accidental overestimation of model performance and should be avoided.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [316.9546813964844, 196.5628204345703, 559.7138061523438, 348.10198974609375], "page": 7, "span": [0, 852], "__ref_s3_data": null}]}, {"text": "Dataset Comparison", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [317.3337707519531, 173.20875549316406, 418.5477600097656, 183.94322204589844], "page": 7, "span": [0, 18], "__ref_s3_data": null}]}, {"text": "Throughout this paper, we claim that DocLayNet's wider variety of document layouts leads to more robust layout detection models. In Table 5, we provide evidence for that. We trained models on each of the available datasets (PubLayNet, DocBank and DocLayNet) and evaluated them on the test sets of the other datasets. Due to the different label sets and annotation styles, a direct comparison is not possible. Hence, we focussed on the common labels among the datasets. Between PubLayNet and DocLayNet, these are Picture ,", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [316.7283935546875, 83.24566650390625, 559.1881713867188, 168.86700439453125], "page": 7, "span": [0, 521], "__ref_s3_data": null}]}, {"text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA Birgit Pfitzmann, Christoph Auer, Michele Dolfi, Ahmed S. Nassar, and Peter Staar", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [53.288330078125, 722.9171142578125, 558.4634399414062, 732.134033203125], "page": 8, "span": [0, 130], "__ref_s3_data": null}]}, {"text": "Table 5: Prediction Performance (mAP@0.5-0.95) of a Mask R-CNN R50 network across the PubLayNet, DocBank & DocLayNet data-sets. By evaluating on common label classes of each dataset, we observe that the DocLayNet-trained model has much less pronounced variations in performance across all datasets.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [52.89757537841797, 641.85888671875, 295.648681640625, 705.7824096679688], "page": 8, "span": [0, 298], "__ref_s3_data": null}]}, {"name": "Table", "type": "table", "$ref": "#/tables/4"}, {"text": "Section-header , Table and Text . Before training, we either mapped or excluded DocLayNet's other labels as specified in table 3, and also PubLayNet's List to Text . Note that the different clustering of lists (by list-element vs. whole list objects) naturally decreases the mAP score for Text .", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.279537200927734, 348.85986328125, 294.6396789550781, 401.5162658691406], "page": 8, "span": [0, 295], "__ref_s3_data": null}]}, {"text": "For comparison of DocBank with DocLayNet, we trained only on Picture and Table clusters of each dataset. We had to exclude Text because successive paragraphs are often grouped together into a single object in DocBank. This paragraph grouping is incompatible with the individual paragraphs of DocLayNet. As can be seen in Table 5, DocLayNet trained models yield better performance compared to the previous datasets. It is noteworthy that the models trained on PubLayNet and DocBank perform very well on their own test set, but have a much lower performance on the foreign datasets. While this also applies to DocLayNet, the difference is far less pronounced. 
Thus we conclude that DocLayNet trained models are overall more robust and will produce better results for challenging, unseen layouts.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.04817581176758, 205.98951721191406, 295.55908203125, 346.9607849121094], "page": 8, "span": [0, 793], "__ref_s3_data": null}]}, {"text": "Example Predictions", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [53.05388259887695, 176.33340454101562, 156.02235412597656, 187.29098510742188], "page": 8, "span": [0, 19], "__ref_s3_data": null}]}, {"text": "To conclude this section, we illustrate the quality of layout predictions one can expect from DocLayNet-trained models by providing a selection of examples without any further post-processing applied. Figure 6 shows selected layout predictions on pages from the test-set of DocLayNet. Results look decent in general across document categories, however one can also observe mistakes such as overlapping clusters of different classes, or entirely missing boxes due to low confidence.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [53.07720184326172, 86.64982604980469, 295.5584411621094, 172.26492309570312], "page": 8, "span": [0, 481], "__ref_s3_data": null}]}, {"text": "6 CONCLUSION", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [317.4961853027344, 695.8309936523438, 405.7296142578125, 706.4700317382812], "page": 8, "span": [0, 12], "__ref_s3_data": null}]}, {"text": "In this paper, we presented the DocLayNet dataset. It provides the document conversion and layout analysis research community a new and challenging dataset to improve and fine-tune novel ML methods on. In contrast to many other datasets, DocLayNet was created by human annotation in order to obtain reliable layout ground-truth on a wide variety of publication- and typesettingstyles. Including a large proportion of documents outside the scientific publishing domain adds significant value in this respect.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.0487976074219, 605.4117431640625, 559.7137451171875, 691.6207275390625], "page": 8, "span": [0, 507], "__ref_s3_data": null}]}, {"text": "From the dataset, we have derived on the one hand reference metrics for human performance on document-layout annotation (through double and triple annotations) and on the other hand evaluated the baseline performance of commonly used object detection methods. We also illustrated the impact of various dataset-related aspects on model performance through data-ablation experiments, both from a size and class-label perspective. 
Last but not least, we compared the accuracy of models trained on other public datasets and showed that DocLayNet trained models are more robust.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.03955078125, 506.7440185546875, 559.717041015625, 603.672607421875], "page": 8, "span": [0, 573], "__ref_s3_data": null}]}, {"text": "To date, there is still a significant gap between human and ML accuracy on the layout interpretation task, and we hope that this work will inspire the research community to close that gap.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [317.1865234375, 474.2935791015625, 558.6325073242188, 505.4895324707031], "page": 8, "span": [0, 188], "__ref_s3_data": null}]}, {"text": "REFERENCES", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [317.4455871582031, 446.5990295410156, 387.5806579589844, 457.4013366699219], "page": 8, "span": [0, 10], "__ref_s3_data": null}]}, {"text": "[1] Max G\u00f6bel, Tamir Hassan, Ermelinda Oro, and Giorgio Orsi. Icdar 2013 table competition. In 2013 12th International Conference on Document Analysis and Recognition , pages 1449-1453, 2013.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [320.5848693847656, 420.8371276855469, 559.0187377929688, 444.4063415527344], "page": 8, "span": [0, 191], "__ref_s3_data": null}]}, {"text": "[2] Christian Clausner, Apostolos Antonacopoulos, and Stefan Pletschacher. Icdar2017 competition on recognition of documents with complex layouts rdcl2017. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR) , volume 01, pages 1404-1410, 2017.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [320.76806640625, 388.9571228027344, 559.7276000976562, 420.2254333496094], "page": 8, "span": [0, 279], "__ref_s3_data": null}]}, {"text": "[3] Herv\u00e9 D\u00e9jean, Jean-Luc Meunier, Liangcai Gao, Yilun Huang, Yu Fang, Florian Kleber, and Eva-Maria Lang. ICDAR 2019 Competition on Table Detection and Recognition (cTDaR), April 2019. http://sac.founderit.com/.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [320.58111572265625, 364.88128662109375, 558.4269409179688, 388.028076171875], "page": 8, "span": [0, 213], "__ref_s3_data": null}]}, {"text": "[4] Antonio Jimeno Yepes, Peter Zhong, and Douglas Burdick. Competition on scientific literature parsing. In Proceedings of the International Conference on Document Analysis and Recognition , ICDAR, pages 605-617. LNCS 12824, SpringerVerlag, sep 2021.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [320.72210693359375, 333.173095703125, 559.3787231445312, 364.17962646484375], "page": 8, "span": [0, 251], "__ref_s3_data": null}]}, {"text": "[5] Logan Markewich, Hao Zhang, Yubin Xing, Navid Lambert-Shirzad, Jiang Zhexin, Roy Lee, Zhi Li, and Seok-Bum Ko. Segmentation for document layout analysis: not dead yet. International Journal on Document Analysis and Recognition (IJDAR) , pages 1-11, 01 2022.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [320.47723388671875, 300.9960021972656, 559.2555541992188, 332.2057800292969], "page": 8, "span": [0, 261], "__ref_s3_data": null}]}, {"text": "[6] Xu Zhong, Jianbin Tang, and Antonio Jimeno-Yepes. Publaynet: Largest dataset ever for document layout analysis. 
In Proceedings of the International Conference on Document Analysis and Recognition , ICDAR, pages 1015-1022, sep 2019.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [320.7210998535156, 277.3751220703125, 558.6044921875, 300.1542053222656], "page": 8, "span": [0, 235], "__ref_s3_data": null}]}, {"text": "[7] Minghao Li, Yiheng Xu, Lei Cui, Shaohan Huang, Furu Wei, Zhoujun Li, and Ming Zhou. Docbank: A benchmark dataset for document layout analysis. In Proceedings of the 28th International Conference on Computational Linguistics , COLING, pages 949-960. International Committee on Computational Linguistics, dec 2020.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [320.7048034667969, 237.53111267089844, 559.0962524414062, 276.57550048828125], "page": 8, "span": [0, 316], "__ref_s3_data": null}]}, {"text": "[8] Riaz Ahmad, Muhammad Tanvir Afzal, and M. Qadir. Information extraction from pdf sources based on rule-based system using integrated formats. In SemWebEval@ESWC , 2016.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [320.6175537109375, 213.6141357421875, 558.9022216796875, 236.84490966796875], "page": 8, "span": [0, 172], "__ref_s3_data": null}]}, {"text": "[9] Ross B. Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition , CVPR, pages 580-587. IEEE Computer Society, jun 2014.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [320.695556640625, 181.74110412597656, 559.2744750976562, 212.77767944335938], "page": 8, "span": [0, 271], "__ref_s3_data": null}]}, {"text": "[10] Ross B. Girshick. Fast R-CNN. In 2015 IEEE International Conference on Computer Vision , ICCV, pages 1440-1448. IEEE Computer Society, dec 2015.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [317.74908447265625, 165.5072479248047, 558.8585205078125, 181.0753173828125], "page": 8, "span": [0, 149], "__ref_s3_data": null}]}, {"text": "[11] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence , 39(6):1137-1149, 2017.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [317.71527099609375, 141.8831329345703, 558.4170532226562, 164.63047790527344], "page": 8, "span": [0, 227], "__ref_s3_data": null}]}, {"text": "[12] Kaiming He, Georgia Gkioxari, Piotr Doll\u00e1r, and Ross B. Girshick. Mask R-CNN. In IEEE International Conference on Computer Vision , ICCV, pages 2980-2988. 
IEEE Computer Society, Oct 2017.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [317.5010986328125, 117.60646057128906, 559.278076171875, 141.50643920898438], "page": 8, "span": [0, 192], "__ref_s3_data": null}]}, {"text": "[13] Glenn Jocher, Alex Stoken, Ayush Chaurasia, Jirka Borovec, NanoCode012, TaoXie, Yonghye Kwon, Kalen Michael, Liu Changyu, Jiacong Fang, Abhiram V, Laughing, tkianai, yxNONG, Piotr Skalski, Adam Hogan, Jebastin Nadar, imyhxy, Lorenzo Mammana, Alex Wang, Cristi Fati, Diego Montes, Jan Hajek, Laurentiu", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [317.4837341308594, 86.09910583496094, 559.0487670898438, 116.94155883789062], "page": 8, "span": [0, 305], "__ref_s3_data": null}]}, {"text": "DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [53.55940246582031, 722.9329223632812, 347.0838623046875, 731.9924926757812], "page": 9, "span": [0, 71], "__ref_s3_data": null}]}, {"text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [365.1275329589844, 723.0497436523438, 558.905029296875, 731.96435546875], "page": 9, "span": [0, 48], "__ref_s3_data": null}]}, {"text": "Figure 6: Example layout predictions on selected pages from the DocLayNet test-set. (A, D) exhibit favourable results on coloured backgrounds. (B, C) show accurate list-item and paragraph differentiation despite densely-spaced lines. (E) demonstrates good table and figure distinction. (F) shows predictions on a Chinese patent with multiple overlaps, label confusion and missing boxes.", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [53.39582824707031, 285.65704345703125, 559.807861328125, 328.056396484375], "page": 9, "span": [0, 386], "__ref_s3_data": null}]}, {"name": "Picture", "type": "figure", "$ref": "#/figures/5"}, {"text": "Diaconu, Mai Thanh Minh, Marc, albinxavi, fatih, oleg, and wanghao yang. ultralytics/yolov5: v6.0 - yolov5n nano models, roboflow integration, tensorflow export, opencv dnn support, October 2021.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [68.69137573242188, 242.22409057617188, 295.22406005859375, 265.4314270019531], "page": 9, "span": [0, 195], "__ref_s3_data": null}]}, {"text": "[14] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. CoRR , abs/2005.12872, 2020.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [53.56020736694336, 218.56314086914062, 295.12176513671875, 241.63282775878906], "page": 9, "span": [0, 190], "__ref_s3_data": null}]}, {"text": "[15] Mingxing Tan, Ruoming Pang, and Quoc V. Le. Efficientdet: Scalable and efficient object detection. CoRR , abs/1911.09070, 2019.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [53.61275863647461, 202.62213134765625, 294.3653869628906, 217.57615661621094], "page": 9, "span": [0, 132], "__ref_s3_data": null}]}, {"text": "[16] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C. Lawrence Zitnick. 
Microsoft COCO: common objects in context, 2014.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [53.668941497802734, 178.71910095214844, 295.2226257324219, 201.57443237304688], "page": 9, "span": [0, 219], "__ref_s3_data": null}]}, {"text": "[17] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2, 2019.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [53.54263687133789, 162.77911376953125, 295.1200866699219, 178.3345947265625], "page": 9, "span": [0, 100], "__ref_s3_data": null}]}, {"text": "[18] Nikolaos Livathinos, Cesar Berrospi, Maksym Lysak, Viktor Kuropiatnyk, Ahmed Nassar, Andre Carvalho, Michele Dolfi, Christoph Auer, Kasper Dinkla, and Peter W. J. Staar. Robust pdf document conversion using recurrent neural networks. In Proceedings of the 35th Conference on Artificial Intelligence , AAAI, pages 1513715145, feb 2021.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [53.569610595703125, 122.92810821533203, 294.8847351074219, 162.23497009277344], "page": 9, "span": [0, 339], "__ref_s3_data": null}]}, {"text": "[19] Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. Layoutlm: Pre-training of text and layout for document image understanding. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , KDD, pages 1192-1200, New York, USA, 2020. Association for Computing Machinery.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [53.4610595703125, 82.67352294921875, 295.22174072265625, 122.19474029541016], "page": 9, "span": [0, 336], "__ref_s3_data": null}]}, {"text": "[20] Shoubin Li, Xuyan Ma, Shuaiqun Pan, Jun Hu, Lin Shi, and Qing Wang. Vtlayout: Fusion of visual and text features for document layout analysis, 2021.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [317.6278076171875, 249.62921142578125, 559.0263671875, 265.5798645019531], "page": 9, "span": [0, 153], "__ref_s3_data": null}]}, {"text": "[21] Peng Zhang, Can Li, Liang Qiao, Zhanzhan Cheng, Shiliang Pu, Yi Niu, and Fei Wu. Vsr: A unified framework for document layout analysis combining vision, semantics and relations, 2021.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [317.53033447265625, 226.54010009765625, 559.0158081054688, 249.28826904296875], "page": 9, "span": [0, 188], "__ref_s3_data": null}]}, {"text": "[22] Peter W J Staar, Michele Dolfi, Christoph Auer, and Costas Bekas. Corpus conversion service: A machine learning platform to ingest documents at scale. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , KDD, pages 774-782. ACM, 2018.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [317.6616516113281, 194.28546142578125, 559.275390625, 225.54457092285156], "page": 9, "span": [0, 290], "__ref_s3_data": null}]}, {"text": "[23] Connor Shorten and Taghi M. Khoshgoftaar. A survey on image data augmentation for deep learning. 
Journal of Big Data , 6(1):60, 2019.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [317.65606689453125, 178.71212768554688, 559.3782958984375, 193.30506896972656], "page": 9, "span": [0, 138], "__ref_s3_data": null}]}], "figures": [{"bounding-box": null, "prov": [{"bbox": [324.3027038574219, 266.1221618652344, 554.91796875, 543.5838623046875], "page": 1, "span": [0, 84], "__ref_s3_data": null}], "text": "Figure 1: Four examples of complex page layouts across different document categories", "type": "figure"}, {"bounding-box": null, "prov": [{"bbox": [88.16680145263672, 569.726806640625, 264.2818298339844, 698.8894653320312], "page": 3, "span": [0, 69], "__ref_s3_data": null}], "text": "Figure 2: Distribution of DocLayNet pages across document categories.", "type": "figure"}, {"bounding-box": null, "prov": [{"bbox": [53.179771423339844, 250.80191040039062, 295.3565368652344, 481.6382141113281], "page": 4, "span": [0, 281], "__ref_s3_data": null}], "text": "Figure 3: Corpus Conversion Service annotation user interface. The PDF page is shown in the background, with overlaid text-cells (in darker shades). The annotation boxes can be drawn by dragging a rectangle over each segment with the respective label from the palette on the right.", "type": "figure"}, {"bounding-box": null, "prov": [{"bbox": [315.8857116699219, 331.43994140625, 559.6527709960938, 707.0224609375], "page": 5, "span": [0, 173], "__ref_s3_data": null}], "text": "Figure 4: Examples of plausible annotation alternatives for the same page. Criteria in our annotation guideline can resolve cases A to C, while the case D remains ambiguous.", "type": "figure"}, {"bounding-box": null, "prov": [{"bbox": [322.7086486816406, 531.372314453125, 553.7246704101562, 701.6975708007812], "page": 6, "span": [0, 329], "__ref_s3_data": null}], "text": "Figure 5: Prediction performance (mAP@0.5-0.95) of a Mask R-CNN network with ResNet50 backbone trained on increasing fractions of the DocLayNet dataset. The learning curve flattens around the 80% mark, indicating that increasing the size of the DocLayNet dataset with similar data will not yield significantly better predictions.", "type": "figure"}, {"bounding-box": null, "prov": [{"bbox": [53.59891891479492, 343.73516845703125, 554.9424438476562, 708.443115234375], "page": 9, "span": [0, 386], "__ref_s3_data": null}], "text": "Figure 6: Example layout predictions on selected pages from the DocLayNet test-set. (A, D) exhibit favourable results on coloured backgrounds. (B, C) show accurate list-item and paragraph differentiation despite densely-spaced lines. (E) demonstrates good table and figure distinction. (F) shows predictions on a Chinese patent with multiple overlaps, label confusion and missing boxes.", "type": "figure"}], "tables": [{"bounding-box": null, "prov": [{"bbox": [98.96420288085938, 498.30108642578125, 512.7739868164062, 654.1231689453125], "page": 4, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 1: DocLayNet dataset overview. Along with the frequency of each class label, we present the relative occurrence (as % of row \"Total\") in the train, test and validation sets. 
The inter-annotator agreement is computed as the mAP@0.5-0.95 metric between pairwise annotations from the triple-annotated pages, from which we obtain accuracy ranges.", "type": "table", "#-cols": 12, "#-rows": 14, "data": [[{"bbox": null, "spans": [[0, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": null, "spans": [[0, 1]], "text": "", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [233.94400024414062, 643.40185546875, 270.042724609375, 651.7764892578125], "spans": [[0, 2], [0, 3], [0, 4]], "text": "% of Total", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [233.94400024414062, 643.40185546875, 270.042724609375, 651.7764892578125], "spans": [[0, 2], [0, 3], [0, 4]], "text": "% of Total", "type": "col_header", "col": 3, "col-header": false, "col-span": [2, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [233.94400024414062, 643.40185546875, 270.042724609375, 651.7764892578125], "spans": [[0, 2], [0, 3], [0, 4]], "text": "% of Total", "type": "col_header", "col": 4, "col-header": false, "col-span": [2, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [329.04998779296875, 643.40185546875, 483.39764404296875, 651.7764892578125], "spans": [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [0, 10], [0, 11]], "text": "triple inter-annotator mAP @ 0.5-0.95 (%)", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 12], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [329.04998779296875, 643.40185546875, 483.39764404296875, 651.7764892578125], "spans": [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [0, 10], [0, 11]], "text": "triple inter-annotator mAP @ 0.5-0.95 (%)", "type": "col_header", "col": 6, "col-header": false, "col-span": [5, 12], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [329.04998779296875, 643.40185546875, 483.39764404296875, 651.7764892578125], "spans": [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [0, 10], [0, 11]], "text": "triple inter-annotator mAP @ 0.5-0.95 (%)", "type": "col_header", "col": 7, "col-header": false, "col-span": [5, 12], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [329.04998779296875, 643.40185546875, 483.39764404296875, 651.7764892578125], "spans": [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [0, 10], [0, 11]], "text": "triple inter-annotator mAP @ 0.5-0.95 (%)", "type": "col_header", "col": 8, "col-header": false, "col-span": [5, 12], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [329.04998779296875, 643.40185546875, 483.39764404296875, 651.7764892578125], "spans": [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [0, 10], [0, 11]], "text": "triple inter-annotator mAP @ 0.5-0.95 (%)", "type": "col_header", "col": 9, "col-header": false, "col-span": [5, 12], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [329.04998779296875, 643.40185546875, 483.39764404296875, 651.7764892578125], "spans": [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [0, 10], [0, 11]], "text": "triple inter-annotator mAP @ 0.5-0.95 (%)", "type": "col_header", "col": 10, "col-header": false, "col-span": [5, 12], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [329.04998779296875, 643.40185546875, 483.39764404296875, 651.7764892578125], "spans": [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [0, 10], [0, 11]], "text": "triple 
inter-annotator mAP @ 0.5-0.95 (%)", "type": "col_header", "col": 11, "col-header": false, "col-span": [5, 12], "row": 0, "row-header": false, "row-span": [0, 1]}], [{"bbox": [104.82499694824219, 632.4428100585938, 141.7127685546875, 640.8174438476562], "spans": [[1, 0]], "text": "class label", "type": "col_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [175.94700622558594, 632.4428100585938, 198.7126922607422, 640.8174438476562], "spans": [[1, 1]], "text": "Count", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [213.7949981689453, 632.4428100585938, 233.69143676757812, 640.8174438476562], "spans": [[1, 2]], "text": "Train", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [249.37367248535156, 632.4428100585938, 264.5, 640.8174438476562], "spans": [[1, 3]], "text": "Test", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [283.5356750488281, 632.4428100585938, 295.3085632324219, 640.8174438476562], "spans": [[1, 4]], "text": "Val", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [314.0150146484375, 632.4428100585938, 324.9809265136719, 640.8174438476562], "spans": [[1, 5]], "text": "All", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 6], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [343.0123596191406, 632.4428100585938, 354.6507568359375, 640.8174438476562], "spans": [[1, 6]], "text": "Fin", "type": "col_header", "col": 6, "col-header": false, "col-span": [6, 7], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [367.84033203125, 632.4428100585938, 384.3205871582031, 640.8174438476562], "spans": [[1, 7]], "text": "Man", "type": "col_header", "col": 7, "col-header": false, "col-span": [7, 8], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [407.5435791015625, 632.4428100585938, 418.1597900390625, 640.8174438476562], "spans": [[1, 8]], "text": "Sci", "type": "col_header", "col": 8, "col-header": false, "col-span": [8, 9], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [432.2998046875, 632.4428100585938, 447.8296203613281, 640.8174438476562], "spans": [[1, 9]], "text": "Law", "type": "col_header", "col": 9, "col-header": false, "col-span": [9, 10], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [465.7265625, 632.4428100585938, 477.5084228515625, 640.8174438476562], "spans": [[1, 10]], "text": "Pat", "type": "col_header", "col": 10, "col-header": false, "col-span": [10, 11], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [493.52239990234375, 632.4428100585938, 507.17822265625, 640.8174438476562], "spans": [[1, 11]], "text": "Ten", "type": "col_header", "col": 11, "col-header": false, "col-span": [11, 12], "row": 1, "row-header": false, "row-span": [1, 2]}], [{"bbox": [104.82499694824219, 621.0858154296875, 134.01063537597656, 629.46044921875], "spans": [[2, 0]], "text": "Caption", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [177.86599731445312, 621.0858154296875, 198.71287536621094, 629.46044921875], "spans": [[2, 1]], "text": "22524", "type": "body", "col": 1, "col-header": false, "col-span": 
[1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [219.21099853515625, 621.0858154296875, 233.69174194335938, 629.46044921875], "spans": [[2, 2]], "text": "2.04", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [250.01956176757812, 621.0858154296875, 264.50030517578125, 629.46044921875], "spans": [[2, 3]], "text": "1.77", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [280.828125, 621.0858154296875, 295.3088684082031, 629.46044921875], "spans": [[2, 4]], "text": "2.32", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [305.27301025390625, 621.0858154296875, 324.9811706542969, 629.46044921875], "spans": [[2, 5]], "text": "84-89", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [334.9428405761719, 621.0858154296875, 354.6510009765625, 629.46044921875], "spans": [[2, 6]], "text": "40-61", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [364.6126708984375, 621.0858154296875, 384.3208312988281, 629.46044921875], "spans": [[2, 7]], "text": "86-92", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [398.4518737792969, 621.0858154296875, 418.1600341796875, 629.46044921875], "spans": [[2, 8]], "text": "94-99", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [428.1217041015625, 621.0858154296875, 447.8298645019531, 629.46044921875], "spans": [[2, 9]], "text": "95-99", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [457.8005065917969, 621.0858154296875, 477.5086669921875, 629.46044921875], "spans": [[2, 10]], "text": "69-78", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [495.32489013671875, 621.0858154296875, 507.178466796875, 629.46044921875], "spans": [[2, 11]], "text": "n/a", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [104.82499694824219, 610.1268310546875, 137.3282012939453, 618.50146484375], "spans": [[3, 0]], "text": "Footnote", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [182.03500366210938, 610.1268310546875, 198.71250915527344, 618.50146484375], "spans": [[3, 1]], "text": "6318", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [219.21099853515625, 610.1268310546875, 233.69174194335938, 618.50146484375], "spans": [[3, 2]], "text": "0.60", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [250.01956176757812, 610.1268310546875, 264.50030517578125, 618.50146484375], "spans": [[3, 3]], "text": "0.31", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [280.828125, 610.1268310546875, 295.3088684082031, 618.50146484375], "spans": [[3, 4]], "text": "0.58", 
"type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [305.27301025390625, 610.1268310546875, 324.9811706542969, 618.50146484375], "spans": [[3, 5]], "text": "83-91", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [342.7973937988281, 610.1268310546875, 354.6509704589844, 618.50146484375], "spans": [[3, 6]], "text": "n/a", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [371.8126525878906, 610.1268310546875, 384.3207702636719, 618.50146484375], "spans": [[3, 7]], "text": "100", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [398.4518127441406, 610.1268310546875, 418.15997314453125, 618.50146484375], "spans": [[3, 8]], "text": "62-88", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [428.12164306640625, 610.1268310546875, 447.8298034667969, 618.50146484375], "spans": [[3, 9]], "text": "85-94", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [465.6549987792969, 610.1268310546875, 477.5085754394531, 618.50146484375], "spans": [[3, 10]], "text": "n/a", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [487.4702453613281, 610.1268310546875, 507.17840576171875, 618.50146484375], "spans": [[3, 11]], "text": "82-97", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": [104.82499694824219, 599.1678466796875, 135.33766174316406, 607.54248046875], "spans": [[4, 0]], "text": "Formula", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [177.86599731445312, 599.1678466796875, 198.71287536621094, 607.54248046875], "spans": [[4, 1]], "text": "25027", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [219.21099853515625, 599.1678466796875, 233.69174194335938, 607.54248046875], "spans": [[4, 2]], "text": "2.25", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [250.01956176757812, 599.1678466796875, 264.50030517578125, 607.54248046875], "spans": [[4, 3]], "text": "1.90", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [280.828125, 599.1678466796875, 295.3088684082031, 607.54248046875], "spans": [[4, 4]], "text": "2.96", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [305.27301025390625, 599.1678466796875, 324.9811706542969, 607.54248046875], "spans": [[4, 5]], "text": "83-85", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [342.7973937988281, 599.1678466796875, 354.6509704589844, 607.54248046875], "spans": [[4, 6]], "text": "n/a", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [372.4671936035156, 599.1678466796875, 
384.3207702636719, 607.54248046875], "spans": [[4, 7]], "text": "n/a", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [398.4518127441406, 599.1678466796875, 418.15997314453125, 607.54248046875], "spans": [[4, 8]], "text": "84-87", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [428.12164306640625, 599.1678466796875, 447.8298034667969, 607.54248046875], "spans": [[4, 9]], "text": "86-96", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [465.6549987792969, 599.1678466796875, 477.5085754394531, 607.54248046875], "spans": [[4, 10]], "text": "n/a", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [495.3247985839844, 599.1678466796875, 507.1783752441406, 607.54248046875], "spans": [[4, 11]], "text": "n/a", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": [104.82499694824219, 588.2088012695312, 137.7047882080078, 596.5834350585938], "spans": [[5, 0]], "text": "List-item", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [173.69700622558594, 588.2088012695312, 198.7132568359375, 596.5834350585938], "spans": [[5, 1]], "text": "185660", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [215.04200744628906, 588.2088012695312, 233.69212341308594, 596.5834350585938], "spans": [[5, 2]], "text": "17.19", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [245.85055541992188, 588.2088012695312, 264.50067138671875, 596.5834350585938], "spans": [[5, 3]], "text": "13.34", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [276.65911865234375, 588.2088012695312, 295.3092346191406, 596.5834350585938], "spans": [[5, 4]], "text": "15.82", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [305.27301025390625, 588.2088012695312, 324.9811706542969, 596.5834350585938], "spans": [[5, 5]], "text": "87-88", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [334.9428405761719, 588.2088012695312, 354.6510009765625, 596.5834350585938], "spans": [[5, 6]], "text": "74-83", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [364.6126708984375, 588.2088012695312, 384.3208312988281, 596.5834350585938], "spans": [[5, 7]], "text": "90-92", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [398.4518737792969, 588.2088012695312, 418.1600341796875, 596.5834350585938], "spans": [[5, 8]], "text": "97-97", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [428.1217041015625, 588.2088012695312, 447.8298645019531, 596.5834350585938], "spans": [[5, 9]], "text": "81-85", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 
5, "row-header": false, "row-span": [5, 6]}, {"bbox": [457.8005065917969, 588.2088012695312, 477.5086669921875, 596.5834350585938], "spans": [[5, 10]], "text": "75-88", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [487.4703369140625, 588.2088012695312, 507.1784973144531, 596.5834350585938], "spans": [[5, 11]], "text": "93-95", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [104.82499694824219, 577.2498168945312, 147.3526153564453, 585.6244506835938], "spans": [[6, 0]], "text": "Page-footer", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [177.86599731445312, 577.2498168945312, 198.71287536621094, 585.6244506835938], "spans": [[6, 1]], "text": "70878", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [219.21099853515625, 577.2498168945312, 233.69174194335938, 585.6244506835938], "spans": [[6, 2]], "text": "6.51", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [250.01956176757812, 577.2498168945312, 264.50030517578125, 585.6244506835938], "spans": [[6, 3]], "text": "5.58", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [280.828125, 577.2498168945312, 295.3088684082031, 585.6244506835938], "spans": [[6, 4]], "text": "6.00", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [305.27301025390625, 577.2498168945312, 324.9811706542969, 585.6244506835938], "spans": [[6, 5]], "text": "93-94", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [334.9428405761719, 577.2498168945312, 354.6510009765625, 585.6244506835938], "spans": [[6, 6]], "text": "88-90", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [364.6126708984375, 577.2498168945312, 384.3208312988281, 585.6244506835938], "spans": [[6, 7]], "text": "95-96", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [405.6518859863281, 577.2498168945312, 418.1600036621094, 585.6244506835938], "spans": [[6, 8]], "text": "100", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [428.1216735839844, 577.2498168945312, 447.829833984375, 585.6244506835938], "spans": [[6, 9]], "text": "92-97", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [465.00048828125, 577.2498168945312, 477.50860595703125, 585.6244506835938], "spans": [[6, 10]], "text": "100", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [487.47027587890625, 577.2498168945312, 507.1784362792969, 585.6244506835938], "spans": [[6, 11]], "text": "96-98", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 6, "row-header": false, "row-span": [6, 7]}], [{"bbox": [104.82499694824219, 566.2908325195312, 150.10531616210938, 574.6654663085938], 
"spans": [[7, 0]], "text": "Page-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [177.86599731445312, 566.2908325195312, 198.71287536621094, 574.6654663085938], "spans": [[7, 1]], "text": "58022", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [219.21099853515625, 566.2908325195312, 233.69174194335938, 574.6654663085938], "spans": [[7, 2]], "text": "5.10", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [250.01956176757812, 566.2908325195312, 264.50030517578125, 574.6654663085938], "spans": [[7, 3]], "text": "6.70", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [280.828125, 566.2908325195312, 295.3088684082031, 574.6654663085938], "spans": [[7, 4]], "text": "5.06", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [305.27301025390625, 566.2908325195312, 324.9811706542969, 574.6654663085938], "spans": [[7, 5]], "text": "85-89", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [334.9428405761719, 566.2908325195312, 354.6510009765625, 574.6654663085938], "spans": [[7, 6]], "text": "66-76", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [364.6126708984375, 566.2908325195312, 384.3208312988281, 574.6654663085938], "spans": [[7, 7]], "text": "90-94", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [394.2825012207031, 566.2908325195312, 418.1600341796875, 574.6654663085938], "spans": [[7, 8]], "text": "98-100", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [428.1217041015625, 566.2908325195312, 447.8298645019531, 574.6654663085938], "spans": [[7, 9]], "text": "91-92", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [457.8005065917969, 566.2908325195312, 477.5086669921875, 574.6654663085938], "spans": [[7, 10]], "text": "97-99", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [487.4703369140625, 566.2908325195312, 507.1784973144531, 574.6654663085938], "spans": [[7, 11]], "text": "81-86", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 7, "row-header": false, "row-span": [7, 8]}], [{"bbox": [104.82499694824219, 555.3318481445312, 130.80963134765625, 563.7064819335938], "spans": [[8, 0]], "text": "Picture", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [177.86599731445312, 555.3318481445312, 198.71287536621094, 563.7064819335938], "spans": [[8, 1]], "text": "45976", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [219.21099853515625, 555.3318481445312, 233.69174194335938, 563.7064819335938], "spans": [[8, 2]], "text": "4.21", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 8, "row-header": 
false, "row-span": [8, 9]}, {"bbox": [250.01956176757812, 555.3318481445312, 264.50030517578125, 563.7064819335938], "spans": [[8, 3]], "text": "2.78", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [280.828125, 555.3318481445312, 295.3088684082031, 563.7064819335938], "spans": [[8, 4]], "text": "5.31", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [305.27301025390625, 555.3318481445312, 324.9811706542969, 563.7064819335938], "spans": [[8, 5]], "text": "69-71", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [334.9428405761719, 555.3318481445312, 354.6510009765625, 563.7064819335938], "spans": [[8, 6]], "text": "56-59", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [364.6126708984375, 555.3318481445312, 384.3208312988281, 563.7064819335938], "spans": [[8, 7]], "text": "82-86", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [398.4518737792969, 555.3318481445312, 418.1600341796875, 563.7064819335938], "spans": [[8, 8]], "text": "69-82", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [428.1217041015625, 555.3318481445312, 447.8298645019531, 563.7064819335938], "spans": [[8, 9]], "text": "80-95", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [457.8005065917969, 555.3318481445312, 477.5086669921875, 563.7064819335938], "spans": [[8, 10]], "text": "66-71", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [487.4703369140625, 555.3318481445312, 507.1784973144531, 563.7064819335938], "spans": [[8, 11]], "text": "59-76", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 8, "row-header": false, "row-span": [8, 9]}], [{"bbox": [104.82499694824219, 544.372802734375, 159.5648651123047, 552.7474365234375], "spans": [[9, 0]], "text": "Section-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [173.69700622558594, 544.372802734375, 198.7132568359375, 552.7474365234375], "spans": [[9, 1]], "text": "142884", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [215.04200744628906, 544.372802734375, 233.69212341308594, 552.7474365234375], "spans": [[9, 2]], "text": "12.60", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [245.85055541992188, 544.372802734375, 264.50067138671875, 552.7474365234375], "spans": [[9, 3]], "text": "15.77", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [276.65911865234375, 544.372802734375, 295.3092346191406, 552.7474365234375], "spans": [[9, 4]], "text": "12.85", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [305.27301025390625, 544.372802734375, 324.9811706542969, 552.7474365234375], "spans": [[9, 5]], 
"text": "83-84", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [334.9428405761719, 544.372802734375, 354.6510009765625, 552.7474365234375], "spans": [[9, 6]], "text": "76-81", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [364.6126708984375, 544.372802734375, 384.3208312988281, 552.7474365234375], "spans": [[9, 7]], "text": "90-92", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [398.4518737792969, 544.372802734375, 418.1600341796875, 552.7474365234375], "spans": [[9, 8]], "text": "94-95", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [428.1217041015625, 544.372802734375, 447.8298645019531, 552.7474365234375], "spans": [[9, 9]], "text": "87-94", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [457.8005065917969, 544.372802734375, 477.5086669921875, 552.7474365234375], "spans": [[9, 10]], "text": "69-73", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [487.4703369140625, 544.372802734375, 507.1784973144531, 552.7474365234375], "spans": [[9, 11]], "text": "78-86", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 9, "row-header": false, "row-span": [9, 10]}], [{"bbox": [104.82499694824219, 533.413818359375, 124.63176727294922, 541.7884521484375], "spans": [[10, 0]], "text": "Table", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [177.86599731445312, 533.413818359375, 198.71287536621094, 541.7884521484375], "spans": [[10, 1]], "text": "34733", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [219.21099853515625, 533.413818359375, 233.69174194335938, 541.7884521484375], "spans": [[10, 2]], "text": "3.20", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [250.01956176757812, 533.413818359375, 264.50030517578125, 541.7884521484375], "spans": [[10, 3]], "text": "2.27", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [280.828125, 533.413818359375, 295.3088684082031, 541.7884521484375], "spans": [[10, 4]], "text": "3.60", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [305.27301025390625, 533.413818359375, 324.9811706542969, 541.7884521484375], "spans": [[10, 5]], "text": "77-81", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [334.9428405761719, 533.413818359375, 354.6510009765625, 541.7884521484375], "spans": [[10, 6]], "text": "75-80", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [364.6126708984375, 533.413818359375, 384.3208312988281, 541.7884521484375], "spans": [[10, 7]], "text": "83-86", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 10, "row-header": false, "row-span": 
[10, 11]}, {"bbox": [398.4518737792969, 533.413818359375, 418.1600341796875, 541.7884521484375], "spans": [[10, 8]], "text": "98-99", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [428.1217041015625, 533.413818359375, 447.8298645019531, 541.7884521484375], "spans": [[10, 9]], "text": "58-80", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [457.8005065917969, 533.413818359375, 477.5086669921875, 541.7884521484375], "spans": [[10, 10]], "text": "79-84", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [487.4703369140625, 533.413818359375, 507.1784973144531, 541.7884521484375], "spans": [[10, 11]], "text": "70-85", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 10, "row-header": false, "row-span": [10, 11]}], [{"bbox": [104.82499694824219, 522.455810546875, 120.78518676757812, 530.8304443359375], "spans": [[11, 0]], "text": "Text", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [173.69700622558594, 522.455810546875, 198.7132568359375, 530.8304443359375], "spans": [[11, 1]], "text": "510377", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [215.04200744628906, 522.455810546875, 233.69212341308594, 530.8304443359375], "spans": [[11, 2]], "text": "45.82", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [245.85055541992188, 522.455810546875, 264.50067138671875, 530.8304443359375], "spans": [[11, 3]], "text": "49.28", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [276.65911865234375, 522.455810546875, 295.3092346191406, 530.8304443359375], "spans": [[11, 4]], "text": "45.00", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [305.27301025390625, 522.455810546875, 324.9811706542969, 530.8304443359375], "spans": [[11, 5]], "text": "84-86", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [334.9428405761719, 522.455810546875, 354.6510009765625, 530.8304443359375], "spans": [[11, 6]], "text": "81-86", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [364.6126708984375, 522.455810546875, 384.3208312988281, 530.8304443359375], "spans": [[11, 7]], "text": "88-93", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [398.4518737792969, 522.455810546875, 418.1600341796875, 530.8304443359375], "spans": [[11, 8]], "text": "89-93", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [428.1217041015625, 522.455810546875, 447.8298645019531, 530.8304443359375], "spans": [[11, 9]], "text": "87-92", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [457.8005065917969, 522.455810546875, 477.5086669921875, 530.8304443359375], 
"spans": [[11, 10]], "text": "71-79", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [487.4703369140625, 522.455810546875, 507.1784973144531, 530.8304443359375], "spans": [[11, 11]], "text": "87-95", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 11, "row-header": false, "row-span": [11, 12]}], [{"bbox": [104.82499694824219, 511.496826171875, 121.81632995605469, 519.8714599609375], "spans": [[12, 0]], "text": "Title", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [182.03500366210938, 511.496826171875, 198.71250915527344, 519.8714599609375], "spans": [[12, 1]], "text": "5071", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [219.21099853515625, 511.496826171875, 233.69174194335938, 519.8714599609375], "spans": [[12, 2]], "text": "0.47", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [250.01956176757812, 511.496826171875, 264.50030517578125, 519.8714599609375], "spans": [[12, 3]], "text": "0.30", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [280.828125, 511.496826171875, 295.3088684082031, 519.8714599609375], "spans": [[12, 4]], "text": "0.50", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [305.27301025390625, 511.496826171875, 324.9811706542969, 519.8714599609375], "spans": [[12, 5]], "text": "60-72", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [334.9428405761719, 511.496826171875, 354.6510009765625, 519.8714599609375], "spans": [[12, 6]], "text": "24-63", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [364.6126708984375, 511.496826171875, 384.3208312988281, 519.8714599609375], "spans": [[12, 7]], "text": "50-63", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [394.2825012207031, 511.496826171875, 418.1600341796875, 519.8714599609375], "spans": [[12, 8]], "text": "94-100", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [428.1217041015625, 511.496826171875, 447.8298645019531, 519.8714599609375], "spans": [[12, 9]], "text": "82-96", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [457.8005065917969, 511.496826171875, 477.5086669921875, 519.8714599609375], "spans": [[12, 10]], "text": "68-79", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [487.4703369140625, 511.496826171875, 507.1784973144531, 519.8714599609375], "spans": [[12, 11]], "text": "24-56", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 12, "row-header": false, "row-span": [12, 13]}], [{"bbox": [104.82499694824219, 500.1388244628906, 123.43028259277344, 508.5134582519531], "spans": [[13, 0]], "text": "Total", "type": "row_header", "col": 0, "col-header": false, 
"col-span": [0, 1], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [169.52699279785156, 500.1388244628906, 198.71263122558594, 508.5134582519531], "spans": [[13, 1]], "text": "1107470", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [208.6750030517578, 500.1388244628906, 233.69125366210938, 508.5134582519531], "spans": [[13, 2]], "text": "941123", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [243.65292358398438, 500.1388244628906, 264.49981689453125, 508.5134582519531], "spans": [[13, 3]], "text": "99816", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [274.46148681640625, 500.1388244628906, 295.3083801269531, 508.5134582519531], "spans": [[13, 4]], "text": "66531", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [305.27301025390625, 500.1388244628906, 324.9811706542969, 508.5134582519531], "spans": [[13, 5]], "text": "82-83", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [334.9428405761719, 500.1388244628906, 354.6510009765625, 508.5134582519531], "spans": [[13, 6]], "text": "71-74", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [364.6126708984375, 500.1388244628906, 384.3208312988281, 508.5134582519531], "spans": [[13, 7]], "text": "79-81", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [398.4518737792969, 500.1388244628906, 418.1600341796875, 508.5134582519531], "spans": [[13, 8]], "text": "89-94", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [428.1217041015625, 500.1388244628906, 447.8298645019531, 508.5134582519531], "spans": [[13, 9]], "text": "86-91", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [457.8005065917969, 500.1388244628906, 477.5086669921875, 508.5134582519531], "spans": [[13, 10]], "text": "71-76", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [487.4703369140625, 500.1388244628906, 507.1784973144531, 508.5134582519531], "spans": [[13, 11]], "text": "68-85", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 13, "row-header": false, "row-span": [13, 14]}]], "model": null}, {"bounding-box": null, "prov": [{"bbox": [61.93328094482422, 440.30438232421875, 285.75616455078125, 596.587158203125], "page": 6, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 2: Prediction performance (mAP@0.5-0.95) of object detection networks on DocLayNet test set. The MRCNN (Mask R-CNN) and FRCNN (Faster R-CNN) models with ResNet-50 or ResNet-101 backbone were trained based on the network architectures from the detectron2 model zoo (Mask R-CNN R50, R101-FPN 3x, Faster R-CNN R101-FPN 3x), with default configurations. The YOLO implementation utilized was YOLOv5x6 [13]. 
All models were initialised using pre-trained weights from the COCO 2017 dataset.", "type": "table", "#-cols": 6, "#-rows": 14, "data": [[{"bbox": null, "spans": [[0, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [132.36500549316406, 585.65185546875, 157.99098205566406, 594.0264892578125], "spans": [[0, 1], [1, 1]], "text": "human", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 0, "row-header": false, "row-span": [0, 2]}, {"bbox": [173.5050048828125, 585.65185546875, 204.618408203125, 594.0264892578125], "spans": [[0, 2], [0, 3]], "text": "MRCNN", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 4], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [173.5050048828125, 585.65185546875, 204.618408203125, 594.0264892578125], "spans": [[0, 2], [0, 3]], "text": "MRCNN", "type": "col_header", "col": 3, "col-header": false, "col-span": [2, 4], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [220.13027954101562, 585.65185546875, 248.069580078125, 594.0264892578125], "spans": [[0, 4]], "text": "FRCNN", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [258.03125, 585.65185546875, 280.1782531738281, 594.0264892578125], "spans": [[0, 5]], "text": "YOLO", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 6], "row": 0, "row-header": false, "row-span": [0, 1]}], [{"bbox": null, "spans": [[1, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [132.36500549316406, 585.65185546875, 157.99098205566406, 594.0264892578125], "spans": [[0, 1], [1, 1]], "text": "human", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [0, 2]}, {"bbox": [168.39300537109375, 574.6928100585938, 181.9950408935547, 583.0674438476562], "spans": [[1, 2]], "text": "R50", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [192.39605712890625, 574.6928100585938, 210.16746520996094, 583.0674438476562], "spans": [[1, 3]], "text": "R101", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [225.2130889892578, 574.6928100585938, 242.9844970703125, 583.0674438476562], "spans": [[1, 4]], "text": "R101", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [260.5137939453125, 574.6928100585938, 277.702392578125, 583.0674438476562], "spans": [[1, 5]], "text": "v5x6", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 6], "row": 1, "row-header": false, "row-span": [1, 2]}], [{"bbox": [67.66300201416016, 563.3358154296875, 96.8486328125, 571.71044921875], "spans": [[2, 0]], "text": "Caption", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [135.32400512695312, 563.3358154296875, 155.0321502685547, 571.71044921875], "spans": [[2, 1]], "text": "84-89", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [167.95399475097656, 563.3358154296875, 182.43472290039062, 571.71044921875], "spans": [[2, 2]], 
"text": "68.4", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [194.04620361328125, 563.3358154296875, 208.52694702148438, 571.71044921875], "spans": [[2, 3]], "text": "71.5", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [226.8632354736328, 563.3358154296875, 241.34396362304688, 571.71044921875], "spans": [[2, 4]], "text": "70.1", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [261.8680419921875, 563.3358154296875, 276.3487854003906, 571.71044921875], "spans": [[2, 5]], "text": "77.7", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [67.66300201416016, 552.3768310546875, 100.16619873046875, 560.75146484375], "spans": [[3, 0]], "text": "Footnote", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [135.32400512695312, 552.3768310546875, 155.0321502685547, 560.75146484375], "spans": [[3, 1]], "text": "83-91", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [167.95399475097656, 552.3768310546875, 182.43472290039062, 560.75146484375], "spans": [[3, 2]], "text": "70.9", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [194.04620361328125, 552.3768310546875, 208.52694702148438, 560.75146484375], "spans": [[3, 3]], "text": "71.8", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [226.8632354736328, 552.3768310546875, 241.34396362304688, 560.75146484375], "spans": [[3, 4]], "text": "73.7", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [261.8680419921875, 552.3768310546875, 276.3487854003906, 560.75146484375], "spans": [[3, 5]], "text": "77.2", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": [67.66300201416016, 541.4178466796875, 98.1756591796875, 549.79248046875], "spans": [[4, 0]], "text": "Formula", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [135.32400512695312, 541.4178466796875, 155.0321502685547, 549.79248046875], "spans": [[4, 1]], "text": "83-85", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [167.95399475097656, 541.4178466796875, 182.43472290039062, 549.79248046875], "spans": [[4, 2]], "text": "60.1", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [194.04620361328125, 541.4178466796875, 208.52694702148438, 549.79248046875], "spans": [[4, 3]], "text": "63.4", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [226.8632354736328, 541.4178466796875, 241.34396362304688, 549.79248046875], "spans": [[4, 4]], "text": "63.5", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [261.8680419921875, 
541.4178466796875, 276.3487854003906, 549.79248046875], "spans": [[4, 5]], "text": "66.2", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": [67.66300201416016, 530.4588012695312, 100.54279327392578, 538.8334350585938], "spans": [[5, 0]], "text": "List-item", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [135.32400512695312, 530.4588012695312, 155.0321502685547, 538.8334350585938], "spans": [[5, 1]], "text": "87-88", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [167.95399475097656, 530.4588012695312, 182.43472290039062, 538.8334350585938], "spans": [[5, 2]], "text": "81.2", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [194.04620361328125, 530.4588012695312, 208.52694702148438, 538.8334350585938], "spans": [[5, 3]], "text": "80.8", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [226.8632354736328, 530.4588012695312, 241.34396362304688, 538.8334350585938], "spans": [[5, 4]], "text": "81.0", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [261.8680419921875, 530.4588012695312, 276.3487854003906, 538.8334350585938], "spans": [[5, 5]], "text": "86.2", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [67.66300201416016, 519.4998168945312, 110.19064331054688, 527.8744506835938], "spans": [[6, 0]], "text": "Page-footer", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [135.32400512695312, 519.4998168945312, 155.0321502685547, 527.8744506835938], "spans": [[6, 1]], "text": "93-94", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [167.95399475097656, 519.4998168945312, 182.43472290039062, 527.8744506835938], "spans": [[6, 2]], "text": "61.6", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [194.04620361328125, 519.4998168945312, 208.52694702148438, 527.8744506835938], "spans": [[6, 3]], "text": "59.3", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [226.8632354736328, 519.4998168945312, 241.34396362304688, 527.8744506835938], "spans": [[6, 4]], "text": "58.9", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [261.8680419921875, 519.4998168945312, 276.3487854003906, 527.8744506835938], "spans": [[6, 5]], "text": "61.1", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 6, "row-header": false, "row-span": [6, 7]}], [{"bbox": [67.66300201416016, 508.54083251953125, 112.94332122802734, 516.9154663085938], "spans": [[7, 0]], "text": "Page-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [135.32400512695312, 508.54083251953125, 155.0321502685547, 516.9154663085938], "spans": [[7, 1]], "text": "85-89", "type": "body", "col": 1, 
"col-header": false, "col-span": [1, 2], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [167.95399475097656, 508.54083251953125, 182.43472290039062, 516.9154663085938], "spans": [[7, 2]], "text": "71.9", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [194.04620361328125, 508.54083251953125, 208.52694702148438, 516.9154663085938], "spans": [[7, 3]], "text": "70.0", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [226.8632354736328, 508.54083251953125, 241.34396362304688, 516.9154663085938], "spans": [[7, 4]], "text": "72.0", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [261.8680419921875, 508.54083251953125, 276.3487854003906, 516.9154663085938], "spans": [[7, 5]], "text": "67.9", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 7, "row-header": false, "row-span": [7, 8]}], [{"bbox": [67.66300201416016, 497.5818176269531, 93.64762878417969, 505.9564514160156], "spans": [[8, 0]], "text": "Picture", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [135.32400512695312, 497.5818176269531, 155.0321502685547, 505.9564514160156], "spans": [[8, 1]], "text": "69-71", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [167.95399475097656, 497.5818176269531, 182.43472290039062, 505.9564514160156], "spans": [[8, 2]], "text": "71.7", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [194.04620361328125, 497.5818176269531, 208.52694702148438, 505.9564514160156], "spans": [[8, 3]], "text": "72.7", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [226.8632354736328, 497.5818176269531, 241.34396362304688, 505.9564514160156], "spans": [[8, 4]], "text": "72.0", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [261.8680419921875, 497.5818176269531, 276.3487854003906, 505.9564514160156], "spans": [[8, 5]], "text": "77.1", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 8, "row-header": false, "row-span": [8, 9]}], [{"bbox": [67.66300201416016, 486.6228332519531, 122.40287780761719, 494.9974670410156], "spans": [[9, 0]], "text": "Section-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [135.32400512695312, 486.6228332519531, 155.0321502685547, 494.9974670410156], "spans": [[9, 1]], "text": "83-84", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [167.95399475097656, 486.6228332519531, 182.43472290039062, 494.9974670410156], "spans": [[9, 2]], "text": "67.6", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [194.04620361328125, 486.6228332519531, 208.52694702148438, 494.9974670410156], "spans": [[9, 3]], "text": "69.3", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [226.8632354736328, 
486.6228332519531, 241.34396362304688, 494.9974670410156], "spans": [[9, 4]], "text": "68.4", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [261.8680419921875, 486.6228332519531, 276.3487854003906, 494.9974670410156], "spans": [[9, 5]], "text": "74.6", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 9, "row-header": false, "row-span": [9, 10]}], [{"bbox": [67.66300201416016, 475.663818359375, 87.46977996826172, 484.0384521484375], "spans": [[10, 0]], "text": "Table", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [135.32400512695312, 475.663818359375, 155.0321502685547, 484.0384521484375], "spans": [[10, 1]], "text": "77-81", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [167.95399475097656, 475.663818359375, 182.43472290039062, 484.0384521484375], "spans": [[10, 2]], "text": "82.2", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [194.04620361328125, 475.663818359375, 208.52694702148438, 484.0384521484375], "spans": [[10, 3]], "text": "82.9", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [226.8632354736328, 475.663818359375, 241.34396362304688, 484.0384521484375], "spans": [[10, 4]], "text": "82.2", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [261.8680419921875, 475.663818359375, 276.3487854003906, 484.0384521484375], "spans": [[10, 5]], "text": "86.3", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 10, "row-header": false, "row-span": [10, 11]}], [{"bbox": [67.66300201416016, 464.7058410644531, 83.62319946289062, 473.0804748535156], "spans": [[11, 0]], "text": "Text", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [135.32400512695312, 464.7058410644531, 155.0321502685547, 473.0804748535156], "spans": [[11, 1]], "text": "84-86", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [167.95399475097656, 464.7058410644531, 182.43472290039062, 473.0804748535156], "spans": [[11, 2]], "text": "84.6", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [194.04620361328125, 464.7058410644531, 208.52694702148438, 473.0804748535156], "spans": [[11, 3]], "text": "85.8", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [226.8632354736328, 464.7058410644531, 241.34396362304688, 473.0804748535156], "spans": [[11, 4]], "text": "85.4", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [261.8680419921875, 464.7058410644531, 276.3487854003906, 473.0804748535156], "spans": [[11, 5]], "text": "88.1", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 11, "row-header": false, "row-span": [11, 12]}], [{"bbox": [67.66300201416016, 453.746826171875, 84.65432739257812, 462.1214599609375], "spans": [[12, 0]], "text": "Title", "type": 
"row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [135.32400512695312, 453.746826171875, 155.0321502685547, 462.1214599609375], "spans": [[12, 1]], "text": "60-72", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [167.95399475097656, 453.746826171875, 182.43472290039062, 462.1214599609375], "spans": [[12, 2]], "text": "76.7", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [194.04620361328125, 453.746826171875, 208.52694702148438, 462.1214599609375], "spans": [[12, 3]], "text": "80.4", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [226.8632354736328, 453.746826171875, 241.34396362304688, 462.1214599609375], "spans": [[12, 4]], "text": "79.9", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [261.8680419921875, 453.746826171875, 276.3487854003906, 462.1214599609375], "spans": [[12, 5]], "text": "82.7", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 12, "row-header": false, "row-span": [12, 13]}], [{"bbox": [67.66300201416016, 442.3888244628906, 78.62890625, 450.7634582519531], "spans": [[13, 0]], "text": "All", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [135.32400512695312, 442.3888244628906, 155.0321502685547, 450.7634582519531], "spans": [[13, 1]], "text": "82-83", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [167.95399475097656, 442.3888244628906, 182.43472290039062, 450.7634582519531], "spans": [[13, 2]], "text": "72.4", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [194.04620361328125, 442.3888244628906, 208.52694702148438, 450.7634582519531], "spans": [[13, 3]], "text": "73.5", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [226.8632354736328, 442.3888244628906, 241.34396362304688, 450.7634582519531], "spans": [[13, 4]], "text": "73.4", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [261.8680419921875, 442.3888244628906, 276.3487854003906, 450.7634582519531], "spans": [[13, 5]], "text": "76.8", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 13, "row-header": false, "row-span": [13, 14]}]], "model": null}, {"bounding-box": null, "prov": [{"bbox": [80.5073471069336, 496.419189453125, 267.3428649902344, 640.9814453125], "page": 7, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 3: Performance of a Mask R-CNN R50 network in mAP@0.5-0.95 scores trained on DocLayNet with different class label sets. 
The reduced label sets were obtained by either down-mapping or dropping labels.", "type": "table", "#-cols": 5, "#-rows": 13, "data": [[{"bbox": [86.37200164794922, 630.5248413085938, 129.4645233154297, 638.8994750976562], "spans": [[0, 0]], "text": "Class-count", "type": "col_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [151.07400512695312, 630.5248413085938, 159.41275024414062, 638.8994750976562], "spans": [[0, 1]], "text": "11", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [179.3181610107422, 630.5248413085938, 183.48753356933594, 638.8994750976562], "spans": [[0, 2]], "text": "6", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [213.33668518066406, 630.5248413085938, 217.5060577392578, 638.8994750976562], "spans": [[0, 3]], "text": "5", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [247.35520935058594, 630.5248413085938, 251.5245819091797, 638.8994750976562], "spans": [[0, 4]], "text": "4", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], "row": 0, "row-header": false, "row-span": [0, 1]}], [{"bbox": [86.37200164794922, 619.1678466796875, 115.55763244628906, 627.54248046875], "spans": [[1, 0]], "text": "Caption", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [151.07400512695312, 619.1678466796875, 159.41275024414062, 627.54248046875], "spans": [[1, 1]], "text": "68", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [173.42723083496094, 619.1678466796875, 189.38742065429688, 627.54248046875], "spans": [[1, 2]], "text": "Text", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [207.4457550048828, 619.1678466796875, 223.40594482421875, 627.54248046875], "spans": [[1, 3]], "text": "Text", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [241.4642791748047, 619.1678466796875, 257.4244689941406, 627.54248046875], "spans": [[1, 4]], "text": "Text", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 1, "row-header": false, "row-span": [1, 2]}], [{"bbox": [86.37200164794922, 608.2088012695312, 118.87519836425781, 616.5834350585938], "spans": [[2, 0]], "text": "Footnote", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [151.07400512695312, 608.2088012695312, 159.41275024414062, 616.5834350585938], "spans": [[2, 1]], "text": "71", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [173.42723083496094, 608.2088012695312, 189.38742065429688, 616.5834350585938], "spans": [[2, 2]], "text": "Text", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [207.4457550048828, 608.2088012695312, 223.40594482421875, 616.5834350585938], "spans": [[2, 3]], "text": "Text", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": 
[2, 3]}, {"bbox": [241.4642791748047, 608.2088012695312, 257.4244689941406, 616.5834350585938], "spans": [[2, 4]], "text": "Text", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [86.37200164794922, 597.2498168945312, 116.88465881347656, 605.6244506835938], "spans": [[3, 0]], "text": "Formula", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [151.07400512695312, 597.2498168945312, 159.41275024414062, 605.6244506835938], "spans": [[3, 1]], "text": "60", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [173.42723083496094, 597.2498168945312, 189.38742065429688, 605.6244506835938], "spans": [[3, 2]], "text": "Text", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [207.4457550048828, 597.2498168945312, 223.40594482421875, 605.6244506835938], "spans": [[3, 3]], "text": "Text", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [241.4642791748047, 597.2498168945312, 257.4244689941406, 605.6244506835938], "spans": [[3, 4]], "text": "Text", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": [86.37200164794922, 586.2908325195312, 119.25179290771484, 594.6654663085938], "spans": [[4, 0]], "text": "List-item", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [151.07400512695312, 586.2908325195312, 159.41275024414062, 594.6654663085938], "spans": [[4, 1]], "text": "81", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [173.42723083496094, 586.2908325195312, 189.38742065429688, 594.6654663085938], "spans": [[4, 2]], "text": "Text", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [211.2564697265625, 586.2908325195312, 219.59521484375, 594.6654663085938], "spans": [[4, 3]], "text": "82", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [241.46426391601562, 586.2908325195312, 257.4244689941406, 594.6654663085938], "spans": [[4, 4]], "text": "Text", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": [86.37200164794922, 575.3318481445312, 128.89964294433594, 583.7064819335938], "spans": [[5, 0]], "text": "Page-footer", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [151.07400512695312, 575.3318481445312, 159.41275024414062, 583.7064819335938], "spans": [[5, 1]], "text": "62", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [177.23794555664062, 575.3318481445312, 185.57669067382812, 583.7064819335938], "spans": [[5, 2]], "text": "62", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [213.9105224609375, 575.3318481445312, 216.941162109375, 583.7064819335938], "spans": [[5, 3]], "text": "-", "type": 
"body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [247.92904663085938, 575.3318481445312, 250.95968627929688, 583.7064819335938], "spans": [[5, 4]], "text": "-", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [86.37200164794922, 564.372802734375, 131.65231323242188, 572.7474365234375], "spans": [[6, 0]], "text": "Page-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [151.07400512695312, 564.372802734375, 159.41275024414062, 572.7474365234375], "spans": [[6, 1]], "text": "72", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [177.23794555664062, 564.372802734375, 185.57669067382812, 572.7474365234375], "spans": [[6, 2]], "text": "68", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [213.9105224609375, 564.372802734375, 216.941162109375, 572.7474365234375], "spans": [[6, 3]], "text": "-", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [247.92904663085938, 564.372802734375, 250.95968627929688, 572.7474365234375], "spans": [[6, 4]], "text": "-", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 6, "row-header": false, "row-span": [6, 7]}], [{"bbox": [86.37200164794922, 553.413818359375, 112.35662841796875, 561.7884521484375], "spans": [[7, 0]], "text": "Picture", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [151.07400512695312, 553.413818359375, 159.41275024414062, 561.7884521484375], "spans": [[7, 1]], "text": "72", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [177.23794555664062, 553.413818359375, 185.57669067382812, 561.7884521484375], "spans": [[7, 2]], "text": "72", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [211.25645446777344, 553.413818359375, 219.59519958496094, 561.7884521484375], "spans": [[7, 3]], "text": "72", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [245.27496337890625, 553.413818359375, 253.61370849609375, 561.7884521484375], "spans": [[7, 4]], "text": "72", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 7, "row-header": false, "row-span": [7, 8]}], [{"bbox": [86.37200164794922, 542.455810546875, 141.11187744140625, 550.8304443359375], "spans": [[8, 0]], "text": "Section-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [151.07400512695312, 542.455810546875, 159.41275024414062, 550.8304443359375], "spans": [[8, 1]], "text": "68", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [177.23794555664062, 542.455810546875, 185.57669067382812, 550.8304443359375], "spans": [[8, 2]], "text": "67", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [211.25645446777344, 
542.455810546875, 219.59519958496094, 550.8304443359375], "spans": [[8, 3]], "text": "69", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [245.27496337890625, 542.455810546875, 253.61370849609375, 550.8304443359375], "spans": [[8, 4]], "text": "68", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 8, "row-header": false, "row-span": [8, 9]}], [{"bbox": [86.37200164794922, 531.496826171875, 106.17877960205078, 539.8714599609375], "spans": [[9, 0]], "text": "Table", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [151.07400512695312, 531.496826171875, 159.41275024414062, 539.8714599609375], "spans": [[9, 1]], "text": "82", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [177.23794555664062, 531.496826171875, 185.57669067382812, 539.8714599609375], "spans": [[9, 2]], "text": "83", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [211.25645446777344, 531.496826171875, 219.59519958496094, 539.8714599609375], "spans": [[9, 3]], "text": "82", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [245.27496337890625, 531.496826171875, 253.61370849609375, 539.8714599609375], "spans": [[9, 4]], "text": "82", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 9, "row-header": false, "row-span": [9, 10]}], [{"bbox": [86.37200164794922, 520.537841796875, 102.33219909667969, 528.9124755859375], "spans": [[10, 0]], "text": "Text", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [151.07400512695312, 520.537841796875, 159.41275024414062, 528.9124755859375], "spans": [[10, 1]], "text": "85", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [177.23794555664062, 520.537841796875, 185.57669067382812, 528.9124755859375], "spans": [[10, 2]], "text": "84", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [211.25645446777344, 520.537841796875, 219.59519958496094, 528.9124755859375], "spans": [[10, 3]], "text": "84", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [245.27496337890625, 520.537841796875, 253.61370849609375, 528.9124755859375], "spans": [[10, 4]], "text": "84", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 10, "row-header": false, "row-span": [10, 11]}], [{"bbox": [86.37200164794922, 509.5788269042969, 103.36332702636719, 517.9534301757812], "spans": [[11, 0]], "text": "Title", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [151.07400512695312, 509.5788269042969, 159.41275024414062, 517.9534301757812], "spans": [[11, 1]], "text": "77", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [169.37442016601562, 509.5788269042969, 193.4312744140625, 517.9534301757812], "spans": [[11, 2]], "text": "Sec.-h.", "type": "body", "col": 2, 
"col-header": false, "col-span": [2, 3], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [203.3929443359375, 509.5788269042969, 227.44979858398438, 517.9534301757812], "spans": [[11, 3]], "text": "Sec.-h.", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [237.41146850585938, 509.5788269042969, 261.46832275390625, 517.9534301757812], "spans": [[11, 4]], "text": "Sec.-h.", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 11, "row-header": false, "row-span": [11, 12]}], [{"bbox": [86.37200164794922, 498.2208251953125, 113.3160171508789, 506.595458984375], "spans": [[12, 0]], "text": "Overall", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [151.07400512695312, 498.2208251953125, 159.41275024414062, 506.595458984375], "spans": [[12, 1]], "text": "72", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [177.23794555664062, 498.2208251953125, 185.57669067382812, 506.595458984375], "spans": [[12, 2]], "text": "73", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [211.25645446777344, 498.2208251953125, 219.59519958496094, 506.595458984375], "spans": [[12, 3]], "text": "78", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [245.27496337890625, 498.2208251953125, 253.61370849609375, 506.595458984375], "spans": [[12, 4]], "text": "77", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 12, "row-header": false, "row-span": [12, 13]}]], "model": null}, {"bounding-box": null, "prov": [{"bbox": [353.065185546875, 485.2873840332031, 523.3069458007812, 641.25341796875], "page": 7, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 4: Performance of a Mask R-CNN R50 network with document-wise and page-wise split for different label sets. 
Naive page-wise split will result in GLYPH 10% point improvement.", "type": "table", "#-cols": 5, "#-rows": 14, "data": [[{"bbox": [358.6390075683594, 630.5248413085938, 401.7315368652344, 638.8994750976562], "spans": [[0, 0]], "text": "Class-count", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [440.2250061035156, 630.5248413085938, 448.5637512207031, 638.8994750976562], "spans": [[0, 1], [0, 2]], "text": "11", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 3], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [440.2250061035156, 630.5248413085938, 448.5637512207031, 638.8994750976562], "spans": [[0, 1], [0, 2]], "text": "11", "type": "col_header", "col": 2, "col-header": false, "col-span": [1, 3], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [494.3800048828125, 630.5248413085938, 498.54937744140625, 638.8994750976562], "spans": [[0, 3], [0, 4]], "text": "5", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [494.3800048828125, 630.5248413085938, 498.54937744140625, 638.8994750976562], "spans": [[0, 3], [0, 4]], "text": "5", "type": "col_header", "col": 4, "col-header": false, "col-span": [3, 5], "row": 0, "row-header": false, "row-span": [0, 1]}], [{"bbox": [358.6390075683594, 619.5658569335938, 375.27166748046875, 627.9404907226562], "spans": [[1, 0]], "text": "Split", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [423.34100341796875, 619.5658569335938, 438.0458984375, 627.9404907226562], "spans": [[1, 1]], "text": "Doc", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [448.007568359375, 619.5658569335938, 465.44720458984375, 627.9404907226562], "spans": [[1, 2]], "text": "Page", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [475.4110107421875, 619.5658569335938, 490.11590576171875, 627.9404907226562], "spans": [[1, 3]], "text": "Doc", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [500.07757568359375, 619.5658569335938, 517.5172119140625, 627.9404907226562], "spans": [[1, 4]], "text": "Page", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], "row": 1, "row-header": false, "row-span": [1, 2]}], [{"bbox": [358.6390075683594, 608.2088012695312, 387.82464599609375, 616.5834350585938], "spans": [[2, 0]], "text": "Caption", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [426.52398681640625, 608.2088012695312, 434.86273193359375, 616.5834350585938], "spans": [[2, 1]], "text": "68", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [452.5624084472656, 608.2088012695312, 460.9011535644531, 616.5834350585938], "spans": [[2, 2]], "text": "83", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": null, "spans": [[2, 3]], "text": "", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": null, "spans": [[2, 4]], "text": 
"", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [358.6390075683594, 597.2498168945312, 391.1422119140625, 605.6244506835938], "spans": [[3, 0]], "text": "Footnote", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [426.52398681640625, 597.2498168945312, 434.86273193359375, 605.6244506835938], "spans": [[3, 1]], "text": "71", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [452.5624084472656, 597.2498168945312, 460.9011535644531, 605.6244506835938], "spans": [[3, 2]], "text": "84", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": null, "spans": [[3, 3]], "text": "", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": null, "spans": [[3, 4]], "text": "", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": [358.6390075683594, 586.2908325195312, 389.15167236328125, 594.6654663085938], "spans": [[4, 0]], "text": "Formula", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [426.52398681640625, 586.2908325195312, 434.86273193359375, 594.6654663085938], "spans": [[4, 1]], "text": "60", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [452.5624084472656, 586.2908325195312, 460.9011535644531, 594.6654663085938], "spans": [[4, 2]], "text": "66", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": null, "spans": [[4, 3]], "text": "", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": null, "spans": [[4, 4]], "text": "", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": [358.6390075683594, 575.3318481445312, 391.518798828125, 583.7064819335938], "spans": [[5, 0]], "text": "List-item", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [426.52398681640625, 575.3318481445312, 434.86273193359375, 583.7064819335938], "spans": [[5, 1]], "text": "81", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [452.5624084472656, 575.3318481445312, 460.9011535644531, 583.7064819335938], "spans": [[5, 2]], "text": "88", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [478.593994140625, 575.3318481445312, 486.9327392578125, 583.7064819335938], "spans": [[5, 3]], "text": "82", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [504.6324157714844, 575.3318481445312, 512.97119140625, 583.7064819335938], "spans": [[5, 4]], "text": "88", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [358.6390075683594, 564.372802734375, 401.1666564941406, 572.7474365234375], "spans": 
[[6, 0]], "text": "Page-footer", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [426.52398681640625, 564.372802734375, 434.86273193359375, 572.7474365234375], "spans": [[6, 1]], "text": "62", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [452.5624084472656, 564.372802734375, 460.9011535644531, 572.7474365234375], "spans": [[6, 2]], "text": "89", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": null, "spans": [[6, 3]], "text": "", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": null, "spans": [[6, 4]], "text": "", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 6, "row-header": false, "row-span": [6, 7]}], [{"bbox": [358.6390075683594, 553.413818359375, 403.9193115234375, 561.7884521484375], "spans": [[7, 0]], "text": "Page-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [426.52398681640625, 553.413818359375, 434.86273193359375, 561.7884521484375], "spans": [[7, 1]], "text": "72", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [452.5624084472656, 553.413818359375, 460.9011535644531, 561.7884521484375], "spans": [[7, 2]], "text": "90", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": null, "spans": [[7, 3]], "text": "", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": null, "spans": [[7, 4]], "text": "", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 7, "row-header": false, "row-span": [7, 8]}], [{"bbox": [358.6390075683594, 542.455810546875, 384.6236572265625, 550.8304443359375], "spans": [[8, 0]], "text": "Picture", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [426.52398681640625, 542.455810546875, 434.86273193359375, 550.8304443359375], "spans": [[8, 1]], "text": "72", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [452.5624084472656, 542.455810546875, 460.9011535644531, 550.8304443359375], "spans": [[8, 2]], "text": "82", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [478.593994140625, 542.455810546875, 486.9327392578125, 550.8304443359375], "spans": [[8, 3]], "text": "72", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [504.6324157714844, 542.455810546875, 512.97119140625, 550.8304443359375], "spans": [[8, 4]], "text": "82", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 8, "row-header": false, "row-span": [8, 9]}], [{"bbox": [358.6390075683594, 531.496826171875, 413.37890625, 539.8714599609375], "spans": [[9, 0]], "text": "Section-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [426.52398681640625, 531.496826171875, 434.86273193359375, 
539.8714599609375], "spans": [[9, 1]], "text": "68", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [452.5624084472656, 531.496826171875, 460.9011535644531, 539.8714599609375], "spans": [[9, 2]], "text": "83", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [478.593994140625, 531.496826171875, 486.9327392578125, 539.8714599609375], "spans": [[9, 3]], "text": "69", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [504.6324157714844, 531.496826171875, 512.97119140625, 539.8714599609375], "spans": [[9, 4]], "text": "83", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 9, "row-header": false, "row-span": [9, 10]}], [{"bbox": [358.6390075683594, 520.537841796875, 378.4457702636719, 528.9124755859375], "spans": [[10, 0]], "text": "Table", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [426.52398681640625, 520.537841796875, 434.86273193359375, 528.9124755859375], "spans": [[10, 1]], "text": "82", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [452.5624084472656, 520.537841796875, 460.9011535644531, 528.9124755859375], "spans": [[10, 2]], "text": "89", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [478.593994140625, 520.537841796875, 486.9327392578125, 528.9124755859375], "spans": [[10, 3]], "text": "82", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [504.6324157714844, 520.537841796875, 512.97119140625, 528.9124755859375], "spans": [[10, 4]], "text": "90", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 10, "row-header": false, "row-span": [10, 11]}], [{"bbox": [358.6390075683594, 509.5788269042969, 374.5992126464844, 517.9534301757812], "spans": [[11, 0]], "text": "Text", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [426.52398681640625, 509.5788269042969, 434.86273193359375, 517.9534301757812], "spans": [[11, 1]], "text": "85", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [452.5624084472656, 509.5788269042969, 460.9011535644531, 517.9534301757812], "spans": [[11, 2]], "text": "91", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [478.593994140625, 509.5788269042969, 486.9327392578125, 517.9534301757812], "spans": [[11, 3]], "text": "84", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [504.6324157714844, 509.5788269042969, 512.97119140625, 517.9534301757812], "spans": [[11, 4]], "text": "90", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 11, "row-header": false, "row-span": [11, 12]}], [{"bbox": [358.6390075683594, 498.6198425292969, 375.6303405761719, 506.9944763183594], "spans": [[12, 0]], "text": "Title", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 12, "row-header": 
false, "row-span": [12, 13]}, {"bbox": [426.52398681640625, 498.6198425292969, 434.86273193359375, 506.9944763183594], "spans": [[12, 1]], "text": "77", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [452.5624084472656, 498.6198425292969, 460.9011535644531, 506.9944763183594], "spans": [[12, 2]], "text": "81", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": null, "spans": [[12, 3]], "text": "", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": null, "spans": [[12, 4]], "text": "", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 12, "row-header": false, "row-span": [12, 13]}], [{"bbox": [358.6390075683594, 487.2628173828125, 369.60491943359375, 495.637451171875], "spans": [[13, 0]], "text": "All", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [426.52398681640625, 487.2628173828125, 434.86273193359375, 495.637451171875], "spans": [[13, 1]], "text": "72", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [452.5624084472656, 487.2628173828125, 460.9011535644531, 495.637451171875], "spans": [[13, 2]], "text": "84", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [478.593994140625, 487.2628173828125, 486.9327392578125, 495.637451171875], "spans": [[13, 3]], "text": "78", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [504.6324157714844, 487.2628173828125, 512.97119140625, 495.637451171875], "spans": [[13, 4]], "text": "87", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 13, "row-header": false, "row-span": [13, 14]}]], "model": null}, {"bounding-box": null, "prov": [{"bbox": [72.87370300292969, 452.12615966796875, 274.87945556640625, 619.3699951171875], "page": 8, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 5: Prediction Performance (mAP@0.5-0.95) of a Mask R-CNN R50 network across the PubLayNet, DocBank & DocLayNet data-sets. 
By evaluating on common label classes of each dataset, we observe that the DocLayNet-trained model has much less pronounced variations in performance across all datasets.", "type": "table", "#-cols": 4, "#-rows": 15, "data": [[{"bbox": null, "spans": [[0, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [217.74099731445312, 608.6068115234375, 256.2606506347656, 616.9814453125], "spans": [[0, 1], [0, 2], [0, 3]], "text": "Testing on", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 4], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [217.74099731445312, 608.6068115234375, 256.2606506347656, 616.9814453125], "spans": [[0, 1], [0, 2], [0, 3]], "text": "Testing on", "type": "col_header", "col": 2, "col-header": false, "col-span": [1, 4], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [217.74099731445312, 608.6068115234375, 256.2606506347656, 616.9814453125], "spans": [[0, 1], [0, 2], [0, 3]], "text": "Testing on", "type": "col_header", "col": 3, "col-header": false, "col-span": [1, 4], "row": 0, "row-header": false, "row-span": [0, 1]}], [{"bbox": [154.62899780273438, 597.6488037109375, 175.4758758544922, 606.0234375], "spans": [[1, 0]], "text": "labels", "type": "col_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [204.69000244140625, 597.6488037109375, 220.5426025390625, 606.0234375], "spans": [[1, 1]], "text": "PLN", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [230.5042724609375, 597.6488037109375, 242.0619659423828, 606.0234375], "spans": [[1, 2]], "text": "DB", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [252.0236358642578, 597.6488037109375, 269.31085205078125, 606.0234375], "spans": [[1, 3]], "text": "DLN", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}], [{"bbox": [154.62899780273438, 586.2908325195312, 177.9237060546875, 594.6654663085938], "spans": [[2, 0]], "text": "Figure", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [208.44700622558594, 586.2908325195312, 216.78575134277344, 594.6654663085938], "spans": [[2, 1]], "text": "96", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [232.11830139160156, 586.2908325195312, 240.45704650878906, 594.6654663085938], "spans": [[2, 2]], "text": "43", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [256.4979248046875, 586.2908325195312, 264.836669921875, 594.6654663085938], "spans": [[2, 3]], "text": "23", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [154.62899780273438, 575.3318481445312, 194.72674560546875, 583.7064819335938], "spans": [[3, 0]], "text": "Sec-header", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [208.44700622558594, 575.3318481445312, 216.78575134277344, 583.7064819335938], "spans": [[3, 1]], "text": "87", "type": "body", "col": 1, "col-header": false, 
"col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [234.77235412597656, 575.3318481445312, 237.80299377441406, 583.7064819335938], "spans": [[3, 2]], "text": "-", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [256.4979248046875, 575.3318481445312, 264.836669921875, 583.7064819335938], "spans": [[3, 3]], "text": "32", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": [154.62899780273438, 564.372802734375, 174.43577575683594, 572.7474365234375], "spans": [[4, 0]], "text": "Table", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [208.44700622558594, 564.372802734375, 216.78575134277344, 572.7474365234375], "spans": [[4, 1]], "text": "95", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [232.11830139160156, 564.372802734375, 240.45704650878906, 572.7474365234375], "spans": [[4, 2]], "text": "24", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [256.4979248046875, 564.372802734375, 264.836669921875, 572.7474365234375], "spans": [[4, 3]], "text": "49", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": [154.62899780273438, 553.413818359375, 170.5891876220703, 561.7884521484375], "spans": [[5, 0]], "text": "Text", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [208.44700622558594, 553.413818359375, 216.78575134277344, 561.7884521484375], "spans": [[5, 1]], "text": "96", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [234.77235412597656, 553.413818359375, 237.80299377441406, 561.7884521484375], "spans": [[5, 2]], "text": "-", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [256.4979248046875, 553.413818359375, 264.836669921875, 561.7884521484375], "spans": [[5, 3]], "text": "42", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [154.62899780273438, 542.455810546875, 171.27960205078125, 550.8304443359375], "spans": [[6, 0]], "text": "total", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [208.44700622558594, 542.455810546875, 216.78575134277344, 550.8304443359375], "spans": [[6, 1]], "text": "93", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [232.11830139160156, 542.455810546875, 240.45704650878906, 550.8304443359375], "spans": [[6, 2]], "text": "34", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [256.4979248046875, 542.455810546875, 264.836669921875, 550.8304443359375], "spans": [[6, 3]], "text": "30", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}], [{"bbox": [154.62899780273438, 531.0978393554688, 177.9237060546875, 539.4724731445312], "spans": [[7, 0]], "text": 
"Figure", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [208.44700622558594, 531.0978393554688, 216.78575134277344, 539.4724731445312], "spans": [[7, 1]], "text": "77", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [232.11830139160156, 531.0978393554688, 240.45704650878906, 539.4724731445312], "spans": [[7, 2]], "text": "71", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [256.4979248046875, 531.0978393554688, 264.836669921875, 539.4724731445312], "spans": [[7, 3]], "text": "31", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 7, "row-header": false, "row-span": [7, 8]}], [{"bbox": [154.62899780273438, 520.1388549804688, 174.43577575683594, 528.5134887695312], "spans": [[8, 0]], "text": "Table", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [208.44700622558594, 520.1388549804688, 216.78575134277344, 528.5134887695312], "spans": [[8, 1]], "text": "19", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [232.11830139160156, 520.1388549804688, 240.45704650878906, 528.5134887695312], "spans": [[8, 2]], "text": "65", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [256.4979248046875, 520.1388549804688, 264.836669921875, 528.5134887695312], "spans": [[8, 3]], "text": "22", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 8, "row-header": false, "row-span": [8, 9]}], [{"bbox": [154.62899780273438, 509.1798400878906, 171.27960205078125, 517.554443359375], "spans": [[9, 0]], "text": "total", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [208.44700622558594, 509.1798400878906, 216.78575134277344, 517.554443359375], "spans": [[9, 1]], "text": "48", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [232.11830139160156, 509.1798400878906, 240.45704650878906, 517.554443359375], "spans": [[9, 2]], "text": "68", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [256.4979248046875, 509.1798400878906, 264.836669921875, 517.554443359375], "spans": [[9, 3]], "text": "27", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 9, "row-header": false, "row-span": [9, 10]}], [{"bbox": [154.62899780273438, 497.82281494140625, 177.9237060546875, 506.19744873046875], "spans": [[10, 0]], "text": "Figure", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [208.44700622558594, 497.82281494140625, 216.78575134277344, 506.19744873046875], "spans": [[10, 1]], "text": "67", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [232.11830139160156, 497.82281494140625, 240.45704650878906, 506.19744873046875], "spans": [[10, 2]], "text": "51", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": 
[256.4979248046875, 497.82281494140625, 264.836669921875, 506.19744873046875], "spans": [[10, 3]], "text": "72", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 10, "row-header": false, "row-span": [10, 11]}], [{"bbox": [154.62899780273438, 486.86383056640625, 194.72674560546875, 495.23846435546875], "spans": [[11, 0]], "text": "Sec-header", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [208.44700622558594, 486.86383056640625, 216.78575134277344, 495.23846435546875], "spans": [[11, 1]], "text": "53", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [234.77235412597656, 486.86383056640625, 237.80299377441406, 495.23846435546875], "spans": [[11, 2]], "text": "-", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [256.4979248046875, 486.86383056640625, 264.836669921875, 495.23846435546875], "spans": [[11, 3]], "text": "68", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 11, "row-header": false, "row-span": [11, 12]}], [{"bbox": [154.62899780273438, 475.9048156738281, 174.43577575683594, 484.2794494628906], "spans": [[12, 0]], "text": "Table", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [208.44700622558594, 475.9048156738281, 216.78575134277344, 484.2794494628906], "spans": [[12, 1]], "text": "87", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [232.11830139160156, 475.9048156738281, 240.45704650878906, 484.2794494628906], "spans": [[12, 2]], "text": "43", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [256.4979248046875, 475.9048156738281, 264.836669921875, 484.2794494628906], "spans": [[12, 3]], "text": "82", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 12, "row-header": false, "row-span": [12, 13]}], [{"bbox": [154.62899780273438, 464.9458312988281, 170.5891876220703, 473.3204650878906], "spans": [[13, 0]], "text": "Text", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [208.44700622558594, 464.9458312988281, 216.78575134277344, 473.3204650878906], "spans": [[13, 1]], "text": "77", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [234.77235412597656, 464.9458312988281, 237.80299377441406, 473.3204650878906], "spans": [[13, 2]], "text": "-", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [256.4979248046875, 464.9458312988281, 264.836669921875, 473.3204650878906], "spans": [[13, 3]], "text": "84", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 13, "row-header": false, "row-span": [13, 14]}], [{"bbox": [154.62899780273438, 453.98681640625, 171.27960205078125, 462.3614501953125], "spans": [[14, 0]], "text": "total", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 14, "row-header": false, "row-span": [14, 15]}, {"bbox": [208.44700622558594, 453.98681640625, 216.78575134277344, 462.3614501953125], "spans": [[14, 1]], "text": 
"59", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 14, "row-header": false, "row-span": [14, 15]}, {"bbox": [232.11830139160156, 453.98681640625, 240.45704650878906, 462.3614501953125], "spans": [[14, 2]], "text": "47", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 14, "row-header": false, "row-span": [14, 15]}, {"bbox": [256.4979248046875, 453.98681640625, 264.836669921875, 462.3614501953125], "spans": [[14, 3]], "text": "78", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 14, "row-header": false, "row-span": [14, 15]}]], "model": null}], "bitmaps": null, "equations": [], "footnotes": [], "page-dimensions": [{"height": 792.0, "page": 1, "width": 612.0}, {"height": 792.0, "page": 2, "width": 612.0}, {"height": 792.0, "page": 3, "width": 612.0}, {"height": 792.0, "page": 4, "width": 612.0}, {"height": 792.0, "page": 5, "width": 612.0}, {"height": 792.0, "page": 6, "width": 612.0}, {"height": 792.0, "page": 7, "width": 612.0}, {"height": 792.0, "page": 8, "width": 612.0}, {"height": 792.0, "page": 9, "width": 612.0}], "page-footers": [], "page-headers": [], "_s3_data": null, "identifiers": null}
\ No newline at end of file
+{"_name": "", "type": "pdf-document", "description": {"title": null, "abstract": null, "authors": null, "affiliations": null, "subjects": null, "keywords": null, "publication_date": null, "languages": null, "license": null, "publishers": null, "url_refs": null, "references": null, "publication": null, "reference_count": null, "citation_count": null, "citation_date": null, "advanced": null, "analytics": null, "logs": [], "collection": null, "acquisition": null}, "file-info": {"filename": "2206.01062.pdf", "filename-prov": null, "document-hash": "5dfbd8c115a15fd3396b68409124cfee29fc8efac7b5c846634ff924e635e0dc", "#-pages": 9, "collection-name": null, "description": null, "page-hashes": [{"hash": "3c76b6d3fd82865e42c51d5cbd7d1a9996dba7902643b919acc581e866b92716", "model": "default", "page": 1}, {"hash": "5ccfaddd314d3712cbabc857c8c0f33d1268341ce37b27089857cbf09f0522d4", "model": "default", "page": 2}, {"hash": "d2dc51ad0a01ee9486ffe248649ee1cd10ce35773de8e4b21abf30d310f4fc26", "model": "default", "page": 3}, {"hash": "310121977375f8f1106412189943bd70f121629b2b4d35394077233dedbfb041", "model": "default", "page": 4}, {"hash": "09fa72b602eb0640669844acabc17ef494802a4a9188aeaaf0e0131c496e6951", "model": "default", "page": 5}, {"hash": "ec3fa60f136f3d9f5fa790ab27f5d1c14e5622573c52377b909b591d0be0ea44", "model": "default", "page": 6}, {"hash": "ec1bc56fe581ce95615b1fab11c3ba8fc89662acf2f53446decd380a155b06dd", "model": "default", "page": 7}, {"hash": "fbd2b06876dddc19ee08e0a9751d978c03e6943b74bedf1d83d6528cd4f8954d", "model": "default", "page": 8}, {"hash": "6cfa4eb4410fa9972da289dbf8d8cc585d317a192e1214c778ddd7768e98f311", "model": "default", "page": 9}]}, "main-text": [{"prov": [{"bbox": [107.30000305175781, 672.3833618164062, 505.1857604980469, 709.082275390625], "page": 1, "span": [0, 71], "__ref_s3_data": null}], "text": "DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [90.94670867919922, 611.2825317382812, 193.91998291015625, 658.7803344726562], "page": 1, "span": [0, 73], "__ref_s3_data": null}], "text": "Birgit Pfitzmann IBM Research Rueschlikon, Switzerland bpf@zurich.ibm.com", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [254.97935485839844, 611.7597045898438, 357.8802490234375, 658.7174072265625], "page": 1, "span": [0, 71], "__ref_s3_data": null}], "text": "Christoph Auer IBM Research Rueschlikon, Switzerland cau@zurich.ibm.com", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [419.0672302246094, 611.7597045898438, 522.0595703125, 658.9878540039062], "page": 1, "span": [0, 70], "__ref_s3_data": null}], "text": "Michele Dolfi IBM Research Rueschlikon, Switzerland dol@zurich.ibm.com", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [171.90907287597656, 553.3746948242188, 275.3072509765625, 600.1580200195312], "page": 1, "span": [0, 72], "__ref_s3_data": null}], "text": "Ahmed S. 
Nassar IBM Research Rueschlikon, Switzerland ahn@zurich.ibm.com", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [336.5292053222656, 553.3746948242188, 439.84405517578125, 599.942626953125], "page": 1, "span": [0, 68], "__ref_s3_data": null}], "text": "Peter Staar IBM Research Rueschlikon, Switzerland taa@zurich.ibm.com", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.33011245727539, 533.9879760742188, 112.2127456665039, 544.47509765625], "page": 1, "span": [0, 8], "__ref_s3_data": null}], "text": "ABSTRACT", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [52.857933044433594, 257.10565185546875, 295.5601806640625, 529.5941162109375], "page": 1, "span": [0, 1595], "__ref_s3_data": null}], "text": "Accurate document layout analysis is a key requirement for highquality PDF document conversion. With the recent availability of public, large ground-truth datasets such as PubLayNet and DocBank, deep-learning models have proven to be very effective at layout detection and segmentation. While these datasets are of adequate size to train such models, they severely lack in layout variability since they are sourced from scientific article repositories such as PubMed and arXiv only. Consequently, the accuracy of the layout segmentation drops significantly when these models are applied on more challenging and diverse layouts. In this paper, we present DocLayNet , a new, publicly available, document-layout annotation dataset in COCO format. It contains 80863 manually annotated pages from diverse data sources to represent a wide variability in layouts. For each PDF page, the layout annotations provide labelled bounding-boxes with a choice of 11 distinct classes. DocLayNet also provides a subset of double- and triple-annotated pages to determine the inter-annotator agreement. In multiple experiments, we provide baseline accuracy scores (in mAP) for a set of popular object detection models. We also demonstrate that these models fall approximately 10% behind the inter-annotator agreement. Furthermore, we provide evidence that DocLayNet is of sufficient size. Lastly, we compare models trained on PubLayNet, DocBank and DocLayNet, showing that layout predictions of the DocLayNettrained models are more robust and thus the preferred choice for general-purpose document-layout analysis.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.36912155151367, 230.69398498535156, 134.81988525390625, 241.21551513671875], "page": 1, "span": [0, 12], "__ref_s3_data": null}], "text": "CCS CONCEPTS", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [53.02470016479492, 194.8704071044922, 297.8529357910156, 226.241455078125], "page": 1, "span": [0, 170], "__ref_s3_data": null}], "text": "\u00b7 Information systems \u2192 Document structure ; \u00b7 Applied computing \u2192 Document analysis ; \u00b7 Computing methodologies \u2192 Machine learning ; Computer vision ; Object detection ;", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.33460235595703, 117.82738494873047, 295.11798095703125, 158.33511352539062], "page": 1, "span": [0, 397], "__ref_s3_data": null}], "text": "Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. 
Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.31700134277344, 85.73310852050781, 197.8627471923828, 116.91976928710938], "page": 1, "span": [0, 168], "__ref_s3_data": null}], "text": "KDD '22, August 14-18, 2022, Washington, DC, USA \u00a9 2022 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-9385-0/22/08. https://doi.org/10.1145/3534678.3539043", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.2291564941406, 232.3291473388672, 559.8057861328125, 252.12974548339844], "page": 1, "span": [0, 84], "__ref_s3_data": null}], "text": "Figure 1: Four examples of complex page layouts across different document categories", "type": "caption", "name": "Caption", "font": null}, {"name": "Picture", "type": "figure", "$ref": "#/figures/0"}, {"prov": [{"bbox": [317.11431884765625, 189.22499084472656, 379.82049560546875, 199.97215270996094], "page": 1, "span": [0, 8], "__ref_s3_data": null}], "text": "KEYWORDS", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [317.2037658691406, 164.9988250732422, 559.2164306640625, 184.67845153808594], "page": 1, "span": [0, 90], "__ref_s3_data": null}], "text": "PDF document conversion, layout segmentation, object-detection, data set, Machine Learning", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.3434753417969, 144.41390991210938, 404.6536560058594, 152.36439514160156], "page": 1, "span": [0, 21], "__ref_s3_data": null}], "text": "ACM Reference Format:", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [317.1117248535156, 84.62297058105469, 559.5494995117188, 142.41151428222656], "page": 1, "span": [0, 374], "__ref_s3_data": null}], "text": "Birgit Pfitzmann, Christoph Auer, Michele Dolfi, Ahmed S. Nassar, and Peter Staar. 2022. DocLayNet: A Large Human-Annotated Dataset for DocumentLayout Analysis. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22), August 14-18, 2022, Washington, DC, USA. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/ 3534678.3539043", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.19501876831055, 722.7692260742188, 558.4357299804688, 732.1524047851562], "page": 2, "span": [0, 130], "__ref_s3_data": null}], "text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA Birgit Pfitzmann, Christoph Auer, Michele Dolfi, Ahmed S. Nassar, and Peter Staar", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [53.79800033569336, 695.8309936523438, 156.52899169921875, 706.4523315429688], "page": 2, "span": [0, 14], "__ref_s3_data": null}], "text": "1 INTRODUCTION", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [52.80397415161133, 562.986572265625, 303.1766357421875, 681.3472290039062], "page": 2, "span": [0, 702], "__ref_s3_data": null}], "text": "Despite the substantial improvements achieved with machine-learning (ML) approaches and deep neural networks in recent years, document conversion remains a challenging problem, as demonstrated by the numerous public competitions held on this topic [1-4]. The challenge originates from the huge variability in PDF documents regarding layout, language and formats (scanned, programmatic or a combination of both). 
Engineering a single ML model that can be applied on all types of documents and provides high-quality layout segmentation remains to this day extremely challenging [5]. To highlight the variability in document layouts, we show a few example documents from the DocLayNet dataset in Figure 1.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [52.89326477050781, 289.0808410644531, 295.5641174316406, 561.2902221679688], "page": 2, "span": [0, 1580], "__ref_s3_data": null}], "text": "A key problem in the process of document conversion is to understand the structure of a single document page, i.e. which segments of text should be grouped together in a unit. To train models for this task, there are currently two large datasets available to the community, PubLayNet [6] and DocBank [7]. They were introduced in 2019 and 2020 respectively and significantly accelerated the implementation of layout detection and segmentation models due to their sizes of 300K and 500K ground-truth pages. These sizes were achieved by leveraging an automation approach. The benefit of automated ground-truth generation is obvious: one can generate large ground-truth datasets at virtually no cost. However, the automation introduces a constraint on the variability in the dataset, because corresponding structured source data must be available. PubLayNet and DocBank were both generated from scientific document repositories (PubMed and arXiv), which provide XML or LaTeX sources. Those scientific documents present a limited variability in their layouts, because they are typeset in uniform templates provided by the publishers. Obviously, documents such as technical manuals, annual company reports, legal text, government tenders, etc. have very different and partially unique layouts. As a consequence, the layout predictions obtained from models trained on PubLayNet or DocBank is very reasonable when applied on scientific documents. However, for more artistic or free-style layouts, we see sub-par prediction quality from these models, which we demonstrate in Section 5.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.12458419799805, 212.36782836914062, 295.56396484375, 287.0208740234375], "page": 2, "span": [0, 462], "__ref_s3_data": null}], "text": "In this paper, we present the DocLayNet dataset. It provides page-by-page layout annotation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique document pages, of which a fraction carry double- or triple-annotations. DocLayNet is similar in spirit to PubLayNet and DocBank and will likewise be made available to the public 1 in order to stimulate the document-layout analysis community. 
It distinguishes itself in the following aspects:", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [64.64593505859375, 176.96405029296875, 295.5616455078125, 208.28524780273438], "page": 2, "span": [0, 149], "__ref_s3_data": null}], "text": "(1) Human Annotation : In contrast to PubLayNet and DocBank, we relied on human annotation instead of automation approaches to generate the data set.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [64.50244140625, 154.92233276367188, 294.3029479980469, 174.95782470703125], "page": 2, "span": [0, 109], "__ref_s3_data": null}], "text": "(2) Large Layout Variability : We include diverse and complex layouts from a large variety of public sources.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [64.18266296386719, 121.99307250976562, 294.6838073730469, 153.57122802734375], "page": 2, "span": [0, 180], "__ref_s3_data": null}], "text": "(3) Detailed Label Set : We define 11 class labels to distinguish layout features in high detail. PubLayNet provides 5 labels; DocBank provides 13, although not a superset of ours.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [64.30329132080078, 99.92230987548828, 295.56439208984375, 120.3491439819336], "page": 2, "span": [0, 115], "__ref_s3_data": null}], "text": "(4) Redundant Annotations : A fraction of the pages in the DocLayNet data set carry more than one human annotation.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [53.60314178466797, 82.76702880859375, 216.05824279785156, 90.63584899902344], "page": 2, "span": [0, 60], "__ref_s3_data": null}], "text": "$^{1}$https://developer.ibm.com/exchanges/data/all/doclaynet", "type": "footnote", "name": "Footnote", "font": null}, {"prov": [{"bbox": [341.2403564453125, 685.3028564453125, 558.5009765625, 705.5034790039062], "page": 2, "span": [0, 86], "__ref_s3_data": null}], "text": "This enables experimentation with annotation uncertainty and quality control analysis.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [328.06146240234375, 630.4351806640625, 559.7210083007812, 683.4995727539062], "page": 2, "span": [0, 280], "__ref_s3_data": null}], "text": "(5) Pre-defined Train-, Test- & Validation-set : Like DocBank, we provide fixed train-, test- & validation-sets to ensure proportional representation of the class-labels. Further, we prevent leakage of unique layouts across sets, which has a large effect on model accuracy scores.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [317.0706787109375, 571.292724609375, 559.1903076171875, 624.9239501953125], "page": 2, "span": [0, 297], "__ref_s3_data": null}], "text": "All aspects outlined above are detailed in Section 3. In Section 4, we will elaborate on how we designed and executed this large-scale human annotation campaign. We will also share key insights and lessons learned that might prove helpful for other parties planning to set up annotation campaigns.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [316.9918518066406, 483.6390686035156, 559.5819702148438, 569.6455078125], "page": 2, "span": [0, 506], "__ref_s3_data": null}], "text": "In Section 5, we will present baseline accuracy numbers for a variety of object detection methods (Faster R-CNN, Mask R-CNN and YOLOv5) trained on DocLayNet. 
We further show how the model performance is impacted by varying the DocLayNet dataset size, reducing the label set and modifying the train/test-split. Last but not least, we compare the performance of models trained on PubLayNet, DocBank and DocLayNet and demonstrate that a model trained on DocLayNet provides overall more robust layout recovery.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.33935546875, 460.4820251464844, 422.0046081542969, 471.2471923828125], "page": 2, "span": [0, 14], "__ref_s3_data": null}], "text": "2 RELATED WORK", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [316.9687805175781, 327.7038269042969, 559.7161254882812, 446.38397216796875], "page": 2, "span": [0, 655], "__ref_s3_data": null}], "text": "While early approaches in document-layout analysis used rulebased algorithms and heuristics [8], the problem is lately addressed with deep learning methods. The most common approach is to leverage object detection models [9-15]. In the last decade, the accuracy and speed of these models has increased dramatically. Furthermore, most state-of-the-art object detection methods can be trained and applied with very little work, thanks to a standardisation effort of the ground-truth data format [16] and common deep-learning frameworks [17]. Reference data sets such as PubLayNet [6] and DocBank provide their data in the commonly accepted COCO format [16].", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.156982421875, 239.59246826171875, 559.1864624023438, 325.6906433105469], "page": 2, "span": [0, 500], "__ref_s3_data": null}], "text": "Lately, new types of ML models for document-layout analysis have emerged in the community [18-21]. These models do not approach the problem of layout analysis purely based on an image representation of the page, as computer vision methods do. Instead, they combine the text tokens and image representation of a page in order to obtain a segmentation. While the reported accuracies appear to be promising, a broadly accepted data format which links geometric and textual features has yet to establish.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.58740234375, 216.37100219726562, 477.8531799316406, 226.6800994873047], "page": 2, "span": [0, 23], "__ref_s3_data": null}], "text": "3 THE DOCLAYNET DATASET", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [317.11236572265625, 116.19312286376953, 559.7131958007812, 202.27523803710938], "page": 2, "span": [0, 522], "__ref_s3_data": null}], "text": "DocLayNet contains 80863 PDF pages. Among these, 7059 carry two instances of human annotations, and 1591 carry three. This amounts to 91104 total annotation instances. The annotations provide layout information in the shape of labeled, rectangular boundingboxes. We define 11 distinct labels for layout features, namely Caption , Footnote , Formula , List-item , Page-footer , Page-header , Picture , Section-header , Table , Text , and Title . Our reasoning for picking this particular label set is detailed in Section 4.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.34619140625, 83.59282684326172, 558.5303344726562, 114.41421508789062], "page": 2, "span": [0, 186], "__ref_s3_data": null}], "text": "In addition to open intellectual property constraints for the source documents, we required that the documents in DocLayNet adhere to a few conditions. 
Firstly, we kept scanned documents", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.4626579284668, 722.95458984375, 347.0511779785156, 732.11474609375], "page": 3, "span": [0, 71], "__ref_s3_data": null}], "text": "DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [365.31488037109375, 723.0569458007812, 558.807861328125, 731.9796142578125], "page": 3, "span": [0, 48], "__ref_s3_data": null}], "text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [53.28777313232422, 536.294677734375, 294.0437316894531, 556.148193359375], "page": 3, "span": [0, 69], "__ref_s3_data": null}], "text": "Figure 2: Distribution of DocLayNet pages across document categories.", "type": "caption", "name": "Caption", "font": null}, {"name": "Picture", "type": "figure", "$ref": "#/figures/1"}, {"prov": [{"bbox": [53.244232177734375, 424.931396484375, 294.5379943847656, 510.7526550292969], "page": 3, "span": [0, 513], "__ref_s3_data": null}], "text": "to a minimum, since they introduce difficulties in annotation (see Section 4). As a second condition, we focussed on medium to large documents ( > 10 pages) with technical content, dense in complex tables, figures, plots and captions. Such documents carry a lot of information value, but are often hard to analyse with high accuracy due to their challenging layouts. Counterexamples of documents not included in the dataset are receipts, invoices, hand-written documents or photographs showing \"text in the wild\".", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.10974884033203, 282.6438293457031, 295.5604553222656, 423.1407775878906], "page": 3, "span": [0, 810], "__ref_s3_data": null}], "text": "The pages in DocLayNet can be grouped into six distinct categories, namely Financial Reports , Manuals , Scientific Articles , Laws & Regulations , Patents and Government Tenders . Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports 2 which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories ( Financial Reports and Manuals ) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [52.8973388671875, 183.77932739257812, 295.5615539550781, 281.3227233886719], "page": 3, "span": [0, 535], "__ref_s3_data": null}], "text": "We did not control the document selection with regard to language. The vast majority of documents contained in DocLayNet (close to 95%) are published in English language. However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). 
While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.209388732910156, 106.8985824584961, 295.56396484375, 182.471923828125], "page": 3, "span": [0, 413], "__ref_s3_data": null}], "text": "To ensure that future benchmarks in the document-layout analysis community can be easily compared, we have split up DocLayNet into pre-defined train-, test- and validation-sets. In this way, we can avoid spurious variations in the evaluation scores due to random splitting in train-, test- and validation-sets. We also ensured that less frequent labels are represented in train and test sets in equal proportions.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.352603912353516, 83.35768127441406, 195.78997802734375, 91.47167205810547], "page": 3, "span": [0, 51], "__ref_s3_data": null}], "text": "$^{2}$e.g. AAPL from https://www.annualreports.com/", "type": "footnote", "name": "Footnote", "font": null}, {"prov": [{"bbox": [317.0691833496094, 630.5088500976562, 559.1918334960938, 705.8527221679688], "page": 3, "span": [0, 435], "__ref_s3_data": null}], "text": "Table 1 shows the overall frequency and distribution of the labels among the different sets. Importantly, we ensure that subsets are only split on full-document boundaries. This avoids that pages of the same document are spread over train, test and validation set, which can give an undesired evaluation advantage to models and lead to overestimation of their prediction accuracy. We will show the impact of this decision in Section 5.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.05938720703125, 520.8086547851562, 558.862060546875, 628.44580078125], "page": 3, "span": [0, 645], "__ref_s3_data": null}], "text": "In order to accommodate the different types of models currently in use by the community, we provide DocLayNet in an augmented COCO format [16]. This entails the standard COCO ground-truth file (in JSON format) with the associated page images (in PNG format, 1025 \u00d7 1025 pixels). Furthermore, custom fields have been added to each COCO record to specify document category, original document filename and page number. In addition, we also provide the original PDF pages, as well as sidecar files containing parsed PDF text and text-cell coordinates (in JSON). All additional files are linked to the primary page images by their matching filenames.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [316.88604736328125, 203.11082458496094, 559.7215576171875, 518.6715087890625], "page": 3, "span": [0, 1854], "__ref_s3_data": null}], "text": "Despite being cost-intense and far less scalable than automation, human annotation has several benefits over automated groundtruth generation. The first and most obvious reason to leverage human annotations is the freedom to annotate any type of document without requiring a programmatic source. For most PDF documents, the original source document is not available. The latter is not a hard constraint with human annotation, but it is for automated methods. A second reason to use human annotations is that the latter usually provide a more natural interpretation of the page layout. The human-interpreted layout can significantly deviate from the programmatic layout used in typesetting. 
For example, \"invisible\" tables might be used solely for aligning text paragraphs on columns. Such typesetting tricks might be interpreted by automated methods incorrectly as an actual table, while the human annotation will interpret it correctly as Text or other styles. The same applies to multi-line text elements, when authors decided to space them as \"invisible\" list elements without bullet symbols. A third reason to gather ground-truth through human annotation is to estimate a \"natural\" upper bound on the segmentation accuracy. As we will show in Section 4, certain documents featuring complex layouts can have different but equally acceptable layout interpretations. This natural upper bound for segmentation accuracy can be found by annotating the same pages multiple times by different people and evaluating the inter-annotator agreement. Such a baseline consistency evaluation is very useful to define expectations for a good target accuracy in trained deep neural network models and avoid overfitting (see Table 1). On the flip side, achieving high annotation consistency proved to be a key challenge in human annotation, as we outline in Section 4.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.66510009765625, 174.8409881591797, 470.2132568359375, 185.15008544921875], "page": 3, "span": [0, 21], "__ref_s3_data": null}], "text": "4 ANNOTATION CAMPAIGN", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [317.0245056152344, 85.38961791992188, 559.7138061523438, 160.93588256835938], "page": 3, "span": [0, 457], "__ref_s3_data": null}], "text": "The annotation campaign was carried out in four phases. In phase one, we identified and prepared the data sources for annotation. In phase two, we determined the class labels and how annotations should be done on the documents in order to obtain maximum consistency. The latter was guided by a detailed requirement analysis and exhaustive experiments. In phase three, we trained the annotation staff and performed exams for quality assurance. In phase four,", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.345272064208984, 723.0101318359375, 558.5491943359375, 732.1525268554688], "page": 4, "span": [0, 130], "__ref_s3_data": null}], "text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA Birgit Pfitzmann, Christoph Auer, Michele Dolfi, Ahmed S. Nassar, and Peter Staar", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [52.74671936035156, 676.2418212890625, 558.5100708007812, 707.6976928710938], "page": 4, "span": [0, 348], "__ref_s3_data": null}], "text": "Table 1: DocLayNet dataset overview. Along with the frequency of each class label, we present the relative occurrence (as % of row \"Total\") in the train, test and validation sets. The inter-annotator agreement is computed as the mAP@0.5-0.95 metric between pairwise annotations from the triple-annotated pages, from which we obtain accuracy ranges.", "type": "caption", "name": "Caption", "font": null}, {"name": "Table", "type": "table", "$ref": "#/tables/0"}, {"prov": [{"bbox": [53.28383255004883, 185.58580017089844, 295.64874267578125, 237.99000549316406], "page": 4, "span": [0, 281], "__ref_s3_data": null}], "text": "Figure 3: Corpus Conversion Service annotation user interface. The PDF page is shown in the background, with overlaid text-cells (in darker shades). 
The annotation boxes can be drawn by dragging a rectangle over each segment with the respective label from the palette on the right.", "type": "caption", "name": "Caption", "font": null}, {"name": "Picture", "type": "figure", "$ref": "#/figures/2"}, {"prov": [{"bbox": [52.954681396484375, 116.45683288574219, 294.3648681640625, 158.3203887939453], "page": 4, "span": [0, 231], "__ref_s3_data": null}], "text": "we distributed the annotation workload and performed continuous quality controls. Phase one and two required a small team of experts only. For phases three and four, a group of 40 dedicated annotators were assembled and supervised.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.368797302246094, 83.57982635498047, 295.5584411621094, 114.14925384521484], "page": 4, "span": [0, 193], "__ref_s3_data": null}], "text": "Phase 1: Data selection and preparation. Our inclusion criteria for documents were described in Section 3. A large effort went into ensuring that all documents are free to use. The data sources", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.2582702636719, 416.48919677734375, 559.1853637695312, 481.0997619628906], "page": 4, "span": [0, 376], "__ref_s3_data": null}], "text": "include publication repositories such as arXiv$^{3}$, government offices, company websites as well as data directory services for financial reports and patents. Scanned documents were excluded wherever possible because they can be rotated or skewed. This would not allow us to perform annotation with rectangular bounding-boxes and therefore complicate the annotation process.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.0777587890625, 284.9187316894531, 559.7130737304688, 415.02398681640625], "page": 4, "span": [0, 746], "__ref_s3_data": null}], "text": "Preparation work included uploading and parsing the sourced PDF documents in the Corpus Conversion Service (CCS) [22], a cloud-native platform which provides a visual annotation interface and allows for dataset inspection and analysis. The annotation interface of CCS is shown in Figure 3. The desired balance of pages between the different document categories was achieved by selective subsampling of pages with certain desired properties. For example, we made sure to include the title page of each document and bias the remaining page selection to those with figures or tables. The latter was achieved by leveraging pre-trained object detection models from PubLayNet, which helped us estimate how many figures and tables a given page contains.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [316.9024963378906, 98.9438247680664, 559.7176513671875, 283.8972473144531], "page": 4, "span": [0, 1159], "__ref_s3_data": null}], "text": "Phase 2: Label selection and guideline. We reviewed the collected documents and identified the most common structural features they exhibit. This was achieved by identifying recurrent layout elements and lead us to the definition of 11 distinct class labels. These 11 class labels are Caption , Footnote , Formula , List-item , Pagefooter , Page-header , Picture , Section-header , Table , Text , and Title . Critical factors that were considered for the choice of these class labels were (1) the overall occurrence of the label, (2) the specificity of the label, (3) recognisability on a single page (i.e. no need for context from previous or next page) and (4) overall coverage of the page. 
Specificity ensures that the choice of label is not ambiguous, while coverage ensures that all meaningful items on a page can be annotated. We refrained from class labels that are very specific to a document category, such as Abstract in the Scientific Articles category. We also avoided class labels that are tightly linked to the semantics of the text. Labels such as Author and Affiliation , as seen in DocBank, are often only distinguishable by discriminating on", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.7030029296875, 82.5821304321289, 369.40142822265625, 90.54422760009766], "page": 4, "span": [0, 24], "__ref_s3_data": null}], "text": "$^{3}$https://arxiv.org/", "type": "footnote", "name": "Footnote", "font": null}, {"prov": [{"bbox": [53.456207275390625, 723.0143432617188, 347.07373046875, 732.0245361328125], "page": 5, "span": [0, 71], "__ref_s3_data": null}], "text": "DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [365.2621765136719, 723.0404663085938, 558.9374389648438, 731.9317626953125], "page": 5, "span": [0, 48], "__ref_s3_data": null}], "text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [53.24338912963867, 684.8170166015625, 294.04541015625, 705.5283813476562], "page": 5, "span": [0, 135], "__ref_s3_data": null}], "text": "the textual content of an element, which goes beyond visual layout recognition, in particular outside the Scientific Articles category.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.124725341796875, 542.8159790039062, 295.5592346191406, 683.8748168945312], "page": 5, "span": [0, 812], "__ref_s3_data": null}], "text": "At first sight, the task of visual document-layout interpretation appears intuitive enough to obtain plausible annotations in most cases. However, during early trial-runs in the core team, we observed many cases in which annotators use different annotation styles, especially for documents with challenging layouts. For example, if a figure is presented with subfigures, one annotator might draw a single figure bounding-box, while another might annotate each subfigure separately. The same applies for lists, where one might annotate all list items in one block or each list item separately. In essence, we observed that challenging layouts would be annotated in different but plausible ways. To illustrate this, we show in Figure 4 multiple examples of plausible but inconsistent annotations on the same pages.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.339271545410156, 455.16583251953125, 295.56005859375, 541.1383666992188], "page": 5, "span": [0, 465], "__ref_s3_data": null}], "text": "Obviously, this inconsistency in annotations is not desirable for datasets which are intended to be used for model training. To minimise these inconsistencies, we created a detailed annotation guideline. While perfect consistency across 40 annotation staff members is clearly not possible to achieve, we saw a huge improvement in annotation consistency after the introduction of our annotation guideline. 
A few selected, non-trivial highlights of the guideline are:", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [64.39098358154297, 402.13092041015625, 294.42474365234375, 444.29510498046875], "page": 5, "span": [0, 202], "__ref_s3_data": null}], "text": "(1) Every list-item is an individual object instance with class label List-item . This definition is different from PubLayNet and DocBank, where all list-items are grouped together into one List object.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [64.31100463867188, 358.39984130859375, 295.563720703125, 400.2758483886719], "page": 5, "span": [0, 208], "__ref_s3_data": null}], "text": "(2) A List-item is a paragraph with hanging indentation. Singleline elements can qualify as List-item if the neighbour elements expose hanging indentation. Bullet or enumeration symbols are not a requirement.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [64.26787567138672, 336.4728698730469, 294.60943603515625, 356.2404479980469], "page": 5, "span": [0, 82], "__ref_s3_data": null}], "text": "(3) For every Caption , there must be exactly one corresponding Picture or Table .", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [64.2632064819336, 314.5648193359375, 294.7487487792969, 334.179443359375], "page": 5, "span": [0, 70], "__ref_s3_data": null}], "text": "(4) Connected sub-pictures are grouped together in one Picture object.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [63.9930305480957, 303.59686279296875, 264.5057067871094, 312.8252868652344], "page": 5, "span": [0, 53], "__ref_s3_data": null}], "text": "(5) Formula numbers are included in a Formula object.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [64.07823181152344, 270.048095703125, 295.0240783691406, 301.5160827636719], "page": 5, "span": [0, 160], "__ref_s3_data": null}], "text": "(6) Emphasised text (e.g. in italic or bold) at the beginning of a paragraph is not considered a Section-header , unless it appears exclusively on its own line.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [52.994422912597656, 217.798828125, 295.5625305175781, 259.6097106933594], "page": 5, "span": [0, 221], "__ref_s3_data": null}], "text": "The complete annotation guideline is over 100 pages long and a detailed description is obviously out of scope for this paper. Nevertheless, it will be made publicly available alongside with DocLayNet for future reference.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.26631546020508, 86.24749755859375, 295.562255859375, 215.95584106445312], "page": 5, "span": [0, 792], "__ref_s3_data": null}], "text": "Phase 3: Training. After a first trial with a small group of people, we realised that providing the annotation guideline and a set of random practice pages did not yield the desired quality level for layout annotation. Therefore we prepared a subset of pages with two different complexity levels, each with a practice and an exam part. 974 pages were reference-annotated by one proficient core team member. Annotation staff were then given the task to annotate the same subsets (blinded from the reference). By comparing the annotations of each staff member with the reference annotations, we could quantify how closely their annotations matched the reference. 
Only after passing two exam levels with high annotation quality, staff were admitted into the production phase. Practice iterations", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [316.9992980957031, 287.86785888671875, 559.8057861328125, 318.7776794433594], "page": 5, "span": [0, 173], "__ref_s3_data": null}], "text": "Figure 4: Examples of plausible annotation alternatives for the same page. Criteria in our annotation guideline can resolve cases A to C, while the case D remains ambiguous.", "type": "caption", "name": "Caption", "font": null}, {"name": "Picture", "type": "figure", "$ref": "#/figures/3"}, {"prov": [{"bbox": [316.8349914550781, 247.1688232421875, 558.204345703125, 266.81207275390625], "page": 5, "span": [0, 123], "__ref_s3_data": null}], "text": "were carried out over a timeframe of 12 weeks, after which 8 of the 40 initially allocated annotators did not pass the bar.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.00592041015625, 82.7375717163086, 559.7149047851562, 245.28392028808594], "page": 5, "span": [0, 987], "__ref_s3_data": null}], "text": "Phase 4: Production annotation. The previously selected 80K pages were annotated with the defined 11 class labels by 32 annotators. This production phase took around three months to complete. All annotations were created online through CCS, which visualises the programmatic PDF text-cells as an overlay on the page. The page annotation are obtained by drawing rectangular bounding-boxes, as shown in Figure 3. With regard to the annotation practices, we implemented a few constraints and capabilities on the tooling level. First, we only allow non-overlapping, vertically oriented, rectangular boxes. For the large majority of documents, this constraint was sufficient and it speeds up the annotation considerably in comparison with arbitrary segmentation shapes. Second, annotator staff were not able to see each other's annotations. This was enforced by design to avoid any bias in the annotation, which could skew the numbers of the inter-annotator agreement (see Table 1). We wanted", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.30706024169922, 722.92333984375, 558.4274291992188, 732.1127319335938], "page": 6, "span": [0, 130], "__ref_s3_data": null}], "text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA Birgit Pfitzmann, Christoph Auer, Michele Dolfi, Ahmed S. Nassar, and Peter Staar", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [52.78031539916992, 608.98291015625, 295.64874267578125, 705.8385620117188], "page": 6, "span": [0, 489], "__ref_s3_data": null}], "text": "Table 2: Prediction performance (mAP@0.5-0.95) of object detection networks on DocLayNet test set. The MRCNN (Mask R-CNN) and FRCNN (Faster R-CNN) models with ResNet-50 or ResNet-101 backbone were trained based on the network architectures from the detectron2 model zoo (Mask R-CNN R50, R101-FPN 3x, Faster R-CNN R101-FPN 3x), with default configurations. The YOLO implementation utilized was YOLOv5x6 [13]. 
All models were initialised using pre-trained weights from the COCO 2017 dataset.", "type": "paragraph", "name": "Text", "font": null}, {"name": "Table", "type": "table", "$ref": "#/tables/1"}, {"prov": [{"bbox": [53.25688552856445, 214.2948760986328, 295.5561218261719, 421.4337158203125], "page": 6, "span": [0, 1252], "__ref_s3_data": null}], "text": "to avoid this at any cost in order to have clear, unbiased baseline numbers for human document-layout annotation. Third, we introduced the feature of snapping boxes around text segments to obtain a pixel-accurate annotation and again reduce time and effort. The CCS annotation tool automatically shrinks every user-drawn box to the minimum bounding-box around the enclosed text-cells for all purely text-based segments, which excludes only Table and Picture . For the latter, we instructed annotation staff to minimise inclusion of surrounding whitespace while including all graphical lines. A downside of snapping boxes to enclosed text cells is that some wrongly parsed PDF pages cannot be annotated correctly and need to be skipped. Fourth, we established a way to flag pages as rejected for cases where no valid annotation according to the label guidelines could be achieved. Example cases for this would be PDF pages that render incorrectly or contain layouts that are impossible to capture with non-overlapping rectangles. Such rejected pages are not contained in the final dataset. With all these measures in place, experienced annotation staff managed to annotate a single page in a typical timeframe of 20s to 60s, depending on its complexity.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.62337875366211, 193.5609893798828, 147.4853515625, 203.87008666992188], "page": 6, "span": [0, 13], "__ref_s3_data": null}], "text": "5 EXPERIMENTS", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [53.076290130615234, 82.4822006225586, 295.4281005859375, 179.65382385253906], "page": 6, "span": [0, 584], "__ref_s3_data": null}], "text": "The primary goal of DocLayNet is to obtain high-quality ML models capable of accurate document-layout analysis on a wide variety of challenging layouts. As discussed in Section 2, object detection models are currently the easiest to use, due to the standardisation of ground-truth data in COCO format [16] and the availability of general frameworks such as detectron2 [17]. Furthermore, baseline numbers in PubLayNet and DocBank were obtained using standard object detection models such as Mask R-CNN and Faster R-CNN. As such, we will relate to these object detection methods in this", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.10931396484375, 449.6510009765625, 559.8057861328125, 513.7953491210938], "page": 6, "span": [0, 329], "__ref_s3_data": null}], "text": "Figure 5: Prediction performance (mAP@0.5-0.95) of a Mask R-CNN network with ResNet50 backbone trained on increasing fractions of the DocLayNet dataset. 
The learning curve flattens around the 80% mark, indicating that increasing the size of the DocLayNet dataset with similar data will not yield significantly better predictions.", "type": "caption", "name": "Caption", "font": null}, {"name": "Picture", "type": "figure", "$ref": "#/figures/4"}, {"prov": [{"bbox": [317.2011413574219, 388.6548156738281, 558.2041625976562, 408.8042297363281], "page": 6, "span": [0, 102], "__ref_s3_data": null}], "text": "paper and leave the detailed evaluation of more recent methods mentioned in Section 2 for future work.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.0830078125, 311.45587158203125, 558.4364013671875, 386.632568359375], "page": 6, "span": [0, 397], "__ref_s3_data": null}], "text": "In this section, we will present several aspects related to the performance of object detection models on DocLayNet. Similarly as in PubLayNet, we will evaluate the quality of their predictions using mean average precision (mAP) with 10 overlaps that range from 0.5 to 0.95 in steps of 0.05 (mAP@0.5-0.95). These scores are computed by leveraging the evaluation code provided by the COCO API [16].", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.1941223144531, 284.5037841796875, 466.8532409667969, 295.42913818359375], "page": 6, "span": [0, 30], "__ref_s3_data": null}], "text": "Baselines for Object Detection", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [317.0144348144531, 85.2998275756836, 558.7822875976562, 280.8944396972656], "page": 6, "span": [0, 1146], "__ref_s3_data": null}], "text": "In Table 2, we present baseline experiments (given in mAP) on Mask R-CNN [12], Faster R-CNN [11], and YOLOv5 [13]. Both training and evaluation were performed on RGB images with dimensions of 1025 \u00d7 1025 pixels. For training, we only used one annotation in case of redundantly annotated pages. As one can observe, the variation in mAP between the models is rather low, but overall between 6 and 10% lower than the mAP computed from the pairwise human annotations on triple-annotated pages. This gives a good indication that the DocLayNet dataset poses a worthwhile challenge for the research community to close the gap between human recognition and ML approaches. It is interesting to see that Mask R-CNN and Faster R-CNN produce very comparable mAP scores, indicating that pixel-based image segmentation derived from bounding-boxes does not help to obtain better predictions. On the other hand, the more recent Yolov5x model does very well and even out-performs humans on selected labels such as Text , Table and Picture . 
This is not entirely surprising, as Text , Table and Picture are abundant and the most visually distinctive in a document.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.35094451904297, 722.9555053710938, 347.0172424316406, 732.038818359375], "page": 7, "span": [0, 71], "__ref_s3_data": null}], "text": "DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [365.1936950683594, 723.0802001953125, 558.7797241210938, 731.8773803710938], "page": 7, "span": [0, 48], "__ref_s3_data": null}], "text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [52.8690299987793, 663.3739624023438, 295.6486511230469, 705.8510131835938], "page": 7, "span": [0, 205], "__ref_s3_data": null}], "text": "Table 3: Performance of a Mask R-CNN R50 network in mAP@0.5-0.95 scores trained on DocLayNet with different class label sets. The reduced label sets were obtained by either down-mapping or dropping labels.", "type": "caption", "name": "Caption", "font": null}, {"name": "Table", "type": "table", "$ref": "#/tables/2"}, {"prov": [{"bbox": [53.446834564208984, 461.592041015625, 131.05624389648438, 472.6955871582031], "page": 7, "span": [0, 14], "__ref_s3_data": null}], "text": "Learning Curve", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [52.78499984741211, 262.38037109375, 295.558349609375, 457.72955322265625], "page": 7, "span": [0, 1157], "__ref_s3_data": null}], "text": "One of the fundamental questions related to any dataset is if it is \"large enough\". To answer this question for DocLayNet, we performed a data ablation study in which we evaluated a Mask R-CNN model trained on increasing fractions of the DocLayNet dataset. As can be seen in Figure 5, the mAP score rises sharply in the beginning and eventually levels out. To estimate the error-bar on the metrics, we ran the training five times on the entire data-set. This resulted in a 1% error-bar, depicted by the shaded area in Figure 5. In the inset of Figure 5, we show the exact same data-points, but with a logarithmic scale on the x-axis. As is expected, the mAP score increases linearly as a function of the data-size in the inset. The curve ultimately flattens out between the 80% and 100% mark, with the 80% mark falling within the error-bars of the 100% mark. This provides a good indication that the model would not improve significantly by yet increasing the data size. Rather, it would probably benefit more from improved data consistency (as discussed in Section 3), data augmentation methods [23], or the addition of more document categories and styles.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.37664794921875, 239.1809844970703, 164.3289794921875, 250.044677734375], "page": 7, "span": [0, 22], "__ref_s3_data": null}], "text": "Impact of Class Labels", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [53.06760787963867, 83.39567565917969, 295.5567932128906, 235.12689208984375], "page": 7, "span": [0, 910], "__ref_s3_data": null}], "text": "The choice and number of labels can have a significant effect on the overall model performance. Since PubLayNet, DocBank and DocLayNet all have different label sets, it is of particular interest to understand and quantify this influence of the label set on the model performance. 
We investigate this by either down-mapping labels into more common ones (e.g. Caption \u2192 Text ) or excluding them from the annotations entirely. Furthermore, it must be stressed that all mappings and exclusions were performed on the data before model training. In Table 3, we present the mAP scores for a Mask R-CNN R50 network on different label sets. Where a label is down-mapped, we show its corresponding label, otherwise it was excluded. We present three different label sets, with 6, 5 and 4 different labels respectively. The set of 5 labels contains the same labels as PubLayNet. However, due to the different definition of", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [316.9989929199219, 663.7767944335938, 559.8068237304688, 705.6134643554688], "page": 7, "span": [0, 189], "__ref_s3_data": null}], "text": "Table 4: Performance of a Mask R-CNN R50 network with document-wise and page-wise split for different label sets. Naive page-wise split will result in ~ 10% point improvement.", "type": "caption", "name": "Caption", "font": null}, {"name": "Table", "type": "table", "$ref": "#/tables/3"}, {"prov": [{"bbox": [317.03326416015625, 375.50982666015625, 559.5849609375, 460.6855163574219], "page": 7, "span": [0, 469], "__ref_s3_data": null}], "text": "lists in PubLayNet (grouped list-items) versus DocLayNet (separate list-items), the label set of size 4 is the closest to PubLayNet, in the assumption that the List is down-mapped to Text in PubLayNet. The results in Table 3 show that the prediction accuracy on the remaining class labels does not change significantly when other classes are merged into them. The overall macro-average improves by around 5%, in particular when Page-footer and Page-header are excluded.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.4661865234375, 351.4896545410156, 549.860595703125, 362.8900451660156], "page": 7, "span": [0, 46], "__ref_s3_data": null}], "text": "Impact of Document Split in Train and Test Set", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [316.9546813964844, 196.5628204345703, 559.7138061523438, 348.10198974609375], "page": 7, "span": [0, 852], "__ref_s3_data": null}], "text": "Many documents in DocLayNet have a unique styling. In order to avoid overfitting on a particular style, we have split the train-, test- and validation-sets of DocLayNet on document boundaries, i.e. every document contributes pages to only one set. To the best of our knowledge, this was not considered in PubLayNet or DocBank. To quantify how this affects model performance, we trained and evaluated a Mask R-CNN R50 model on a modified dataset version. Here, the train-, test- and validation-sets were obtained by a randomised draw over the individual pages. As can be seen in Table 4, the difference in model performance is surprisingly large: page-wise splitting gains \u02dc 10% in mAP over the document-wise splitting. 
Thus, random page-wise splitting of DocLayNet can easily lead to accidental overestimation of model performance and should be avoided.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.3337707519531, 173.20875549316406, 418.5477600097656, 183.94322204589844], "page": 7, "span": [0, 18], "__ref_s3_data": null}], "text": "Dataset Comparison", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [316.7283935546875, 83.24566650390625, 559.1881713867188, 168.86700439453125], "page": 7, "span": [0, 521], "__ref_s3_data": null}], "text": "Throughout this paper, we claim that DocLayNet's wider variety of document layouts leads to more robust layout detection models. In Table 5, we provide evidence for that. We trained models on each of the available datasets (PubLayNet, DocBank and DocLayNet) and evaluated them on the test sets of the other datasets. Due to the different label sets and annotation styles, a direct comparison is not possible. Hence, we focussed on the common labels among the datasets. Between PubLayNet and DocLayNet, these are Picture ,", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.288330078125, 722.9171142578125, 558.4634399414062, 732.134033203125], "page": 8, "span": [0, 130], "__ref_s3_data": null}], "text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA Birgit Pfitzmann, Christoph Auer, Michele Dolfi, Ahmed S. Nassar, and Peter Staar", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [52.89757537841797, 641.85888671875, 295.648681640625, 705.7824096679688], "page": 8, "span": [0, 298], "__ref_s3_data": null}], "text": "Table 5: Prediction Performance (mAP@0.5-0.95) of a Mask R-CNN R50 network across the PubLayNet, DocBank & DocLayNet data-sets. By evaluating on common label classes of each dataset, we observe that the DocLayNet-trained model has much less pronounced variations in performance across all datasets.", "type": "paragraph", "name": "Text", "font": null}, {"name": "Table", "type": "table", "$ref": "#/tables/4"}, {"prov": [{"bbox": [53.279537200927734, 348.85986328125, 294.6396789550781, 401.5162658691406], "page": 8, "span": [0, 295], "__ref_s3_data": null}], "text": "Section-header , Table and Text . Before training, we either mapped or excluded DocLayNet's other labels as specified in table 3, and also PubLayNet's List to Text . Note that the different clustering of lists (by list-element vs. whole list objects) naturally decreases the mAP score for Text .", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.04817581176758, 205.98951721191406, 295.55908203125, 346.9607849121094], "page": 8, "span": [0, 793], "__ref_s3_data": null}], "text": "For comparison of DocBank with DocLayNet, we trained only on Picture and Table clusters of each dataset. We had to exclude Text because successive paragraphs are often grouped together into a single object in DocBank. This paragraph grouping is incompatible with the individual paragraphs of DocLayNet. As can be seen in Table 5, DocLayNet trained models yield better performance compared to the previous datasets. It is noteworthy that the models trained on PubLayNet and DocBank perform very well on their own test set, but have a much lower performance on the foreign datasets. While this also applies to DocLayNet, the difference is far less pronounced. 
Thus we conclude that DocLayNet trained models are overall more robust and will produce better results for challenging, unseen layouts.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.05388259887695, 176.33340454101562, 156.02235412597656, 187.29098510742188], "page": 8, "span": [0, 19], "__ref_s3_data": null}], "text": "Example Predictions", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [53.07720184326172, 86.64982604980469, 295.5584411621094, 172.26492309570312], "page": 8, "span": [0, 481], "__ref_s3_data": null}], "text": "To conclude this section, we illustrate the quality of layout predictions one can expect from DocLayNet-trained models by providing a selection of examples without any further post-processing applied. Figure 6 shows selected layout predictions on pages from the test-set of DocLayNet. Results look decent in general across document categories, however one can also observe mistakes such as overlapping clusters of different classes, or entirely missing boxes due to low confidence.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.4961853027344, 695.8309936523438, 405.7296142578125, 706.4700317382812], "page": 8, "span": [0, 12], "__ref_s3_data": null}], "text": "6 CONCLUSION", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [317.0487976074219, 605.4117431640625, 559.7137451171875, 691.6207275390625], "page": 8, "span": [0, 507], "__ref_s3_data": null}], "text": "In this paper, we presented the DocLayNet dataset. It provides the document conversion and layout analysis research community a new and challenging dataset to improve and fine-tune novel ML methods on. In contrast to many other datasets, DocLayNet was created by human annotation in order to obtain reliable layout ground-truth on a wide variety of publication- and typesettingstyles. Including a large proportion of documents outside the scientific publishing domain adds significant value in this respect.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.03955078125, 506.7440185546875, 559.717041015625, 603.672607421875], "page": 8, "span": [0, 573], "__ref_s3_data": null}], "text": "From the dataset, we have derived on the one hand reference metrics for human performance on document-layout annotation (through double and triple annotations) and on the other hand evaluated the baseline performance of commonly used object detection methods. We also illustrated the impact of various dataset-related aspects on model performance through data-ablation experiments, both from a size and class-label perspective. 
Last but not least, we compared the accuracy of models trained on other public datasets and showed that DocLayNet trained models are more robust.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.1865234375, 474.2935791015625, 558.6325073242188, 505.4895324707031], "page": 8, "span": [0, 188], "__ref_s3_data": null}], "text": "To date, there is still a significant gap between human and ML accuracy on the layout interpretation task, and we hope that this work will inspire the research community to close that gap.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [317.4455871582031, 446.5990295410156, 387.5806579589844, 457.4013366699219], "page": 8, "span": [0, 10], "__ref_s3_data": null}], "text": "REFERENCES", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [320.5848693847656, 420.8371276855469, 559.0187377929688, 444.4063415527344], "page": 8, "span": [0, 191], "__ref_s3_data": null}], "text": "[1] Max G\u00f6bel, Tamir Hassan, Ermelinda Oro, and Giorgio Orsi. Icdar 2013 table competition. In 2013 12th International Conference on Document Analysis and Recognition , pages 1449-1453, 2013.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [320.76806640625, 388.9571228027344, 559.7276000976562, 420.2254333496094], "page": 8, "span": [0, 279], "__ref_s3_data": null}], "text": "[2] Christian Clausner, Apostolos Antonacopoulos, and Stefan Pletschacher. Icdar2017 competition on recognition of documents with complex layouts rdcl2017. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR) , volume 01, pages 1404-1410, 2017.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [320.58111572265625, 364.88128662109375, 558.4269409179688, 388.028076171875], "page": 8, "span": [0, 213], "__ref_s3_data": null}], "text": "[3] Herv\u00e9 D\u00e9jean, Jean-Luc Meunier, Liangcai Gao, Yilun Huang, Yu Fang, Florian Kleber, and Eva-Maria Lang. ICDAR 2019 Competition on Table Detection and Recognition (cTDaR), April 2019. http://sac.founderit.com/.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [320.72210693359375, 333.173095703125, 559.3787231445312, 364.17962646484375], "page": 8, "span": [0, 251], "__ref_s3_data": null}], "text": "[4] Antonio Jimeno Yepes, Peter Zhong, and Douglas Burdick. Competition on scientific literature parsing. In Proceedings of the International Conference on Document Analysis and Recognition , ICDAR, pages 605-617. LNCS 12824, SpringerVerlag, sep 2021.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [320.47723388671875, 300.9960021972656, 559.2555541992188, 332.2057800292969], "page": 8, "span": [0, 261], "__ref_s3_data": null}], "text": "[5] Logan Markewich, Hao Zhang, Yubin Xing, Navid Lambert-Shirzad, Jiang Zhexin, Roy Lee, Zhi Li, and Seok-Bum Ko. Segmentation for document layout analysis: not dead yet. International Journal on Document Analysis and Recognition (IJDAR) , pages 1-11, 01 2022.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [320.7210998535156, 277.3751220703125, 558.6044921875, 300.1542053222656], "page": 8, "span": [0, 235], "__ref_s3_data": null}], "text": "[6] Xu Zhong, Jianbin Tang, and Antonio Jimeno-Yepes. Publaynet: Largest dataset ever for document layout analysis. 
In Proceedings of the International Conference on Document Analysis and Recognition , ICDAR, pages 1015-1022, sep 2019.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [320.7048034667969, 237.53111267089844, 559.0962524414062, 276.57550048828125], "page": 8, "span": [0, 316], "__ref_s3_data": null}], "text": "[7] Minghao Li, Yiheng Xu, Lei Cui, Shaohan Huang, Furu Wei, Zhoujun Li, and Ming Zhou. Docbank: A benchmark dataset for document layout analysis. In Proceedings of the 28th International Conference on Computational Linguistics , COLING, pages 949-960. International Committee on Computational Linguistics, dec 2020.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [320.6175537109375, 213.6141357421875, 558.9022216796875, 236.84490966796875], "page": 8, "span": [0, 172], "__ref_s3_data": null}], "text": "[8] Riaz Ahmad, Muhammad Tanvir Afzal, and M. Qadir. Information extraction from pdf sources based on rule-based system using integrated formats. In SemWebEval@ESWC , 2016.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [320.695556640625, 181.74110412597656, 559.2744750976562, 212.77767944335938], "page": 8, "span": [0, 271], "__ref_s3_data": null}], "text": "[9] Ross B. Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition , CVPR, pages 580-587. IEEE Computer Society, jun 2014.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [317.74908447265625, 165.5072479248047, 558.8585205078125, 181.0753173828125], "page": 8, "span": [0, 149], "__ref_s3_data": null}], "text": "[10] Ross B. Girshick. Fast R-CNN. In 2015 IEEE International Conference on Computer Vision , ICCV, pages 1440-1448. IEEE Computer Society, dec 2015.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [317.71527099609375, 141.8831329345703, 558.4170532226562, 164.63047790527344], "page": 8, "span": [0, 227], "__ref_s3_data": null}], "text": "[11] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence , 39(6):1137-1149, 2017.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [317.5010986328125, 117.60646057128906, 559.278076171875, 141.50643920898438], "page": 8, "span": [0, 192], "__ref_s3_data": null}], "text": "[12] Kaiming He, Georgia Gkioxari, Piotr Doll\u00e1r, and Ross B. Girshick. Mask R-CNN. In IEEE International Conference on Computer Vision , ICCV, pages 2980-2988. 
IEEE Computer Society, Oct 2017.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [317.4837341308594, 86.09910583496094, 559.0487670898438, 116.94155883789062], "page": 8, "span": [0, 305], "__ref_s3_data": null}], "text": "[13] Glenn Jocher, Alex Stoken, Ayush Chaurasia, Jirka Borovec, NanoCode012, TaoXie, Yonghye Kwon, Kalen Michael, Liu Changyu, Jiacong Fang, Abhiram V, Laughing, tkianai, yxNONG, Piotr Skalski, Adam Hogan, Jebastin Nadar, imyhxy, Lorenzo Mammana, Alex Wang, Cristi Fati, Diego Montes, Jan Hajek, Laurentiu", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [53.55940246582031, 722.9329223632812, 347.0838623046875, 731.9924926757812], "page": 9, "span": [0, 71], "__ref_s3_data": null}], "text": "DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [365.1275329589844, 723.0497436523438, 558.905029296875, 731.96435546875], "page": 9, "span": [0, 48], "__ref_s3_data": null}], "text": "KDD \u201922, August 14-18, 2022, Washington, DC, USA", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [53.39582824707031, 285.65704345703125, 559.807861328125, 328.056396484375], "page": 9, "span": [0, 386], "__ref_s3_data": null}], "text": "Figure 6: Example layout predictions on selected pages from the DocLayNet test-set. (A, D) exhibit favourable results on coloured backgrounds. (B, C) show accurate list-item and paragraph differentiation despite densely-spaced lines. (E) demonstrates good table and figure distinction. (F) shows predictions on a Chinese patent with multiple overlaps, label confusion and missing boxes.", "type": "caption", "name": "Caption", "font": null}, {"name": "Picture", "type": "figure", "$ref": "#/figures/5"}, {"prov": [{"bbox": [68.69137573242188, 242.22409057617188, 295.22406005859375, 265.4314270019531], "page": 9, "span": [0, 195], "__ref_s3_data": null}], "text": "Diaconu, Mai Thanh Minh, Marc, albinxavi, fatih, oleg, and wanghao yang. ultralytics/yolov5: v6.0 - yolov5n nano models, roboflow integration, tensorflow export, opencv dnn support, October 2021.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [53.56020736694336, 218.56314086914062, 295.12176513671875, 241.63282775878906], "page": 9, "span": [0, 190], "__ref_s3_data": null}], "text": "[14] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. CoRR , abs/2005.12872, 2020.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [53.61275863647461, 202.62213134765625, 294.3653869628906, 217.57615661621094], "page": 9, "span": [0, 132], "__ref_s3_data": null}], "text": "[15] Mingxing Tan, Ruoming Pang, and Quoc V. Le. Efficientdet: Scalable and efficient object detection. CoRR , abs/1911.09070, 2019.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [53.668941497802734, 178.71910095214844, 295.2226257324219, 201.57443237304688], "page": 9, "span": [0, 219], "__ref_s3_data": null}], "text": "[16] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C. Lawrence Zitnick. 
Microsoft COCO: common objects in context, 2014.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [53.54263687133789, 162.77911376953125, 295.1200866699219, 178.3345947265625], "page": 9, "span": [0, 100], "__ref_s3_data": null}], "text": "[17] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2, 2019.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [53.569610595703125, 122.92810821533203, 294.8847351074219, 162.23497009277344], "page": 9, "span": [0, 339], "__ref_s3_data": null}], "text": "[18] Nikolaos Livathinos, Cesar Berrospi, Maksym Lysak, Viktor Kuropiatnyk, Ahmed Nassar, Andre Carvalho, Michele Dolfi, Christoph Auer, Kasper Dinkla, and Peter W. J. Staar. Robust pdf document conversion using recurrent neural networks. In Proceedings of the 35th Conference on Artificial Intelligence , AAAI, pages 1513715145, feb 2021.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [53.4610595703125, 82.67352294921875, 295.22174072265625, 122.19474029541016], "page": 9, "span": [0, 336], "__ref_s3_data": null}], "text": "[19] Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. Layoutlm: Pre-training of text and layout for document image understanding. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , KDD, pages 1192-1200, New York, USA, 2020. Association for Computing Machinery.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [317.6278076171875, 249.62921142578125, 559.0263671875, 265.5798645019531], "page": 9, "span": [0, 153], "__ref_s3_data": null}], "text": "[20] Shoubin Li, Xuyan Ma, Shuaiqun Pan, Jun Hu, Lin Shi, and Qing Wang. Vtlayout: Fusion of visual and text features for document layout analysis, 2021.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [317.53033447265625, 226.54010009765625, 559.0158081054688, 249.28826904296875], "page": 9, "span": [0, 188], "__ref_s3_data": null}], "text": "[21] Peng Zhang, Can Li, Liang Qiao, Zhanzhan Cheng, Shiliang Pu, Yi Niu, and Fei Wu. Vsr: A unified framework for document layout analysis combining vision, semantics and relations, 2021.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [317.6616516113281, 194.28546142578125, 559.275390625, 225.54457092285156], "page": 9, "span": [0, 290], "__ref_s3_data": null}], "text": "[22] Peter W J Staar, Michele Dolfi, Christoph Auer, and Costas Bekas. Corpus conversion service: A machine learning platform to ingest documents at scale. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , KDD, pages 774-782. ACM, 2018.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [317.65606689453125, 178.71212768554688, 559.3782958984375, 193.30506896972656], "page": 9, "span": [0, 138], "__ref_s3_data": null}], "text": "[23] Connor Shorten and Taghi M. Khoshgoftaar. A survey on image data augmentation for deep learning. 
Journal of Big Data , 6(1):60, 2019.", "type": "paragraph", "name": "List-item", "font": null}], "figures": [{"prov": [{"bbox": [324.3027038574219, 266.1221618652344, 554.91796875, 543.5838623046875], "page": 1, "span": [0, 84], "__ref_s3_data": null}], "text": "Figure 1: Four examples of complex page layouts across different document categories", "type": "figure", "bounding-box": null}, {"prov": [{"bbox": [88.16680145263672, 569.726806640625, 264.2818298339844, 698.8894653320312], "page": 3, "span": [0, 69], "__ref_s3_data": null}], "text": "Figure 2: Distribution of DocLayNet pages across document categories.", "type": "figure", "bounding-box": null}, {"prov": [{"bbox": [53.179771423339844, 250.80191040039062, 295.3565368652344, 481.6382141113281], "page": 4, "span": [0, 281], "__ref_s3_data": null}], "text": "Figure 3: Corpus Conversion Service annotation user interface. The PDF page is shown in the background, with overlaid text-cells (in darker shades). The annotation boxes can be drawn by dragging a rectangle over each segment with the respective label from the palette on the right.", "type": "figure", "bounding-box": null}, {"prov": [{"bbox": [315.8857116699219, 331.43994140625, 559.6527709960938, 707.0224609375], "page": 5, "span": [0, 173], "__ref_s3_data": null}], "text": "Figure 4: Examples of plausible annotation alternatives for the same page. Criteria in our annotation guideline can resolve cases A to C, while the case D remains ambiguous.", "type": "figure", "bounding-box": null}, {"prov": [{"bbox": [322.7086486816406, 531.372314453125, 553.7246704101562, 701.6975708007812], "page": 6, "span": [0, 329], "__ref_s3_data": null}], "text": "Figure 5: Prediction performance (mAP@0.5-0.95) of a Mask R-CNN network with ResNet50 backbone trained on increasing fractions of the DocLayNet dataset. The learning curve flattens around the 80% mark, indicating that increasing the size of the DocLayNet dataset with similar data will not yield significantly better predictions.", "type": "figure", "bounding-box": null}, {"prov": [{"bbox": [53.59891891479492, 343.73516845703125, 554.9424438476562, 708.443115234375], "page": 9, "span": [0, 386], "__ref_s3_data": null}], "text": "Figure 6: Example layout predictions on selected pages from the DocLayNet test-set. (A, D) exhibit favourable results on coloured backgrounds. (B, C) show accurate list-item and paragraph differentiation despite densely-spaced lines. (E) demonstrates good table and figure distinction. (F) shows predictions on a Chinese patent with multiple overlaps, label confusion and missing boxes.", "type": "figure", "bounding-box": null}], "tables": [{"prov": [{"bbox": [98.96420288085938, 498.30108642578125, 512.7739868164062, 654.1231689453125], "page": 4, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 1: DocLayNet dataset overview. Along with the frequency of each class label, we present the relative occurrence (as % of row \"Total\") in the train, test and validation sets. 
The inter-annotator agreement is computed as the mAP@0.5-0.95 metric between pairwise annotations from the triple-annotated pages, from which we obtain accuracy ranges.", "type": "table", "#-cols": 12, "#-rows": 14, "data": [[{"bbox": null, "spans": [[0, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": null, "spans": [[0, 1]], "text": "", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [233.94400024414062, 643.40185546875, 270.042724609375, 651.7764892578125], "spans": [[0, 2], [0, 3], [0, 4]], "text": "% of Total", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [233.94400024414062, 643.40185546875, 270.042724609375, 651.7764892578125], "spans": [[0, 2], [0, 3], [0, 4]], "text": "% of Total", "type": "col_header", "col": 3, "col-header": false, "col-span": [2, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [233.94400024414062, 643.40185546875, 270.042724609375, 651.7764892578125], "spans": [[0, 2], [0, 3], [0, 4]], "text": "% of Total", "type": "col_header", "col": 4, "col-header": false, "col-span": [2, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [329.04998779296875, 643.40185546875, 483.39764404296875, 651.7764892578125], "spans": [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [0, 10], [0, 11]], "text": "triple inter-annotator mAP @ 0.5-0.95 (%)", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 12], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [329.04998779296875, 643.40185546875, 483.39764404296875, 651.7764892578125], "spans": [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [0, 10], [0, 11]], "text": "triple inter-annotator mAP @ 0.5-0.95 (%)", "type": "col_header", "col": 6, "col-header": false, "col-span": [5, 12], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [329.04998779296875, 643.40185546875, 483.39764404296875, 651.7764892578125], "spans": [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [0, 10], [0, 11]], "text": "triple inter-annotator mAP @ 0.5-0.95 (%)", "type": "col_header", "col": 7, "col-header": false, "col-span": [5, 12], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [329.04998779296875, 643.40185546875, 483.39764404296875, 651.7764892578125], "spans": [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [0, 10], [0, 11]], "text": "triple inter-annotator mAP @ 0.5-0.95 (%)", "type": "col_header", "col": 8, "col-header": false, "col-span": [5, 12], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [329.04998779296875, 643.40185546875, 483.39764404296875, 651.7764892578125], "spans": [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [0, 10], [0, 11]], "text": "triple inter-annotator mAP @ 0.5-0.95 (%)", "type": "col_header", "col": 9, "col-header": false, "col-span": [5, 12], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [329.04998779296875, 643.40185546875, 483.39764404296875, 651.7764892578125], "spans": [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [0, 10], [0, 11]], "text": "triple inter-annotator mAP @ 0.5-0.95 (%)", "type": "col_header", "col": 10, "col-header": false, "col-span": [5, 12], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [329.04998779296875, 643.40185546875, 483.39764404296875, 651.7764892578125], "spans": [[0, 5], [0, 6], [0, 7], [0, 8], [0, 9], [0, 10], [0, 11]], "text": "triple 
inter-annotator mAP @ 0.5-0.95 (%)", "type": "col_header", "col": 11, "col-header": false, "col-span": [5, 12], "row": 0, "row-header": false, "row-span": [0, 1]}], [{"bbox": [104.82499694824219, 632.4428100585938, 141.7127685546875, 640.8174438476562], "spans": [[1, 0]], "text": "class label", "type": "col_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [175.94700622558594, 632.4428100585938, 198.7126922607422, 640.8174438476562], "spans": [[1, 1]], "text": "Count", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [213.7949981689453, 632.4428100585938, 233.69143676757812, 640.8174438476562], "spans": [[1, 2]], "text": "Train", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [249.37367248535156, 632.4428100585938, 264.5, 640.8174438476562], "spans": [[1, 3]], "text": "Test", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [283.5356750488281, 632.4428100585938, 295.3085632324219, 640.8174438476562], "spans": [[1, 4]], "text": "Val", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [314.0150146484375, 632.4428100585938, 324.9809265136719, 640.8174438476562], "spans": [[1, 5]], "text": "All", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 6], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [343.0123596191406, 632.4428100585938, 354.6507568359375, 640.8174438476562], "spans": [[1, 6]], "text": "Fin", "type": "col_header", "col": 6, "col-header": false, "col-span": [6, 7], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [367.84033203125, 632.4428100585938, 384.3205871582031, 640.8174438476562], "spans": [[1, 7]], "text": "Man", "type": "col_header", "col": 7, "col-header": false, "col-span": [7, 8], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [407.5435791015625, 632.4428100585938, 418.1597900390625, 640.8174438476562], "spans": [[1, 8]], "text": "Sci", "type": "col_header", "col": 8, "col-header": false, "col-span": [8, 9], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [432.2998046875, 632.4428100585938, 447.8296203613281, 640.8174438476562], "spans": [[1, 9]], "text": "Law", "type": "col_header", "col": 9, "col-header": false, "col-span": [9, 10], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [465.7265625, 632.4428100585938, 477.5084228515625, 640.8174438476562], "spans": [[1, 10]], "text": "Pat", "type": "col_header", "col": 10, "col-header": false, "col-span": [10, 11], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [493.52239990234375, 632.4428100585938, 507.17822265625, 640.8174438476562], "spans": [[1, 11]], "text": "Ten", "type": "col_header", "col": 11, "col-header": false, "col-span": [11, 12], "row": 1, "row-header": false, "row-span": [1, 2]}], [{"bbox": [104.82499694824219, 621.0858154296875, 134.01063537597656, 629.46044921875], "spans": [[2, 0]], "text": "Caption", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [177.86599731445312, 621.0858154296875, 198.71287536621094, 629.46044921875], "spans": [[2, 1]], "text": "22524", "type": "body", "col": 1, "col-header": false, "col-span": 
[1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [219.21099853515625, 621.0858154296875, 233.69174194335938, 629.46044921875], "spans": [[2, 2]], "text": "2.04", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [250.01956176757812, 621.0858154296875, 264.50030517578125, 629.46044921875], "spans": [[2, 3]], "text": "1.77", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [280.828125, 621.0858154296875, 295.3088684082031, 629.46044921875], "spans": [[2, 4]], "text": "2.32", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [305.27301025390625, 621.0858154296875, 324.9811706542969, 629.46044921875], "spans": [[2, 5]], "text": "84-89", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [334.9428405761719, 621.0858154296875, 354.6510009765625, 629.46044921875], "spans": [[2, 6]], "text": "40-61", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [364.6126708984375, 621.0858154296875, 384.3208312988281, 629.46044921875], "spans": [[2, 7]], "text": "86-92", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [398.4518737792969, 621.0858154296875, 418.1600341796875, 629.46044921875], "spans": [[2, 8]], "text": "94-99", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [428.1217041015625, 621.0858154296875, 447.8298645019531, 629.46044921875], "spans": [[2, 9]], "text": "95-99", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [457.8005065917969, 621.0858154296875, 477.5086669921875, 629.46044921875], "spans": [[2, 10]], "text": "69-78", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [495.32489013671875, 621.0858154296875, 507.178466796875, 629.46044921875], "spans": [[2, 11]], "text": "n/a", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [104.82499694824219, 610.1268310546875, 137.3282012939453, 618.50146484375], "spans": [[3, 0]], "text": "Footnote", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [182.03500366210938, 610.1268310546875, 198.71250915527344, 618.50146484375], "spans": [[3, 1]], "text": "6318", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [219.21099853515625, 610.1268310546875, 233.69174194335938, 618.50146484375], "spans": [[3, 2]], "text": "0.60", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [250.01956176757812, 610.1268310546875, 264.50030517578125, 618.50146484375], "spans": [[3, 3]], "text": "0.31", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [280.828125, 610.1268310546875, 295.3088684082031, 618.50146484375], "spans": [[3, 4]], "text": "0.58", 
"type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [305.27301025390625, 610.1268310546875, 324.9811706542969, 618.50146484375], "spans": [[3, 5]], "text": "83-91", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [342.7973937988281, 610.1268310546875, 354.6509704589844, 618.50146484375], "spans": [[3, 6]], "text": "n/a", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [371.8126525878906, 610.1268310546875, 384.3207702636719, 618.50146484375], "spans": [[3, 7]], "text": "100", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [398.4518127441406, 610.1268310546875, 418.15997314453125, 618.50146484375], "spans": [[3, 8]], "text": "62-88", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [428.12164306640625, 610.1268310546875, 447.8298034667969, 618.50146484375], "spans": [[3, 9]], "text": "85-94", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [465.6549987792969, 610.1268310546875, 477.5085754394531, 618.50146484375], "spans": [[3, 10]], "text": "n/a", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [487.4702453613281, 610.1268310546875, 507.17840576171875, 618.50146484375], "spans": [[3, 11]], "text": "82-97", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": [104.82499694824219, 599.1678466796875, 135.33766174316406, 607.54248046875], "spans": [[4, 0]], "text": "Formula", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [177.86599731445312, 599.1678466796875, 198.71287536621094, 607.54248046875], "spans": [[4, 1]], "text": "25027", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [219.21099853515625, 599.1678466796875, 233.69174194335938, 607.54248046875], "spans": [[4, 2]], "text": "2.25", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [250.01956176757812, 599.1678466796875, 264.50030517578125, 607.54248046875], "spans": [[4, 3]], "text": "1.90", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [280.828125, 599.1678466796875, 295.3088684082031, 607.54248046875], "spans": [[4, 4]], "text": "2.96", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [305.27301025390625, 599.1678466796875, 324.9811706542969, 607.54248046875], "spans": [[4, 5]], "text": "83-85", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [342.7973937988281, 599.1678466796875, 354.6509704589844, 607.54248046875], "spans": [[4, 6]], "text": "n/a", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [372.4671936035156, 599.1678466796875, 
384.3207702636719, 607.54248046875], "spans": [[4, 7]], "text": "n/a", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [398.4518127441406, 599.1678466796875, 418.15997314453125, 607.54248046875], "spans": [[4, 8]], "text": "84-87", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [428.12164306640625, 599.1678466796875, 447.8298034667969, 607.54248046875], "spans": [[4, 9]], "text": "86-96", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [465.6549987792969, 599.1678466796875, 477.5085754394531, 607.54248046875], "spans": [[4, 10]], "text": "n/a", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [495.3247985839844, 599.1678466796875, 507.1783752441406, 607.54248046875], "spans": [[4, 11]], "text": "n/a", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": [104.82499694824219, 588.2088012695312, 137.7047882080078, 596.5834350585938], "spans": [[5, 0]], "text": "List-item", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [173.69700622558594, 588.2088012695312, 198.7132568359375, 596.5834350585938], "spans": [[5, 1]], "text": "185660", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [215.04200744628906, 588.2088012695312, 233.69212341308594, 596.5834350585938], "spans": [[5, 2]], "text": "17.19", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [245.85055541992188, 588.2088012695312, 264.50067138671875, 596.5834350585938], "spans": [[5, 3]], "text": "13.34", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [276.65911865234375, 588.2088012695312, 295.3092346191406, 596.5834350585938], "spans": [[5, 4]], "text": "15.82", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [305.27301025390625, 588.2088012695312, 324.9811706542969, 596.5834350585938], "spans": [[5, 5]], "text": "87-88", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [334.9428405761719, 588.2088012695312, 354.6510009765625, 596.5834350585938], "spans": [[5, 6]], "text": "74-83", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [364.6126708984375, 588.2088012695312, 384.3208312988281, 596.5834350585938], "spans": [[5, 7]], "text": "90-92", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [398.4518737792969, 588.2088012695312, 418.1600341796875, 596.5834350585938], "spans": [[5, 8]], "text": "97-97", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [428.1217041015625, 588.2088012695312, 447.8298645019531, 596.5834350585938], "spans": [[5, 9]], "text": "81-85", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 
5, "row-header": false, "row-span": [5, 6]}, {"bbox": [457.8005065917969, 588.2088012695312, 477.5086669921875, 596.5834350585938], "spans": [[5, 10]], "text": "75-88", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [487.4703369140625, 588.2088012695312, 507.1784973144531, 596.5834350585938], "spans": [[5, 11]], "text": "93-95", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [104.82499694824219, 577.2498168945312, 147.3526153564453, 585.6244506835938], "spans": [[6, 0]], "text": "Page-footer", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [177.86599731445312, 577.2498168945312, 198.71287536621094, 585.6244506835938], "spans": [[6, 1]], "text": "70878", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [219.21099853515625, 577.2498168945312, 233.69174194335938, 585.6244506835938], "spans": [[6, 2]], "text": "6.51", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [250.01956176757812, 577.2498168945312, 264.50030517578125, 585.6244506835938], "spans": [[6, 3]], "text": "5.58", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [280.828125, 577.2498168945312, 295.3088684082031, 585.6244506835938], "spans": [[6, 4]], "text": "6.00", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [305.27301025390625, 577.2498168945312, 324.9811706542969, 585.6244506835938], "spans": [[6, 5]], "text": "93-94", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [334.9428405761719, 577.2498168945312, 354.6510009765625, 585.6244506835938], "spans": [[6, 6]], "text": "88-90", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [364.6126708984375, 577.2498168945312, 384.3208312988281, 585.6244506835938], "spans": [[6, 7]], "text": "95-96", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [405.6518859863281, 577.2498168945312, 418.1600036621094, 585.6244506835938], "spans": [[6, 8]], "text": "100", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [428.1216735839844, 577.2498168945312, 447.829833984375, 585.6244506835938], "spans": [[6, 9]], "text": "92-97", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [465.00048828125, 577.2498168945312, 477.50860595703125, 585.6244506835938], "spans": [[6, 10]], "text": "100", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [487.47027587890625, 577.2498168945312, 507.1784362792969, 585.6244506835938], "spans": [[6, 11]], "text": "96-98", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 6, "row-header": false, "row-span": [6, 7]}], [{"bbox": [104.82499694824219, 566.2908325195312, 150.10531616210938, 574.6654663085938], 
"spans": [[7, 0]], "text": "Page-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [177.86599731445312, 566.2908325195312, 198.71287536621094, 574.6654663085938], "spans": [[7, 1]], "text": "58022", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [219.21099853515625, 566.2908325195312, 233.69174194335938, 574.6654663085938], "spans": [[7, 2]], "text": "5.10", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [250.01956176757812, 566.2908325195312, 264.50030517578125, 574.6654663085938], "spans": [[7, 3]], "text": "6.70", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [280.828125, 566.2908325195312, 295.3088684082031, 574.6654663085938], "spans": [[7, 4]], "text": "5.06", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [305.27301025390625, 566.2908325195312, 324.9811706542969, 574.6654663085938], "spans": [[7, 5]], "text": "85-89", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [334.9428405761719, 566.2908325195312, 354.6510009765625, 574.6654663085938], "spans": [[7, 6]], "text": "66-76", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [364.6126708984375, 566.2908325195312, 384.3208312988281, 574.6654663085938], "spans": [[7, 7]], "text": "90-94", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [394.2825012207031, 566.2908325195312, 418.1600341796875, 574.6654663085938], "spans": [[7, 8]], "text": "98-100", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [428.1217041015625, 566.2908325195312, 447.8298645019531, 574.6654663085938], "spans": [[7, 9]], "text": "91-92", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [457.8005065917969, 566.2908325195312, 477.5086669921875, 574.6654663085938], "spans": [[7, 10]], "text": "97-99", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [487.4703369140625, 566.2908325195312, 507.1784973144531, 574.6654663085938], "spans": [[7, 11]], "text": "81-86", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 7, "row-header": false, "row-span": [7, 8]}], [{"bbox": [104.82499694824219, 555.3318481445312, 130.80963134765625, 563.7064819335938], "spans": [[8, 0]], "text": "Picture", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [177.86599731445312, 555.3318481445312, 198.71287536621094, 563.7064819335938], "spans": [[8, 1]], "text": "45976", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [219.21099853515625, 555.3318481445312, 233.69174194335938, 563.7064819335938], "spans": [[8, 2]], "text": "4.21", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 8, "row-header": 
false, "row-span": [8, 9]}, {"bbox": [250.01956176757812, 555.3318481445312, 264.50030517578125, 563.7064819335938], "spans": [[8, 3]], "text": "2.78", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [280.828125, 555.3318481445312, 295.3088684082031, 563.7064819335938], "spans": [[8, 4]], "text": "5.31", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [305.27301025390625, 555.3318481445312, 324.9811706542969, 563.7064819335938], "spans": [[8, 5]], "text": "69-71", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [334.9428405761719, 555.3318481445312, 354.6510009765625, 563.7064819335938], "spans": [[8, 6]], "text": "56-59", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [364.6126708984375, 555.3318481445312, 384.3208312988281, 563.7064819335938], "spans": [[8, 7]], "text": "82-86", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [398.4518737792969, 555.3318481445312, 418.1600341796875, 563.7064819335938], "spans": [[8, 8]], "text": "69-82", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [428.1217041015625, 555.3318481445312, 447.8298645019531, 563.7064819335938], "spans": [[8, 9]], "text": "80-95", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [457.8005065917969, 555.3318481445312, 477.5086669921875, 563.7064819335938], "spans": [[8, 10]], "text": "66-71", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [487.4703369140625, 555.3318481445312, 507.1784973144531, 563.7064819335938], "spans": [[8, 11]], "text": "59-76", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 8, "row-header": false, "row-span": [8, 9]}], [{"bbox": [104.82499694824219, 544.372802734375, 159.5648651123047, 552.7474365234375], "spans": [[9, 0]], "text": "Section-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [173.69700622558594, 544.372802734375, 198.7132568359375, 552.7474365234375], "spans": [[9, 1]], "text": "142884", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [215.04200744628906, 544.372802734375, 233.69212341308594, 552.7474365234375], "spans": [[9, 2]], "text": "12.60", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [245.85055541992188, 544.372802734375, 264.50067138671875, 552.7474365234375], "spans": [[9, 3]], "text": "15.77", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [276.65911865234375, 544.372802734375, 295.3092346191406, 552.7474365234375], "spans": [[9, 4]], "text": "12.85", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [305.27301025390625, 544.372802734375, 324.9811706542969, 552.7474365234375], "spans": [[9, 5]], 
"text": "83-84", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [334.9428405761719, 544.372802734375, 354.6510009765625, 552.7474365234375], "spans": [[9, 6]], "text": "76-81", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [364.6126708984375, 544.372802734375, 384.3208312988281, 552.7474365234375], "spans": [[9, 7]], "text": "90-92", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [398.4518737792969, 544.372802734375, 418.1600341796875, 552.7474365234375], "spans": [[9, 8]], "text": "94-95", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [428.1217041015625, 544.372802734375, 447.8298645019531, 552.7474365234375], "spans": [[9, 9]], "text": "87-94", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [457.8005065917969, 544.372802734375, 477.5086669921875, 552.7474365234375], "spans": [[9, 10]], "text": "69-73", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [487.4703369140625, 544.372802734375, 507.1784973144531, 552.7474365234375], "spans": [[9, 11]], "text": "78-86", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 9, "row-header": false, "row-span": [9, 10]}], [{"bbox": [104.82499694824219, 533.413818359375, 124.63176727294922, 541.7884521484375], "spans": [[10, 0]], "text": "Table", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [177.86599731445312, 533.413818359375, 198.71287536621094, 541.7884521484375], "spans": [[10, 1]], "text": "34733", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [219.21099853515625, 533.413818359375, 233.69174194335938, 541.7884521484375], "spans": [[10, 2]], "text": "3.20", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [250.01956176757812, 533.413818359375, 264.50030517578125, 541.7884521484375], "spans": [[10, 3]], "text": "2.27", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [280.828125, 533.413818359375, 295.3088684082031, 541.7884521484375], "spans": [[10, 4]], "text": "3.60", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [305.27301025390625, 533.413818359375, 324.9811706542969, 541.7884521484375], "spans": [[10, 5]], "text": "77-81", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [334.9428405761719, 533.413818359375, 354.6510009765625, 541.7884521484375], "spans": [[10, 6]], "text": "75-80", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [364.6126708984375, 533.413818359375, 384.3208312988281, 541.7884521484375], "spans": [[10, 7]], "text": "83-86", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 10, "row-header": false, "row-span": 
[10, 11]}, {"bbox": [398.4518737792969, 533.413818359375, 418.1600341796875, 541.7884521484375], "spans": [[10, 8]], "text": "98-99", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [428.1217041015625, 533.413818359375, 447.8298645019531, 541.7884521484375], "spans": [[10, 9]], "text": "58-80", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [457.8005065917969, 533.413818359375, 477.5086669921875, 541.7884521484375], "spans": [[10, 10]], "text": "79-84", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [487.4703369140625, 533.413818359375, 507.1784973144531, 541.7884521484375], "spans": [[10, 11]], "text": "70-85", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 10, "row-header": false, "row-span": [10, 11]}], [{"bbox": [104.82499694824219, 522.455810546875, 120.78518676757812, 530.8304443359375], "spans": [[11, 0]], "text": "Text", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [173.69700622558594, 522.455810546875, 198.7132568359375, 530.8304443359375], "spans": [[11, 1]], "text": "510377", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [215.04200744628906, 522.455810546875, 233.69212341308594, 530.8304443359375], "spans": [[11, 2]], "text": "45.82", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [245.85055541992188, 522.455810546875, 264.50067138671875, 530.8304443359375], "spans": [[11, 3]], "text": "49.28", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [276.65911865234375, 522.455810546875, 295.3092346191406, 530.8304443359375], "spans": [[11, 4]], "text": "45.00", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [305.27301025390625, 522.455810546875, 324.9811706542969, 530.8304443359375], "spans": [[11, 5]], "text": "84-86", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [334.9428405761719, 522.455810546875, 354.6510009765625, 530.8304443359375], "spans": [[11, 6]], "text": "81-86", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [364.6126708984375, 522.455810546875, 384.3208312988281, 530.8304443359375], "spans": [[11, 7]], "text": "88-93", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [398.4518737792969, 522.455810546875, 418.1600341796875, 530.8304443359375], "spans": [[11, 8]], "text": "89-93", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [428.1217041015625, 522.455810546875, 447.8298645019531, 530.8304443359375], "spans": [[11, 9]], "text": "87-92", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [457.8005065917969, 522.455810546875, 477.5086669921875, 530.8304443359375], 
"spans": [[11, 10]], "text": "71-79", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [487.4703369140625, 522.455810546875, 507.1784973144531, 530.8304443359375], "spans": [[11, 11]], "text": "87-95", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 11, "row-header": false, "row-span": [11, 12]}], [{"bbox": [104.82499694824219, 511.496826171875, 121.81632995605469, 519.8714599609375], "spans": [[12, 0]], "text": "Title", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [182.03500366210938, 511.496826171875, 198.71250915527344, 519.8714599609375], "spans": [[12, 1]], "text": "5071", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [219.21099853515625, 511.496826171875, 233.69174194335938, 519.8714599609375], "spans": [[12, 2]], "text": "0.47", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [250.01956176757812, 511.496826171875, 264.50030517578125, 519.8714599609375], "spans": [[12, 3]], "text": "0.30", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [280.828125, 511.496826171875, 295.3088684082031, 519.8714599609375], "spans": [[12, 4]], "text": "0.50", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [305.27301025390625, 511.496826171875, 324.9811706542969, 519.8714599609375], "spans": [[12, 5]], "text": "60-72", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [334.9428405761719, 511.496826171875, 354.6510009765625, 519.8714599609375], "spans": [[12, 6]], "text": "24-63", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [364.6126708984375, 511.496826171875, 384.3208312988281, 519.8714599609375], "spans": [[12, 7]], "text": "50-63", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [394.2825012207031, 511.496826171875, 418.1600341796875, 519.8714599609375], "spans": [[12, 8]], "text": "94-100", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [428.1217041015625, 511.496826171875, 447.8298645019531, 519.8714599609375], "spans": [[12, 9]], "text": "82-96", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [457.8005065917969, 511.496826171875, 477.5086669921875, 519.8714599609375], "spans": [[12, 10]], "text": "68-79", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [487.4703369140625, 511.496826171875, 507.1784973144531, 519.8714599609375], "spans": [[12, 11]], "text": "24-56", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 12, "row-header": false, "row-span": [12, 13]}], [{"bbox": [104.82499694824219, 500.1388244628906, 123.43028259277344, 508.5134582519531], "spans": [[13, 0]], "text": "Total", "type": "row_header", "col": 0, "col-header": false, 
"col-span": [0, 1], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [169.52699279785156, 500.1388244628906, 198.71263122558594, 508.5134582519531], "spans": [[13, 1]], "text": "1107470", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [208.6750030517578, 500.1388244628906, 233.69125366210938, 508.5134582519531], "spans": [[13, 2]], "text": "941123", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [243.65292358398438, 500.1388244628906, 264.49981689453125, 508.5134582519531], "spans": [[13, 3]], "text": "99816", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [274.46148681640625, 500.1388244628906, 295.3083801269531, 508.5134582519531], "spans": [[13, 4]], "text": "66531", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [305.27301025390625, 500.1388244628906, 324.9811706542969, 508.5134582519531], "spans": [[13, 5]], "text": "82-83", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [334.9428405761719, 500.1388244628906, 354.6510009765625, 508.5134582519531], "spans": [[13, 6]], "text": "71-74", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [364.6126708984375, 500.1388244628906, 384.3208312988281, 508.5134582519531], "spans": [[13, 7]], "text": "79-81", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [398.4518737792969, 500.1388244628906, 418.1600341796875, 508.5134582519531], "spans": [[13, 8]], "text": "89-94", "type": "body", "col": 8, "col-header": false, "col-span": [8, 9], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [428.1217041015625, 500.1388244628906, 447.8298645019531, 508.5134582519531], "spans": [[13, 9]], "text": "86-91", "type": "body", "col": 9, "col-header": false, "col-span": [9, 10], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [457.8005065917969, 500.1388244628906, 477.5086669921875, 508.5134582519531], "spans": [[13, 10]], "text": "71-76", "type": "body", "col": 10, "col-header": false, "col-span": [10, 11], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [487.4703369140625, 500.1388244628906, 507.1784973144531, 508.5134582519531], "spans": [[13, 11]], "text": "68-85", "type": "body", "col": 11, "col-header": false, "col-span": [11, 12], "row": 13, "row-header": false, "row-span": [13, 14]}]], "model": null, "bounding-box": null}, {"prov": [{"bbox": [61.93328094482422, 440.30438232421875, 285.75616455078125, 596.587158203125], "page": 6, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 2: Prediction performance (mAP@0.5-0.95) of object detection networks on DocLayNet test set. The MRCNN (Mask R-CNN) and FRCNN (Faster R-CNN) models with ResNet-50 or ResNet-101 backbone were trained based on the network architectures from the detectron2 model zoo (Mask R-CNN R50, R101-FPN 3x, Faster R-CNN R101-FPN 3x), with default configurations. The YOLO implementation utilized was YOLOv5x6 [13]. 
All models were initialised using pre-trained weights from the COCO 2017 dataset.", "type": "table", "#-cols": 6, "#-rows": 14, "data": [[{"bbox": null, "spans": [[0, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [132.36500549316406, 585.65185546875, 157.99098205566406, 594.0264892578125], "spans": [[0, 1], [1, 1]], "text": "human", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 0, "row-header": false, "row-span": [0, 2]}, {"bbox": [173.5050048828125, 585.65185546875, 204.618408203125, 594.0264892578125], "spans": [[0, 2], [0, 3]], "text": "MRCNN", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 4], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [173.5050048828125, 585.65185546875, 204.618408203125, 594.0264892578125], "spans": [[0, 2], [0, 3]], "text": "MRCNN", "type": "col_header", "col": 3, "col-header": false, "col-span": [2, 4], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [220.13027954101562, 585.65185546875, 248.069580078125, 594.0264892578125], "spans": [[0, 4]], "text": "FRCNN", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [258.03125, 585.65185546875, 280.1782531738281, 594.0264892578125], "spans": [[0, 5]], "text": "YOLO", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 6], "row": 0, "row-header": false, "row-span": [0, 1]}], [{"bbox": null, "spans": [[1, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [132.36500549316406, 585.65185546875, 157.99098205566406, 594.0264892578125], "spans": [[0, 1], [1, 1]], "text": "human", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [0, 2]}, {"bbox": [168.39300537109375, 574.6928100585938, 181.9950408935547, 583.0674438476562], "spans": [[1, 2]], "text": "R50", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [192.39605712890625, 574.6928100585938, 210.16746520996094, 583.0674438476562], "spans": [[1, 3]], "text": "R101", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [225.2130889892578, 574.6928100585938, 242.9844970703125, 583.0674438476562], "spans": [[1, 4]], "text": "R101", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [260.5137939453125, 574.6928100585938, 277.702392578125, 583.0674438476562], "spans": [[1, 5]], "text": "v5x6", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 6], "row": 1, "row-header": false, "row-span": [1, 2]}], [{"bbox": [67.66300201416016, 563.3358154296875, 96.8486328125, 571.71044921875], "spans": [[2, 0]], "text": "Caption", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [135.32400512695312, 563.3358154296875, 155.0321502685547, 571.71044921875], "spans": [[2, 1]], "text": "84-89", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [167.95399475097656, 563.3358154296875, 182.43472290039062, 571.71044921875], "spans": [[2, 2]], 
"text": "68.4", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [194.04620361328125, 563.3358154296875, 208.52694702148438, 571.71044921875], "spans": [[2, 3]], "text": "71.5", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [226.8632354736328, 563.3358154296875, 241.34396362304688, 571.71044921875], "spans": [[2, 4]], "text": "70.1", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [261.8680419921875, 563.3358154296875, 276.3487854003906, 571.71044921875], "spans": [[2, 5]], "text": "77.7", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [67.66300201416016, 552.3768310546875, 100.16619873046875, 560.75146484375], "spans": [[3, 0]], "text": "Footnote", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [135.32400512695312, 552.3768310546875, 155.0321502685547, 560.75146484375], "spans": [[3, 1]], "text": "83-91", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [167.95399475097656, 552.3768310546875, 182.43472290039062, 560.75146484375], "spans": [[3, 2]], "text": "70.9", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [194.04620361328125, 552.3768310546875, 208.52694702148438, 560.75146484375], "spans": [[3, 3]], "text": "71.8", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [226.8632354736328, 552.3768310546875, 241.34396362304688, 560.75146484375], "spans": [[3, 4]], "text": "73.7", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [261.8680419921875, 552.3768310546875, 276.3487854003906, 560.75146484375], "spans": [[3, 5]], "text": "77.2", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": [67.66300201416016, 541.4178466796875, 98.1756591796875, 549.79248046875], "spans": [[4, 0]], "text": "Formula", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [135.32400512695312, 541.4178466796875, 155.0321502685547, 549.79248046875], "spans": [[4, 1]], "text": "83-85", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [167.95399475097656, 541.4178466796875, 182.43472290039062, 549.79248046875], "spans": [[4, 2]], "text": "60.1", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [194.04620361328125, 541.4178466796875, 208.52694702148438, 549.79248046875], "spans": [[4, 3]], "text": "63.4", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [226.8632354736328, 541.4178466796875, 241.34396362304688, 549.79248046875], "spans": [[4, 4]], "text": "63.5", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [261.8680419921875, 
541.4178466796875, 276.3487854003906, 549.79248046875], "spans": [[4, 5]], "text": "66.2", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": [67.66300201416016, 530.4588012695312, 100.54279327392578, 538.8334350585938], "spans": [[5, 0]], "text": "List-item", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [135.32400512695312, 530.4588012695312, 155.0321502685547, 538.8334350585938], "spans": [[5, 1]], "text": "87-88", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [167.95399475097656, 530.4588012695312, 182.43472290039062, 538.8334350585938], "spans": [[5, 2]], "text": "81.2", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [194.04620361328125, 530.4588012695312, 208.52694702148438, 538.8334350585938], "spans": [[5, 3]], "text": "80.8", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [226.8632354736328, 530.4588012695312, 241.34396362304688, 538.8334350585938], "spans": [[5, 4]], "text": "81.0", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [261.8680419921875, 530.4588012695312, 276.3487854003906, 538.8334350585938], "spans": [[5, 5]], "text": "86.2", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [67.66300201416016, 519.4998168945312, 110.19064331054688, 527.8744506835938], "spans": [[6, 0]], "text": "Page-footer", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [135.32400512695312, 519.4998168945312, 155.0321502685547, 527.8744506835938], "spans": [[6, 1]], "text": "93-94", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [167.95399475097656, 519.4998168945312, 182.43472290039062, 527.8744506835938], "spans": [[6, 2]], "text": "61.6", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [194.04620361328125, 519.4998168945312, 208.52694702148438, 527.8744506835938], "spans": [[6, 3]], "text": "59.3", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [226.8632354736328, 519.4998168945312, 241.34396362304688, 527.8744506835938], "spans": [[6, 4]], "text": "58.9", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [261.8680419921875, 519.4998168945312, 276.3487854003906, 527.8744506835938], "spans": [[6, 5]], "text": "61.1", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 6, "row-header": false, "row-span": [6, 7]}], [{"bbox": [67.66300201416016, 508.54083251953125, 112.94332122802734, 516.9154663085938], "spans": [[7, 0]], "text": "Page-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [135.32400512695312, 508.54083251953125, 155.0321502685547, 516.9154663085938], "spans": [[7, 1]], "text": "85-89", "type": "body", "col": 1, 
"col-header": false, "col-span": [1, 2], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [167.95399475097656, 508.54083251953125, 182.43472290039062, 516.9154663085938], "spans": [[7, 2]], "text": "71.9", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [194.04620361328125, 508.54083251953125, 208.52694702148438, 516.9154663085938], "spans": [[7, 3]], "text": "70.0", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [226.8632354736328, 508.54083251953125, 241.34396362304688, 516.9154663085938], "spans": [[7, 4]], "text": "72.0", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [261.8680419921875, 508.54083251953125, 276.3487854003906, 516.9154663085938], "spans": [[7, 5]], "text": "67.9", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 7, "row-header": false, "row-span": [7, 8]}], [{"bbox": [67.66300201416016, 497.5818176269531, 93.64762878417969, 505.9564514160156], "spans": [[8, 0]], "text": "Picture", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [135.32400512695312, 497.5818176269531, 155.0321502685547, 505.9564514160156], "spans": [[8, 1]], "text": "69-71", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [167.95399475097656, 497.5818176269531, 182.43472290039062, 505.9564514160156], "spans": [[8, 2]], "text": "71.7", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [194.04620361328125, 497.5818176269531, 208.52694702148438, 505.9564514160156], "spans": [[8, 3]], "text": "72.7", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [226.8632354736328, 497.5818176269531, 241.34396362304688, 505.9564514160156], "spans": [[8, 4]], "text": "72.0", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [261.8680419921875, 497.5818176269531, 276.3487854003906, 505.9564514160156], "spans": [[8, 5]], "text": "77.1", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 8, "row-header": false, "row-span": [8, 9]}], [{"bbox": [67.66300201416016, 486.6228332519531, 122.40287780761719, 494.9974670410156], "spans": [[9, 0]], "text": "Section-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [135.32400512695312, 486.6228332519531, 155.0321502685547, 494.9974670410156], "spans": [[9, 1]], "text": "83-84", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [167.95399475097656, 486.6228332519531, 182.43472290039062, 494.9974670410156], "spans": [[9, 2]], "text": "67.6", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [194.04620361328125, 486.6228332519531, 208.52694702148438, 494.9974670410156], "spans": [[9, 3]], "text": "69.3", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [226.8632354736328, 
486.6228332519531, 241.34396362304688, 494.9974670410156], "spans": [[9, 4]], "text": "68.4", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [261.8680419921875, 486.6228332519531, 276.3487854003906, 494.9974670410156], "spans": [[9, 5]], "text": "74.6", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 9, "row-header": false, "row-span": [9, 10]}], [{"bbox": [67.66300201416016, 475.663818359375, 87.46977996826172, 484.0384521484375], "spans": [[10, 0]], "text": "Table", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [135.32400512695312, 475.663818359375, 155.0321502685547, 484.0384521484375], "spans": [[10, 1]], "text": "77-81", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [167.95399475097656, 475.663818359375, 182.43472290039062, 484.0384521484375], "spans": [[10, 2]], "text": "82.2", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [194.04620361328125, 475.663818359375, 208.52694702148438, 484.0384521484375], "spans": [[10, 3]], "text": "82.9", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [226.8632354736328, 475.663818359375, 241.34396362304688, 484.0384521484375], "spans": [[10, 4]], "text": "82.2", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [261.8680419921875, 475.663818359375, 276.3487854003906, 484.0384521484375], "spans": [[10, 5]], "text": "86.3", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 10, "row-header": false, "row-span": [10, 11]}], [{"bbox": [67.66300201416016, 464.7058410644531, 83.62319946289062, 473.0804748535156], "spans": [[11, 0]], "text": "Text", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [135.32400512695312, 464.7058410644531, 155.0321502685547, 473.0804748535156], "spans": [[11, 1]], "text": "84-86", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [167.95399475097656, 464.7058410644531, 182.43472290039062, 473.0804748535156], "spans": [[11, 2]], "text": "84.6", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [194.04620361328125, 464.7058410644531, 208.52694702148438, 473.0804748535156], "spans": [[11, 3]], "text": "85.8", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [226.8632354736328, 464.7058410644531, 241.34396362304688, 473.0804748535156], "spans": [[11, 4]], "text": "85.4", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [261.8680419921875, 464.7058410644531, 276.3487854003906, 473.0804748535156], "spans": [[11, 5]], "text": "88.1", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 11, "row-header": false, "row-span": [11, 12]}], [{"bbox": [67.66300201416016, 453.746826171875, 84.65432739257812, 462.1214599609375], "spans": [[12, 0]], "text": "Title", "type": 
"row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [135.32400512695312, 453.746826171875, 155.0321502685547, 462.1214599609375], "spans": [[12, 1]], "text": "60-72", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [167.95399475097656, 453.746826171875, 182.43472290039062, 462.1214599609375], "spans": [[12, 2]], "text": "76.7", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [194.04620361328125, 453.746826171875, 208.52694702148438, 462.1214599609375], "spans": [[12, 3]], "text": "80.4", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [226.8632354736328, 453.746826171875, 241.34396362304688, 462.1214599609375], "spans": [[12, 4]], "text": "79.9", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [261.8680419921875, 453.746826171875, 276.3487854003906, 462.1214599609375], "spans": [[12, 5]], "text": "82.7", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 12, "row-header": false, "row-span": [12, 13]}], [{"bbox": [67.66300201416016, 442.3888244628906, 78.62890625, 450.7634582519531], "spans": [[13, 0]], "text": "All", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [135.32400512695312, 442.3888244628906, 155.0321502685547, 450.7634582519531], "spans": [[13, 1]], "text": "82-83", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [167.95399475097656, 442.3888244628906, 182.43472290039062, 450.7634582519531], "spans": [[13, 2]], "text": "72.4", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [194.04620361328125, 442.3888244628906, 208.52694702148438, 450.7634582519531], "spans": [[13, 3]], "text": "73.5", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [226.8632354736328, 442.3888244628906, 241.34396362304688, 450.7634582519531], "spans": [[13, 4]], "text": "73.4", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [261.8680419921875, 442.3888244628906, 276.3487854003906, 450.7634582519531], "spans": [[13, 5]], "text": "76.8", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 13, "row-header": false, "row-span": [13, 14]}]], "model": null, "bounding-box": null}, {"prov": [{"bbox": [80.5073471069336, 496.419189453125, 267.3428649902344, 640.9814453125], "page": 7, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 3: Performance of a Mask R-CNN R50 network in mAP@0.5-0.95 scores trained on DocLayNet with different class label sets. 
The reduced label sets were obtained by either down-mapping or dropping labels.", "type": "table", "#-cols": 5, "#-rows": 13, "data": [[{"bbox": [86.37200164794922, 630.5248413085938, 129.4645233154297, 638.8994750976562], "spans": [[0, 0]], "text": "Class-count", "type": "col_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [151.07400512695312, 630.5248413085938, 159.41275024414062, 638.8994750976562], "spans": [[0, 1]], "text": "11", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [179.3181610107422, 630.5248413085938, 183.48753356933594, 638.8994750976562], "spans": [[0, 2]], "text": "6", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [213.33668518066406, 630.5248413085938, 217.5060577392578, 638.8994750976562], "spans": [[0, 3]], "text": "5", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [247.35520935058594, 630.5248413085938, 251.5245819091797, 638.8994750976562], "spans": [[0, 4]], "text": "4", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], "row": 0, "row-header": false, "row-span": [0, 1]}], [{"bbox": [86.37200164794922, 619.1678466796875, 115.55763244628906, 627.54248046875], "spans": [[1, 0]], "text": "Caption", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [151.07400512695312, 619.1678466796875, 159.41275024414062, 627.54248046875], "spans": [[1, 1]], "text": "68", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [173.42723083496094, 619.1678466796875, 189.38742065429688, 627.54248046875], "spans": [[1, 2]], "text": "Text", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [207.4457550048828, 619.1678466796875, 223.40594482421875, 627.54248046875], "spans": [[1, 3]], "text": "Text", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [241.4642791748047, 619.1678466796875, 257.4244689941406, 627.54248046875], "spans": [[1, 4]], "text": "Text", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 1, "row-header": false, "row-span": [1, 2]}], [{"bbox": [86.37200164794922, 608.2088012695312, 118.87519836425781, 616.5834350585938], "spans": [[2, 0]], "text": "Footnote", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [151.07400512695312, 608.2088012695312, 159.41275024414062, 616.5834350585938], "spans": [[2, 1]], "text": "71", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [173.42723083496094, 608.2088012695312, 189.38742065429688, 616.5834350585938], "spans": [[2, 2]], "text": "Text", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [207.4457550048828, 608.2088012695312, 223.40594482421875, 616.5834350585938], "spans": [[2, 3]], "text": "Text", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": 
[2, 3]}, {"bbox": [241.4642791748047, 608.2088012695312, 257.4244689941406, 616.5834350585938], "spans": [[2, 4]], "text": "Text", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [86.37200164794922, 597.2498168945312, 116.88465881347656, 605.6244506835938], "spans": [[3, 0]], "text": "Formula", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [151.07400512695312, 597.2498168945312, 159.41275024414062, 605.6244506835938], "spans": [[3, 1]], "text": "60", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [173.42723083496094, 597.2498168945312, 189.38742065429688, 605.6244506835938], "spans": [[3, 2]], "text": "Text", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [207.4457550048828, 597.2498168945312, 223.40594482421875, 605.6244506835938], "spans": [[3, 3]], "text": "Text", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [241.4642791748047, 597.2498168945312, 257.4244689941406, 605.6244506835938], "spans": [[3, 4]], "text": "Text", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": [86.37200164794922, 586.2908325195312, 119.25179290771484, 594.6654663085938], "spans": [[4, 0]], "text": "List-item", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [151.07400512695312, 586.2908325195312, 159.41275024414062, 594.6654663085938], "spans": [[4, 1]], "text": "81", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [173.42723083496094, 586.2908325195312, 189.38742065429688, 594.6654663085938], "spans": [[4, 2]], "text": "Text", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [211.2564697265625, 586.2908325195312, 219.59521484375, 594.6654663085938], "spans": [[4, 3]], "text": "82", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [241.46426391601562, 586.2908325195312, 257.4244689941406, 594.6654663085938], "spans": [[4, 4]], "text": "Text", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": [86.37200164794922, 575.3318481445312, 128.89964294433594, 583.7064819335938], "spans": [[5, 0]], "text": "Page-footer", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [151.07400512695312, 575.3318481445312, 159.41275024414062, 583.7064819335938], "spans": [[5, 1]], "text": "62", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [177.23794555664062, 575.3318481445312, 185.57669067382812, 583.7064819335938], "spans": [[5, 2]], "text": "62", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [213.9105224609375, 575.3318481445312, 216.941162109375, 583.7064819335938], "spans": [[5, 3]], "text": "-", "type": 
"body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [247.92904663085938, 575.3318481445312, 250.95968627929688, 583.7064819335938], "spans": [[5, 4]], "text": "-", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [86.37200164794922, 564.372802734375, 131.65231323242188, 572.7474365234375], "spans": [[6, 0]], "text": "Page-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [151.07400512695312, 564.372802734375, 159.41275024414062, 572.7474365234375], "spans": [[6, 1]], "text": "72", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [177.23794555664062, 564.372802734375, 185.57669067382812, 572.7474365234375], "spans": [[6, 2]], "text": "68", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [213.9105224609375, 564.372802734375, 216.941162109375, 572.7474365234375], "spans": [[6, 3]], "text": "-", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [247.92904663085938, 564.372802734375, 250.95968627929688, 572.7474365234375], "spans": [[6, 4]], "text": "-", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 6, "row-header": false, "row-span": [6, 7]}], [{"bbox": [86.37200164794922, 553.413818359375, 112.35662841796875, 561.7884521484375], "spans": [[7, 0]], "text": "Picture", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [151.07400512695312, 553.413818359375, 159.41275024414062, 561.7884521484375], "spans": [[7, 1]], "text": "72", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [177.23794555664062, 553.413818359375, 185.57669067382812, 561.7884521484375], "spans": [[7, 2]], "text": "72", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [211.25645446777344, 553.413818359375, 219.59519958496094, 561.7884521484375], "spans": [[7, 3]], "text": "72", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [245.27496337890625, 553.413818359375, 253.61370849609375, 561.7884521484375], "spans": [[7, 4]], "text": "72", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 7, "row-header": false, "row-span": [7, 8]}], [{"bbox": [86.37200164794922, 542.455810546875, 141.11187744140625, 550.8304443359375], "spans": [[8, 0]], "text": "Section-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [151.07400512695312, 542.455810546875, 159.41275024414062, 550.8304443359375], "spans": [[8, 1]], "text": "68", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [177.23794555664062, 542.455810546875, 185.57669067382812, 550.8304443359375], "spans": [[8, 2]], "text": "67", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [211.25645446777344, 
542.455810546875, 219.59519958496094, 550.8304443359375], "spans": [[8, 3]], "text": "69", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [245.27496337890625, 542.455810546875, 253.61370849609375, 550.8304443359375], "spans": [[8, 4]], "text": "68", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 8, "row-header": false, "row-span": [8, 9]}], [{"bbox": [86.37200164794922, 531.496826171875, 106.17877960205078, 539.8714599609375], "spans": [[9, 0]], "text": "Table", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [151.07400512695312, 531.496826171875, 159.41275024414062, 539.8714599609375], "spans": [[9, 1]], "text": "82", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [177.23794555664062, 531.496826171875, 185.57669067382812, 539.8714599609375], "spans": [[9, 2]], "text": "83", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [211.25645446777344, 531.496826171875, 219.59519958496094, 539.8714599609375], "spans": [[9, 3]], "text": "82", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [245.27496337890625, 531.496826171875, 253.61370849609375, 539.8714599609375], "spans": [[9, 4]], "text": "82", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 9, "row-header": false, "row-span": [9, 10]}], [{"bbox": [86.37200164794922, 520.537841796875, 102.33219909667969, 528.9124755859375], "spans": [[10, 0]], "text": "Text", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [151.07400512695312, 520.537841796875, 159.41275024414062, 528.9124755859375], "spans": [[10, 1]], "text": "85", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [177.23794555664062, 520.537841796875, 185.57669067382812, 528.9124755859375], "spans": [[10, 2]], "text": "84", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [211.25645446777344, 520.537841796875, 219.59519958496094, 528.9124755859375], "spans": [[10, 3]], "text": "84", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [245.27496337890625, 520.537841796875, 253.61370849609375, 528.9124755859375], "spans": [[10, 4]], "text": "84", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 10, "row-header": false, "row-span": [10, 11]}], [{"bbox": [86.37200164794922, 509.5788269042969, 103.36332702636719, 517.9534301757812], "spans": [[11, 0]], "text": "Title", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [151.07400512695312, 509.5788269042969, 159.41275024414062, 517.9534301757812], "spans": [[11, 1]], "text": "77", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [169.37442016601562, 509.5788269042969, 193.4312744140625, 517.9534301757812], "spans": [[11, 2]], "text": "Sec.-h.", "type": "body", "col": 2, 
"col-header": false, "col-span": [2, 3], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [203.3929443359375, 509.5788269042969, 227.44979858398438, 517.9534301757812], "spans": [[11, 3]], "text": "Sec.-h.", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [237.41146850585938, 509.5788269042969, 261.46832275390625, 517.9534301757812], "spans": [[11, 4]], "text": "Sec.-h.", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 11, "row-header": false, "row-span": [11, 12]}], [{"bbox": [86.37200164794922, 498.2208251953125, 113.3160171508789, 506.595458984375], "spans": [[12, 0]], "text": "Overall", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [151.07400512695312, 498.2208251953125, 159.41275024414062, 506.595458984375], "spans": [[12, 1]], "text": "72", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [177.23794555664062, 498.2208251953125, 185.57669067382812, 506.595458984375], "spans": [[12, 2]], "text": "73", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [211.25645446777344, 498.2208251953125, 219.59519958496094, 506.595458984375], "spans": [[12, 3]], "text": "78", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [245.27496337890625, 498.2208251953125, 253.61370849609375, 506.595458984375], "spans": [[12, 4]], "text": "77", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 12, "row-header": false, "row-span": [12, 13]}]], "model": null, "bounding-box": null}, {"prov": [{"bbox": [353.065185546875, 485.2873840332031, 523.3069458007812, 641.25341796875], "page": 7, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 4: Performance of a Mask R-CNN R50 network with document-wise and page-wise split for different label sets. 
Naive page-wise split will result in GLYPH 10% point improvement.", "type": "table", "#-cols": 5, "#-rows": 14, "data": [[{"bbox": [358.6390075683594, 630.5248413085938, 401.7315368652344, 638.8994750976562], "spans": [[0, 0]], "text": "Class-count", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [440.2250061035156, 630.5248413085938, 448.5637512207031, 638.8994750976562], "spans": [[0, 1], [0, 2]], "text": "11", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 3], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [440.2250061035156, 630.5248413085938, 448.5637512207031, 638.8994750976562], "spans": [[0, 1], [0, 2]], "text": "11", "type": "col_header", "col": 2, "col-header": false, "col-span": [1, 3], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [494.3800048828125, 630.5248413085938, 498.54937744140625, 638.8994750976562], "spans": [[0, 3], [0, 4]], "text": "5", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [494.3800048828125, 630.5248413085938, 498.54937744140625, 638.8994750976562], "spans": [[0, 3], [0, 4]], "text": "5", "type": "col_header", "col": 4, "col-header": false, "col-span": [3, 5], "row": 0, "row-header": false, "row-span": [0, 1]}], [{"bbox": [358.6390075683594, 619.5658569335938, 375.27166748046875, 627.9404907226562], "spans": [[1, 0]], "text": "Split", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [423.34100341796875, 619.5658569335938, 438.0458984375, 627.9404907226562], "spans": [[1, 1]], "text": "Doc", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [448.007568359375, 619.5658569335938, 465.44720458984375, 627.9404907226562], "spans": [[1, 2]], "text": "Page", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [475.4110107421875, 619.5658569335938, 490.11590576171875, 627.9404907226562], "spans": [[1, 3]], "text": "Doc", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [500.07757568359375, 619.5658569335938, 517.5172119140625, 627.9404907226562], "spans": [[1, 4]], "text": "Page", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], "row": 1, "row-header": false, "row-span": [1, 2]}], [{"bbox": [358.6390075683594, 608.2088012695312, 387.82464599609375, 616.5834350585938], "spans": [[2, 0]], "text": "Caption", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [426.52398681640625, 608.2088012695312, 434.86273193359375, 616.5834350585938], "spans": [[2, 1]], "text": "68", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [452.5624084472656, 608.2088012695312, 460.9011535644531, 616.5834350585938], "spans": [[2, 2]], "text": "83", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": null, "spans": [[2, 3]], "text": "", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": null, "spans": [[2, 4]], "text": 
"", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [358.6390075683594, 597.2498168945312, 391.1422119140625, 605.6244506835938], "spans": [[3, 0]], "text": "Footnote", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [426.52398681640625, 597.2498168945312, 434.86273193359375, 605.6244506835938], "spans": [[3, 1]], "text": "71", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [452.5624084472656, 597.2498168945312, 460.9011535644531, 605.6244506835938], "spans": [[3, 2]], "text": "84", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": null, "spans": [[3, 3]], "text": "", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": null, "spans": [[3, 4]], "text": "", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": [358.6390075683594, 586.2908325195312, 389.15167236328125, 594.6654663085938], "spans": [[4, 0]], "text": "Formula", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [426.52398681640625, 586.2908325195312, 434.86273193359375, 594.6654663085938], "spans": [[4, 1]], "text": "60", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [452.5624084472656, 586.2908325195312, 460.9011535644531, 594.6654663085938], "spans": [[4, 2]], "text": "66", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": null, "spans": [[4, 3]], "text": "", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": null, "spans": [[4, 4]], "text": "", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": [358.6390075683594, 575.3318481445312, 391.518798828125, 583.7064819335938], "spans": [[5, 0]], "text": "List-item", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [426.52398681640625, 575.3318481445312, 434.86273193359375, 583.7064819335938], "spans": [[5, 1]], "text": "81", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [452.5624084472656, 575.3318481445312, 460.9011535644531, 583.7064819335938], "spans": [[5, 2]], "text": "88", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [478.593994140625, 575.3318481445312, 486.9327392578125, 583.7064819335938], "spans": [[5, 3]], "text": "82", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [504.6324157714844, 575.3318481445312, 512.97119140625, 583.7064819335938], "spans": [[5, 4]], "text": "88", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [358.6390075683594, 564.372802734375, 401.1666564941406, 572.7474365234375], "spans": 
[[6, 0]], "text": "Page-footer", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [426.52398681640625, 564.372802734375, 434.86273193359375, 572.7474365234375], "spans": [[6, 1]], "text": "62", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [452.5624084472656, 564.372802734375, 460.9011535644531, 572.7474365234375], "spans": [[6, 2]], "text": "89", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": null, "spans": [[6, 3]], "text": "", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": null, "spans": [[6, 4]], "text": "", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 6, "row-header": false, "row-span": [6, 7]}], [{"bbox": [358.6390075683594, 553.413818359375, 403.9193115234375, 561.7884521484375], "spans": [[7, 0]], "text": "Page-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [426.52398681640625, 553.413818359375, 434.86273193359375, 561.7884521484375], "spans": [[7, 1]], "text": "72", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [452.5624084472656, 553.413818359375, 460.9011535644531, 561.7884521484375], "spans": [[7, 2]], "text": "90", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": null, "spans": [[7, 3]], "text": "", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": null, "spans": [[7, 4]], "text": "", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 7, "row-header": false, "row-span": [7, 8]}], [{"bbox": [358.6390075683594, 542.455810546875, 384.6236572265625, 550.8304443359375], "spans": [[8, 0]], "text": "Picture", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [426.52398681640625, 542.455810546875, 434.86273193359375, 550.8304443359375], "spans": [[8, 1]], "text": "72", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [452.5624084472656, 542.455810546875, 460.9011535644531, 550.8304443359375], "spans": [[8, 2]], "text": "82", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [478.593994140625, 542.455810546875, 486.9327392578125, 550.8304443359375], "spans": [[8, 3]], "text": "72", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [504.6324157714844, 542.455810546875, 512.97119140625, 550.8304443359375], "spans": [[8, 4]], "text": "82", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 8, "row-header": false, "row-span": [8, 9]}], [{"bbox": [358.6390075683594, 531.496826171875, 413.37890625, 539.8714599609375], "spans": [[9, 0]], "text": "Section-header", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [426.52398681640625, 531.496826171875, 434.86273193359375, 
539.8714599609375], "spans": [[9, 1]], "text": "68", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [452.5624084472656, 531.496826171875, 460.9011535644531, 539.8714599609375], "spans": [[9, 2]], "text": "83", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [478.593994140625, 531.496826171875, 486.9327392578125, 539.8714599609375], "spans": [[9, 3]], "text": "69", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [504.6324157714844, 531.496826171875, 512.97119140625, 539.8714599609375], "spans": [[9, 4]], "text": "83", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 9, "row-header": false, "row-span": [9, 10]}], [{"bbox": [358.6390075683594, 520.537841796875, 378.4457702636719, 528.9124755859375], "spans": [[10, 0]], "text": "Table", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [426.52398681640625, 520.537841796875, 434.86273193359375, 528.9124755859375], "spans": [[10, 1]], "text": "82", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [452.5624084472656, 520.537841796875, 460.9011535644531, 528.9124755859375], "spans": [[10, 2]], "text": "89", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [478.593994140625, 520.537841796875, 486.9327392578125, 528.9124755859375], "spans": [[10, 3]], "text": "82", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [504.6324157714844, 520.537841796875, 512.97119140625, 528.9124755859375], "spans": [[10, 4]], "text": "90", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 10, "row-header": false, "row-span": [10, 11]}], [{"bbox": [358.6390075683594, 509.5788269042969, 374.5992126464844, 517.9534301757812], "spans": [[11, 0]], "text": "Text", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [426.52398681640625, 509.5788269042969, 434.86273193359375, 517.9534301757812], "spans": [[11, 1]], "text": "85", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [452.5624084472656, 509.5788269042969, 460.9011535644531, 517.9534301757812], "spans": [[11, 2]], "text": "91", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [478.593994140625, 509.5788269042969, 486.9327392578125, 517.9534301757812], "spans": [[11, 3]], "text": "84", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [504.6324157714844, 509.5788269042969, 512.97119140625, 517.9534301757812], "spans": [[11, 4]], "text": "90", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 11, "row-header": false, "row-span": [11, 12]}], [{"bbox": [358.6390075683594, 498.6198425292969, 375.6303405761719, 506.9944763183594], "spans": [[12, 0]], "text": "Title", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 12, "row-header": 
false, "row-span": [12, 13]}, {"bbox": [426.52398681640625, 498.6198425292969, 434.86273193359375, 506.9944763183594], "spans": [[12, 1]], "text": "77", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [452.5624084472656, 498.6198425292969, 460.9011535644531, 506.9944763183594], "spans": [[12, 2]], "text": "81", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": null, "spans": [[12, 3]], "text": "", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": null, "spans": [[12, 4]], "text": "", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 12, "row-header": false, "row-span": [12, 13]}], [{"bbox": [358.6390075683594, 487.2628173828125, 369.60491943359375, 495.637451171875], "spans": [[13, 0]], "text": "All", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [426.52398681640625, 487.2628173828125, 434.86273193359375, 495.637451171875], "spans": [[13, 1]], "text": "72", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [452.5624084472656, 487.2628173828125, 460.9011535644531, 495.637451171875], "spans": [[13, 2]], "text": "84", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [478.593994140625, 487.2628173828125, 486.9327392578125, 495.637451171875], "spans": [[13, 3]], "text": "78", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [504.6324157714844, 487.2628173828125, 512.97119140625, 495.637451171875], "spans": [[13, 4]], "text": "87", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 13, "row-header": false, "row-span": [13, 14]}]], "model": null, "bounding-box": null}, {"prov": [{"bbox": [72.87370300292969, 452.12615966796875, 274.87945556640625, 619.3699951171875], "page": 8, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 5: Prediction Performance (mAP@0.5-0.95) of a Mask R-CNN R50 network across the PubLayNet, DocBank & DocLayNet data-sets. 
By evaluating on common label classes of each dataset, we observe that the DocLayNet-trained model has much less pronounced variations in performance across all datasets.", "type": "table", "#-cols": 4, "#-rows": 15, "data": [[{"bbox": null, "spans": [[0, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [217.74099731445312, 608.6068115234375, 256.2606506347656, 616.9814453125], "spans": [[0, 1], [0, 2], [0, 3]], "text": "Testing on", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 4], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [217.74099731445312, 608.6068115234375, 256.2606506347656, 616.9814453125], "spans": [[0, 1], [0, 2], [0, 3]], "text": "Testing on", "type": "col_header", "col": 2, "col-header": false, "col-span": [1, 4], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [217.74099731445312, 608.6068115234375, 256.2606506347656, 616.9814453125], "spans": [[0, 1], [0, 2], [0, 3]], "text": "Testing on", "type": "col_header", "col": 3, "col-header": false, "col-span": [1, 4], "row": 0, "row-header": false, "row-span": [0, 1]}], [{"bbox": [154.62899780273438, 597.6488037109375, 175.4758758544922, 606.0234375], "spans": [[1, 0]], "text": "labels", "type": "col_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [204.69000244140625, 597.6488037109375, 220.5426025390625, 606.0234375], "spans": [[1, 1]], "text": "PLN", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [230.5042724609375, 597.6488037109375, 242.0619659423828, 606.0234375], "spans": [[1, 2]], "text": "DB", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [252.0236358642578, 597.6488037109375, 269.31085205078125, 606.0234375], "spans": [[1, 3]], "text": "DLN", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}], [{"bbox": [154.62899780273438, 586.2908325195312, 177.9237060546875, 594.6654663085938], "spans": [[2, 0]], "text": "Figure", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [208.44700622558594, 586.2908325195312, 216.78575134277344, 594.6654663085938], "spans": [[2, 1]], "text": "96", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [232.11830139160156, 586.2908325195312, 240.45704650878906, 594.6654663085938], "spans": [[2, 2]], "text": "43", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [256.4979248046875, 586.2908325195312, 264.836669921875, 594.6654663085938], "spans": [[2, 3]], "text": "23", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [154.62899780273438, 575.3318481445312, 194.72674560546875, 583.7064819335938], "spans": [[3, 0]], "text": "Sec-header", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [208.44700622558594, 575.3318481445312, 216.78575134277344, 583.7064819335938], "spans": [[3, 1]], "text": "87", "type": "body", "col": 1, "col-header": false, 
"col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [234.77235412597656, 575.3318481445312, 237.80299377441406, 583.7064819335938], "spans": [[3, 2]], "text": "-", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [256.4979248046875, 575.3318481445312, 264.836669921875, 583.7064819335938], "spans": [[3, 3]], "text": "32", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": [154.62899780273438, 564.372802734375, 174.43577575683594, 572.7474365234375], "spans": [[4, 0]], "text": "Table", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [208.44700622558594, 564.372802734375, 216.78575134277344, 572.7474365234375], "spans": [[4, 1]], "text": "95", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [232.11830139160156, 564.372802734375, 240.45704650878906, 572.7474365234375], "spans": [[4, 2]], "text": "24", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [256.4979248046875, 564.372802734375, 264.836669921875, 572.7474365234375], "spans": [[4, 3]], "text": "49", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": [154.62899780273438, 553.413818359375, 170.5891876220703, 561.7884521484375], "spans": [[5, 0]], "text": "Text", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [208.44700622558594, 553.413818359375, 216.78575134277344, 561.7884521484375], "spans": [[5, 1]], "text": "96", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [234.77235412597656, 553.413818359375, 237.80299377441406, 561.7884521484375], "spans": [[5, 2]], "text": "-", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [256.4979248046875, 553.413818359375, 264.836669921875, 561.7884521484375], "spans": [[5, 3]], "text": "42", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [154.62899780273438, 542.455810546875, 171.27960205078125, 550.8304443359375], "spans": [[6, 0]], "text": "total", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [208.44700622558594, 542.455810546875, 216.78575134277344, 550.8304443359375], "spans": [[6, 1]], "text": "93", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [232.11830139160156, 542.455810546875, 240.45704650878906, 550.8304443359375], "spans": [[6, 2]], "text": "34", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [256.4979248046875, 542.455810546875, 264.836669921875, 550.8304443359375], "spans": [[6, 3]], "text": "30", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}], [{"bbox": [154.62899780273438, 531.0978393554688, 177.9237060546875, 539.4724731445312], "spans": [[7, 0]], "text": 
"Figure", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [208.44700622558594, 531.0978393554688, 216.78575134277344, 539.4724731445312], "spans": [[7, 1]], "text": "77", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [232.11830139160156, 531.0978393554688, 240.45704650878906, 539.4724731445312], "spans": [[7, 2]], "text": "71", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [256.4979248046875, 531.0978393554688, 264.836669921875, 539.4724731445312], "spans": [[7, 3]], "text": "31", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 7, "row-header": false, "row-span": [7, 8]}], [{"bbox": [154.62899780273438, 520.1388549804688, 174.43577575683594, 528.5134887695312], "spans": [[8, 0]], "text": "Table", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [208.44700622558594, 520.1388549804688, 216.78575134277344, 528.5134887695312], "spans": [[8, 1]], "text": "19", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [232.11830139160156, 520.1388549804688, 240.45704650878906, 528.5134887695312], "spans": [[8, 2]], "text": "65", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 8, "row-header": false, "row-span": [8, 9]}, {"bbox": [256.4979248046875, 520.1388549804688, 264.836669921875, 528.5134887695312], "spans": [[8, 3]], "text": "22", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 8, "row-header": false, "row-span": [8, 9]}], [{"bbox": [154.62899780273438, 509.1798400878906, 171.27960205078125, 517.554443359375], "spans": [[9, 0]], "text": "total", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [208.44700622558594, 509.1798400878906, 216.78575134277344, 517.554443359375], "spans": [[9, 1]], "text": "48", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [232.11830139160156, 509.1798400878906, 240.45704650878906, 517.554443359375], "spans": [[9, 2]], "text": "68", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 9, "row-header": false, "row-span": [9, 10]}, {"bbox": [256.4979248046875, 509.1798400878906, 264.836669921875, 517.554443359375], "spans": [[9, 3]], "text": "27", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 9, "row-header": false, "row-span": [9, 10]}], [{"bbox": [154.62899780273438, 497.82281494140625, 177.9237060546875, 506.19744873046875], "spans": [[10, 0]], "text": "Figure", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [208.44700622558594, 497.82281494140625, 216.78575134277344, 506.19744873046875], "spans": [[10, 1]], "text": "67", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": [232.11830139160156, 497.82281494140625, 240.45704650878906, 506.19744873046875], "spans": [[10, 2]], "text": "51", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 10, "row-header": false, "row-span": [10, 11]}, {"bbox": 
[256.4979248046875, 497.82281494140625, 264.836669921875, 506.19744873046875], "spans": [[10, 3]], "text": "72", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 10, "row-header": false, "row-span": [10, 11]}], [{"bbox": [154.62899780273438, 486.86383056640625, 194.72674560546875, 495.23846435546875], "spans": [[11, 0]], "text": "Sec-header", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [208.44700622558594, 486.86383056640625, 216.78575134277344, 495.23846435546875], "spans": [[11, 1]], "text": "53", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [234.77235412597656, 486.86383056640625, 237.80299377441406, 495.23846435546875], "spans": [[11, 2]], "text": "-", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 11, "row-header": false, "row-span": [11, 12]}, {"bbox": [256.4979248046875, 486.86383056640625, 264.836669921875, 495.23846435546875], "spans": [[11, 3]], "text": "68", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 11, "row-header": false, "row-span": [11, 12]}], [{"bbox": [154.62899780273438, 475.9048156738281, 174.43577575683594, 484.2794494628906], "spans": [[12, 0]], "text": "Table", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [208.44700622558594, 475.9048156738281, 216.78575134277344, 484.2794494628906], "spans": [[12, 1]], "text": "87", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [232.11830139160156, 475.9048156738281, 240.45704650878906, 484.2794494628906], "spans": [[12, 2]], "text": "43", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 12, "row-header": false, "row-span": [12, 13]}, {"bbox": [256.4979248046875, 475.9048156738281, 264.836669921875, 484.2794494628906], "spans": [[12, 3]], "text": "82", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 12, "row-header": false, "row-span": [12, 13]}], [{"bbox": [154.62899780273438, 464.9458312988281, 170.5891876220703, 473.3204650878906], "spans": [[13, 0]], "text": "Text", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [208.44700622558594, 464.9458312988281, 216.78575134277344, 473.3204650878906], "spans": [[13, 1]], "text": "77", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [234.77235412597656, 464.9458312988281, 237.80299377441406, 473.3204650878906], "spans": [[13, 2]], "text": "-", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 13, "row-header": false, "row-span": [13, 14]}, {"bbox": [256.4979248046875, 464.9458312988281, 264.836669921875, 473.3204650878906], "spans": [[13, 3]], "text": "84", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 13, "row-header": false, "row-span": [13, 14]}], [{"bbox": [154.62899780273438, 453.98681640625, 171.27960205078125, 462.3614501953125], "spans": [[14, 0]], "text": "total", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 14, "row-header": false, "row-span": [14, 15]}, {"bbox": [208.44700622558594, 453.98681640625, 216.78575134277344, 462.3614501953125], "spans": [[14, 1]], "text": 
"59", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 14, "row-header": false, "row-span": [14, 15]}, {"bbox": [232.11830139160156, 453.98681640625, 240.45704650878906, 462.3614501953125], "spans": [[14, 2]], "text": "47", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 14, "row-header": false, "row-span": [14, 15]}, {"bbox": [256.4979248046875, 453.98681640625, 264.836669921875, 462.3614501953125], "spans": [[14, 3]], "text": "78", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 14, "row-header": false, "row-span": [14, 15]}]], "model": null, "bounding-box": null}], "bitmaps": null, "equations": [], "footnotes": [], "page-dimensions": [{"height": 792.0, "page": 1, "width": 612.0}, {"height": 792.0, "page": 2, "width": 612.0}, {"height": 792.0, "page": 3, "width": 612.0}, {"height": 792.0, "page": 4, "width": 612.0}, {"height": 792.0, "page": 5, "width": 612.0}, {"height": 792.0, "page": 6, "width": 612.0}, {"height": 792.0, "page": 7, "width": 612.0}, {"height": 792.0, "page": 8, "width": 612.0}, {"height": 792.0, "page": 9, "width": 612.0}], "page-footers": [], "page-headers": [], "_s3_data": null, "identifiers": null}
\ No newline at end of file
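Note on the fixture above: the cell "bbox" values are stored in PDF page coordinates with a bottom-left origin (later table rows have smaller y), and "page-dimensions" records the page size (612 x 792 pt). The following is a minimal sketch, not docling code, of flipping such a box to a top-left origin for overlay rendering; the function name is illustrative.

# Sketch only: convert a fixture-style bbox [x0, y0, x1, y1] from
# bottom-left-origin PDF coordinates to a top-left-origin box.
# page_height comes from the "page-dimensions" entry above (792.0 pt).
def flip_bbox_to_top_left(bbox, page_height=792.0):
    x0, y0, x1, y1 = bbox
    return [x0, page_height - y1, x1, page_height - y0]

# Example with a cell bbox taken from the table data above:
print(flip_bbox_to_top_left([154.62899780273438, 564.372802734375,
                             174.43577575683594, 572.7474365234375]))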
diff --git a/tests/data/2305.03393v1-pg9.doctags.txt b/tests/data/2305.03393v1-pg9.doctags.txt
new file mode 100644
index 00000000..d9749b6e
--- /dev/null
+++ b/tests/data/2305.03393v1-pg9.doctags.txt
@@ -0,0 +1,20 @@
+
+order to compute the TED score. Inference timing results for all experiments were obtained from the same machine on a single core with AMD EPYC 7763 CPU @2.45 GHz.
+5.1 Hyper Parameter Optimization
+We have chosen the PubTabNet data set to perform HPO, since it includes a highly diverse set of tables. Also we report TED scores separately for simple and complex tables (tables with cell spans). Results are presented in Table. 1. It is evident that with OTSL, our model achieves the same TED score and slightly better mAP scores in comparison to HTML. However OTSL yields a 2x speed up in the inference runtime over HTML.
+
+Table 1. HPO performed in OTSL and HTML representation on the same transformer-based TableFormer [9] architecture, trained only on PubTabNet [22]. Effects of reducing the # of layers in encoder and decoder stages of the model show that smaller models trained on OTSL perform better, especially in recognizing complex table structures, and maintain a much higher mAP score than the HTML counterpart.
+
+
+
+Table 1. HPO performed in OTSL and HTML representation on the same transformer-based TableFormer [9] architecture, trained only on PubTabNet [22]. Effects of reducing the # of layers in encoder and decoder stages of the model show that smaller models trained on OTSL perform better, especially in recognizing complex table structures, and maintain a much higher mAP score than the HTML counterpart.
+5.2 Quantitative Results
+We picked the model parameter configuration that produced the best prediction quality (enc=6, dec=6, heads=8) with PubTabNet alone, then independently trained and evaluated it on three publicly available data sets: PubTabNet (395k samples), FinTabNet (113k samples) and PubTables-1M (about 1M samples). Performance results are presented in Table. 2. It is clearly evident that the model trained on OTSL outperforms HTML across the board, keeping high TEDs and mAP scores even on difficult financial tables (FinTabNet) that contain sparse and large tables.
+Additionally, the results show that OTSL has an advantage over HTML when applied on a bigger data set like PubTables-1M and achieves significantly improved scores. Finally, OTSL achieves faster inference due to fewer decoding steps which is a result of the reduced sequence representation.
+
\ No newline at end of file
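The .doctags.txt files introduced in this diff are expected-output fixtures for the DocTags export. A minimal sketch of how such a fixture might be checked in a regression test follows; the helper name and file handling are illustrative assumptions, not the repository's actual test code.

import difflib
from pathlib import Path

def assert_doctags_match(exported: str, fixture_path: str) -> None:
    # Compare a freshly rendered DocTags string with the stored fixture and
    # raise with a unified diff on mismatch, so regressions are easy to read.
    expected = Path(fixture_path).read_text(encoding="utf-8")
    if exported.strip() != expected.strip():
        diff = "\n".join(difflib.unified_diff(
            expected.splitlines(), exported.splitlines(),
            fromfile="expected", tofile="exported", lineterm=""))
        raise AssertionError(f"DocTags output changed:\n{diff}")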
diff --git a/tests/data/2305.03393v1-pg9.json b/tests/data/2305.03393v1-pg9.json
index 3c96e32c..cfd8e7e4 100644
--- a/tests/data/2305.03393v1-pg9.json
+++ b/tests/data/2305.03393v1-pg9.json
@@ -1 +1 @@
-{"_name": "", "type": "pdf-document", "description": {"title": null, "abstract": null, "authors": null, "affiliations": null, "subjects": null, "keywords": null, "publication_date": null, "languages": null, "license": null, "publishers": null, "url_refs": null, "references": null, "publication": null, "reference_count": null, "citation_count": null, "citation_date": null, "advanced": null, "analytics": null, "logs": [], "collection": null, "acquisition": null}, "file-info": {"filename": "2305.03393v1-pg9.pdf", "filename-prov": null, "document-hash": "a07f5c34601ba2c234d898cbfaa9e29a7045996ccd82ccab3012516220a1f3a4", "#-pages": 1, "collection-name": null, "description": null, "page-hashes": [{"hash": "16ccd0a495625bd9c7a28a4b353d85137f3e6b09508a0d2280663478de9c9b25", "model": "default", "page": 1}]}, "main-text": [{"text": "Optimized Table Tokenization for Table Structure Recognition", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [193.9645538330078, 689.2177734375, 447.5447692871094, 700.5064697265625], "page": 1, "span": [0, 60], "__ref_s3_data": null}]}, {"text": "9", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [475.1263732910156, 689.2177734375, 480.5931396484375, 700.5064697265625], "page": 1, "span": [0, 1], "__ref_s3_data": null}]}, {"text": "order to compute the TED score. Inference timing results for all experiments were obtained from the same machine on a single core with AMD EPYC 7763 CPU @2.45 GHz.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.8929443359375, 639.093017578125, 480.79583740234375, 675.5369873046875], "page": 1, "span": [0, 163], "__ref_s3_data": null}]}, {"text": "5.1 Hyper Parameter Optimization", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.27793884277344, 612.7918090820312, 318.4514465332031, 625.2948608398438], "page": 1, "span": [0, 32], "__ref_s3_data": null}]}, {"text": "We have chosen the PubTabNet data set to perform HPO, since it includes a highly diverse set of tables. Also we report TED scores separately for simple and complex tables (tables with cell spans). Results are presented in Table. 1. It is evident that with OTSL, our model achieves the same TED score and slightly better mAP scores in comparison to HTML. However OTSL yields a 2x speed up in the inference runtime over HTML.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.84170532226562, 536.5759887695312, 481.2436218261719, 608.8849487304688], "page": 1, "span": [0, 423], "__ref_s3_data": null}]}, {"text": "Table 1. HPO performed in OTSL and HTML representation on the same transformer-based TableFormer [9] architecture, trained only on PubTabNet [22]. 
Effects of reducing the # of layers in encoder and decoder stages of the model show that smaller models trained on OTSL perform better, especially in recognizing complex table structures, and maintain a much higher mAP score than the HTML counterpart.", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [133.8990936279297, 464.017822265625, 480.7420349121094, 519.2052612304688], "page": 1, "span": [0, 398], "__ref_s3_data": null}]}, {"name": "Table", "type": "table", "$ref": "#/tables/0"}, {"text": "5.2 Quantitative Results", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.489013671875, 273.8258056640625, 264.4082946777344, 286.3288879394531], "page": 1, "span": [0, 24], "__ref_s3_data": null}]}, {"text": "We picked the model parameter configuration that produced the best prediction quality (enc=6, dec=6, heads=8) with PubTabNet alone, then independently trained and evaluated it on three publicly available data sets: PubTabNet (395k samples), FinTabNet (113k samples) and PubTables-1M (about 1M samples). Performance results are presented in Table. 2. It is clearly evident that the model trained on OTSL outperforms HTML across the board, keeping high TEDs and mAP scores even on difficult financial tables (FinTabNet) that contain sparse and large tables.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.97596740722656, 173.6999969482422, 480.8291931152344, 269.9199523925781], "page": 1, "span": [0, 555], "__ref_s3_data": null}]}, {"text": "Additionally, the results show that OTSL has an advantage over HTML when applied on a bigger data set like PubTables-1M and achieves significantly improved scores. Finally, OTSL achieves faster inference due to fewer decoding steps which is a result of the reduced sequence representation.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.89259338378906, 125.87999725341797, 480.9114074707031, 174.2779541015625], "page": 1, "span": [0, 289], "__ref_s3_data": null}]}], "figures": [], "tables": [{"bounding-box": null, "prov": [{"bbox": [139.83172607421875, 322.2643737792969, 474.81011962890625, 454.8448791503906], "page": 1, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 1. HPO performed in OTSL and HTML representation on the same transformer-based TableFormer [9] architecture, trained only on PubTabNet [22]. 
Effects of reducing the # of layers in encoder and decoder stages of the model show that smaller models trained on OTSL perform better, especially in recognizing complex table structures, and maintain a much higher mAP score than the HTML counterpart.", "type": "table", "#-cols": 8, "#-rows": 7, "data": [[{"bbox": [160.3699951171875, 441.2538146972656, 168.04522705078125, 452.5425109863281], "spans": [[0, 0]], "text": "#", "type": "col_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [207.9739990234375, 441.2538146972656, 215.64923095703125, 452.5425109863281], "spans": [[0, 1]], "text": "#", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [239.79800415039062, 435.7748107910156, 278.33380126953125, 447.0635070800781], "spans": [[0, 2], [1, 2]], "text": "Language", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 0, "row-header": false, "row-span": [0, 2]}, {"bbox": [324.6700134277344, 441.2538146972656, 348.2641906738281, 452.5425109863281], "spans": [[0, 3], [0, 4], [0, 5]], "text": "TEDs", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 6], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [324.6700134277344, 441.2538146972656, 348.2641906738281, 452.5425109863281], "spans": [[0, 3], [0, 4], [0, 5]], "text": "TEDs", "type": "col_header", "col": 4, "col-header": false, "col-span": [3, 6], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [324.6700134277344, 441.2538146972656, 348.2641906738281, 452.5425109863281], "spans": [[0, 3], [0, 4], [0, 5]], "text": "TEDs", "type": "col_header", "col": 5, "col-header": false, "col-span": [3, 6], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [396.27099609375, 441.2538146972656, 417.1259460449219, 452.5425109863281], "spans": [[0, 6]], "text": "mAP", "type": "col_header", "col": 6, "col-header": false, "col-span": [6, 7], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [430.77099609375, 441.2538146972656, 467.14141845703125, 452.5425109863281], "spans": [[0, 7]], "text": "Inference", "type": "col_header", "col": 7, "col-header": false, "col-span": [7, 8], "row": 0, "row-header": false, "row-span": [0, 1]}], [{"bbox": [144.5919952392578, 428.3028259277344, 183.82894897460938, 439.5915222167969], "spans": [[1, 0]], "text": "enc-layers", "type": "col_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [192.19500732421875, 428.3028259277344, 231.42303466796875, 439.5915222167969], "spans": [[1, 1]], "text": "dec-layers", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [239.79800415039062, 435.7748107910156, 278.33380126953125, 447.0635070800781], "spans": [[0, 2], [1, 2]], "text": "Language", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [0, 2]}, {"bbox": [286.6860046386719, 428.3028259277344, 312.328125, 439.5915222167969], "spans": [[1, 3]], "text": "simple", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [320.7019958496094, 428.3028259277344, 353.71539306640625, 439.5915222167969], "spans": [[1, 4]], "text": "complex", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], 
"row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [369.3059997558594, 428.3028259277344, 379.0291442871094, 439.5915222167969], "spans": [[1, 5]], "text": "all", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 6], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [394.927001953125, 430.2948303222656, 418.4692077636719, 441.5835266113281], "spans": [[1, 6]], "text": "(0.75)", "type": "col_header", "col": 6, "col-header": false, "col-span": [6, 7], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [427.14801025390625, 430.2948303222656, 470.7695617675781, 441.5835266113281], "spans": [[1, 7]], "text": "time (secs)", "type": "col_header", "col": 7, "col-header": false, "col-span": [7, 8], "row": 1, "row-header": false, "row-span": [1, 2]}], [{"bbox": [161.906005859375, 409.4728088378906, 166.51473999023438, 420.7615051269531], "spans": [[2, 0]], "text": "6", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [209.50900268554688, 409.4728088378906, 214.11773681640625, 420.7615051269531], "spans": [[2, 1]], "text": "6", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [245.17599487304688, 402.0008239746094, 272.9449462890625, 426.24151611328125], "spans": [[2, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [289.0169982910156, 402.0008239746094, 310.00732421875, 426.24151611328125], "spans": [[2, 3]], "text": "0.965 0.969", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [326.7170104980469, 402.0008239746094, 347.70733642578125, 426.24151611328125], "spans": [[2, 4]], "text": "0.934 0.927", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [363.6759948730469, 402.0008239746094, 384.66632080078125, 426.24151611328125], "spans": [[2, 5]], "text": "0.955 0.955", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [396.20599365234375, 402.0008239746094, 417.1963195800781, 426.3042907714844], "spans": [[2, 6]], "text": "0.88 0.857", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [439.5270080566406, 402.0008239746094, 458.38336181640625, 426.3042907714844], "spans": [[2, 7]], "text": "2.73 5.39", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [161.906005859375, 383.17181396484375, 166.51473999023438, 394.46051025390625], "spans": [[3, 0]], "text": "4", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [209.50900268554688, 383.17181396484375, 214.11773681640625, 394.46051025390625], "spans": [[3, 1]], "text": "4", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [245.17599487304688, 375.6998291015625, 272.9449462890625, 399.93951416015625], "spans": [[3, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [289.0169982910156, 375.6998291015625, 
310.00732421875, 399.93951416015625], "spans": [[3, 3]], "text": "0.938 0.952", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [326.7170104980469, 388.65081787109375, 347.70733642578125, 399.93951416015625], "spans": [[3, 4]], "text": "0.904", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [363.6759948730469, 388.65081787109375, 384.66632080078125, 399.93951416015625], "spans": [[3, 5]], "text": "0.927", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [394.6180114746094, 388.5970153808594, 418.7779846191406, 400.0022888183594], "spans": [[3, 6]], "text": "0.853", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [439.5270080566406, 388.5970153808594, 458.38336181640625, 400.0022888183594], "spans": [[3, 7]], "text": "1.97", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": null, "spans": [[4, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": null, "spans": [[4, 1]], "text": "", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [245.17599487304688, 349.3988342285156, 272.9449462890625, 373.6385192871094], "spans": [[4, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [289.0169982910156, 362.3498229980469, 310.00732421875, 373.6385192871094], "spans": [[4, 3]], "text": "0.923", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [326.7170104980469, 349.3988342285156, 347.70733642578125, 386.988525390625], "spans": [[4, 4]], "text": "0.909 0.897 0.901", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [362.0880126953125, 362.3498229980469, 386.24798583984375, 387.0513000488281], "spans": [[4, 5]], "text": "0.938 0.915", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [396.20599365234375, 375.6998291015625, 417.1963195800781, 386.988525390625], "spans": [[4, 6]], "text": "0.843", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [440.7669982910156, 375.6998291015625, 457.150390625, 386.988525390625], "spans": [[4, 7]], "text": "3.77", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": [161.906005859375, 356.8708190917969, 166.51473999023438, 368.1595153808594], "spans": [[5, 0]], "text": "2", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [209.50900268554688, 356.8708190917969, 214.11773681640625, 368.1595153808594], "spans": [[5, 1]], "text": "4", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": null, "spans": [[5, 2]], "text": "", "type": "body", "col": 2, "col-header": false, 
"col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [289.0169982910156, 349.3988342285156, 310.00732421875, 360.6875305175781], "spans": [[5, 3]], "text": "0.945", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": null, "spans": [[5, 4]], "text": "", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [362.0880126953125, 349.34503173828125, 386.24798583984375, 360.75030517578125], "spans": [[5, 5]], "text": "0.931", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [394.6180114746094, 349.3988342285156, 418.7779846191406, 373.7012939453125], "spans": [[5, 6]], "text": "0.859 0.834", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [439.5270080566406, 349.3988342285156, 458.38336181640625, 373.7012939453125], "spans": [[5, 7]], "text": "1.91 3.81", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [161.906005859375, 330.5688171386719, 166.51473999023438, 341.8575134277344], "spans": [[6, 0]], "text": "4", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [209.50900268554688, 330.5688171386719, 214.11773681640625, 341.8575134277344], "spans": [[6, 1]], "text": "2", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [245.17599487304688, 323.0968322753906, 272.9449462890625, 347.3375244140625], "spans": [[6, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [289.0169982910156, 323.0968322753906, 310.00732421875, 347.3375244140625], "spans": [[6, 3]], "text": "0.952 0.944", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [326.7170104980469, 323.0968322753906, 347.70733642578125, 347.3375244140625], "spans": [[6, 4]], "text": "0.92 0.903", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [362.0880126953125, 323.0968322753906, 386.24798583984375, 347.4002990722656], "spans": [[6, 5]], "text": "0.942 0.931", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [394.6180114746094, 323.0968322753906, 418.7779846191406, 347.4002990722656], "spans": [[6, 6]], "text": "0.857 0.824", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [439.5270080566406, 323.0968322753906, 458.38336181640625, 347.4002990722656], "spans": [[6, 7]], "text": "1.22 2", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 6, "row-header": false, "row-span": [6, 7]}]], "model": null}], "bitmaps": null, "equations": [], "footnotes": [], "page-dimensions": [{"height": 792.0, "page": 1, "width": 612.0}], "page-footers": [], "page-headers": [], "_s3_data": null, "identifiers": null}
\ No newline at end of file
+{"_name": "", "type": "pdf-document", "description": {"title": null, "abstract": null, "authors": null, "affiliations": null, "subjects": null, "keywords": null, "publication_date": null, "languages": null, "license": null, "publishers": null, "url_refs": null, "references": null, "publication": null, "reference_count": null, "citation_count": null, "citation_date": null, "advanced": null, "analytics": null, "logs": [], "collection": null, "acquisition": null}, "file-info": {"filename": "2305.03393v1-pg9.pdf", "filename-prov": null, "document-hash": "a07f5c34601ba2c234d898cbfaa9e29a7045996ccd82ccab3012516220a1f3a4", "#-pages": 1, "collection-name": null, "description": null, "page-hashes": [{"hash": "16ccd0a495625bd9c7a28a4b353d85137f3e6b09508a0d2280663478de9c9b25", "model": "default", "page": 1}]}, "main-text": [{"prov": [{"bbox": [193.9645538330078, 689.2177734375, 447.5447692871094, 700.5064697265625], "page": 1, "span": [0, 60], "__ref_s3_data": null}], "text": "Optimized Table Tokenization for Table Structure Recognition", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [475.1263732910156, 689.2177734375, 480.5931396484375, 700.5064697265625], "page": 1, "span": [0, 1], "__ref_s3_data": null}], "text": "9", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [133.8929443359375, 639.093017578125, 480.79583740234375, 675.5369873046875], "page": 1, "span": [0, 163], "__ref_s3_data": null}], "text": "order to compute the TED score. Inference timing results for all experiments were obtained from the same machine on a single core with AMD EPYC 7763 CPU @2.45 GHz.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.27793884277344, 612.7918090820312, 318.4514465332031, 625.2948608398438], "page": 1, "span": [0, 32], "__ref_s3_data": null}], "text": "5.1 Hyper Parameter Optimization", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [133.84170532226562, 536.5759887695312, 481.2436218261719, 608.8849487304688], "page": 1, "span": [0, 423], "__ref_s3_data": null}], "text": "We have chosen the PubTabNet data set to perform HPO, since it includes a highly diverse set of tables. Also we report TED scores separately for simple and complex tables (tables with cell spans). Results are presented in Table. 1. It is evident that with OTSL, our model achieves the same TED score and slightly better mAP scores in comparison to HTML. However OTSL yields a 2x speed up in the inference runtime over HTML.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [133.8990936279297, 464.017822265625, 480.7420349121094, 519.2052612304688], "page": 1, "span": [0, 398], "__ref_s3_data": null}], "text": "Table 1. HPO performed in OTSL and HTML representation on the same transformer-based TableFormer [9] architecture, trained only on PubTabNet [22]. 
Effects of reducing the # of layers in encoder and decoder stages of the model show that smaller models trained on OTSL perform better, especially in recognizing complex table structures, and maintain a much higher mAP score than the HTML counterpart.", "type": "caption", "name": "Caption", "font": null}, {"name": "Table", "type": "table", "$ref": "#/tables/0"}, {"prov": [{"bbox": [134.489013671875, 273.8258056640625, 264.4082946777344, 286.3288879394531], "page": 1, "span": [0, 24], "__ref_s3_data": null}], "text": "5.2 Quantitative Results", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [133.97596740722656, 173.6999969482422, 480.8291931152344, 269.9199523925781], "page": 1, "span": [0, 555], "__ref_s3_data": null}], "text": "We picked the model parameter configuration that produced the best prediction quality (enc=6, dec=6, heads=8) with PubTabNet alone, then independently trained and evaluated it on three publicly available data sets: PubTabNet (395k samples), FinTabNet (113k samples) and PubTables-1M (about 1M samples). Performance results are presented in Table. 2. It is clearly evident that the model trained on OTSL outperforms HTML across the board, keeping high TEDs and mAP scores even on difficult financial tables (FinTabNet) that contain sparse and large tables.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [133.89259338378906, 125.87999725341797, 480.9114074707031, 174.2779541015625], "page": 1, "span": [0, 289], "__ref_s3_data": null}], "text": "Additionally, the results show that OTSL has an advantage over HTML when applied on a bigger data set like PubTables-1M and achieves significantly improved scores. Finally, OTSL achieves faster inference due to fewer decoding steps which is a result of the reduced sequence representation.", "type": "paragraph", "name": "Text", "font": null}], "figures": [], "tables": [{"prov": [{"bbox": [139.83172607421875, 322.2643737792969, 474.81011962890625, 454.8448791503906], "page": 1, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 1. HPO performed in OTSL and HTML representation on the same transformer-based TableFormer [9] architecture, trained only on PubTabNet [22]. 
Effects of reducing the # of layers in encoder and decoder stages of the model show that smaller models trained on OTSL perform better, especially in recognizing complex table structures, and maintain a much higher mAP score than the HTML counterpart.", "type": "table", "#-cols": 8, "#-rows": 7, "data": [[{"bbox": [160.3699951171875, 441.2538146972656, 168.04522705078125, 452.5425109863281], "spans": [[0, 0]], "text": "#", "type": "col_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [207.9739990234375, 441.2538146972656, 215.64923095703125, 452.5425109863281], "spans": [[0, 1]], "text": "#", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [239.79800415039062, 435.7748107910156, 278.33380126953125, 447.0635070800781], "spans": [[0, 2], [1, 2]], "text": "Language", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 0, "row-header": false, "row-span": [0, 2]}, {"bbox": [324.6700134277344, 441.2538146972656, 348.2641906738281, 452.5425109863281], "spans": [[0, 3], [0, 4], [0, 5]], "text": "TEDs", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 6], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [324.6700134277344, 441.2538146972656, 348.2641906738281, 452.5425109863281], "spans": [[0, 3], [0, 4], [0, 5]], "text": "TEDs", "type": "col_header", "col": 4, "col-header": false, "col-span": [3, 6], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [324.6700134277344, 441.2538146972656, 348.2641906738281, 452.5425109863281], "spans": [[0, 3], [0, 4], [0, 5]], "text": "TEDs", "type": "col_header", "col": 5, "col-header": false, "col-span": [3, 6], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [396.27099609375, 441.2538146972656, 417.1259460449219, 452.5425109863281], "spans": [[0, 6]], "text": "mAP", "type": "col_header", "col": 6, "col-header": false, "col-span": [6, 7], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [430.77099609375, 441.2538146972656, 467.14141845703125, 452.5425109863281], "spans": [[0, 7]], "text": "Inference", "type": "col_header", "col": 7, "col-header": false, "col-span": [7, 8], "row": 0, "row-header": false, "row-span": [0, 1]}], [{"bbox": [144.5919952392578, 428.3028259277344, 183.82894897460938, 439.5915222167969], "spans": [[1, 0]], "text": "enc-layers", "type": "col_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [192.19500732421875, 428.3028259277344, 231.42303466796875, 439.5915222167969], "spans": [[1, 1]], "text": "dec-layers", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [239.79800415039062, 435.7748107910156, 278.33380126953125, 447.0635070800781], "spans": [[0, 2], [1, 2]], "text": "Language", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [0, 2]}, {"bbox": [286.6860046386719, 428.3028259277344, 312.328125, 439.5915222167969], "spans": [[1, 3]], "text": "simple", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [320.7019958496094, 428.3028259277344, 353.71539306640625, 439.5915222167969], "spans": [[1, 4]], "text": "complex", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], 
"row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [369.3059997558594, 428.3028259277344, 379.0291442871094, 439.5915222167969], "spans": [[1, 5]], "text": "all", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 6], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [394.927001953125, 430.2948303222656, 418.4692077636719, 441.5835266113281], "spans": [[1, 6]], "text": "(0.75)", "type": "col_header", "col": 6, "col-header": false, "col-span": [6, 7], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [427.14801025390625, 430.2948303222656, 470.7695617675781, 441.5835266113281], "spans": [[1, 7]], "text": "time (secs)", "type": "col_header", "col": 7, "col-header": false, "col-span": [7, 8], "row": 1, "row-header": false, "row-span": [1, 2]}], [{"bbox": [161.906005859375, 409.4728088378906, 166.51473999023438, 420.7615051269531], "spans": [[2, 0]], "text": "6", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [209.50900268554688, 409.4728088378906, 214.11773681640625, 420.7615051269531], "spans": [[2, 1]], "text": "6", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [245.17599487304688, 402.0008239746094, 272.9449462890625, 426.24151611328125], "spans": [[2, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [289.0169982910156, 402.0008239746094, 310.00732421875, 426.24151611328125], "spans": [[2, 3]], "text": "0.965 0.969", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [326.7170104980469, 402.0008239746094, 347.70733642578125, 426.24151611328125], "spans": [[2, 4]], "text": "0.934 0.927", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [363.6759948730469, 402.0008239746094, 384.66632080078125, 426.24151611328125], "spans": [[2, 5]], "text": "0.955 0.955", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [396.20599365234375, 402.0008239746094, 417.1963195800781, 426.3042907714844], "spans": [[2, 6]], "text": "0.88 0.857", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [439.5270080566406, 402.0008239746094, 458.38336181640625, 426.3042907714844], "spans": [[2, 7]], "text": "2.73 5.39", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [161.906005859375, 383.17181396484375, 166.51473999023438, 394.46051025390625], "spans": [[3, 0]], "text": "4", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [209.50900268554688, 383.17181396484375, 214.11773681640625, 394.46051025390625], "spans": [[3, 1]], "text": "4", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [245.17599487304688, 375.6998291015625, 272.9449462890625, 399.93951416015625], "spans": [[3, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [289.0169982910156, 375.6998291015625, 
310.00732421875, 399.93951416015625], "spans": [[3, 3]], "text": "0.938 0.952", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [326.7170104980469, 388.65081787109375, 347.70733642578125, 399.93951416015625], "spans": [[3, 4]], "text": "0.904", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [363.6759948730469, 388.65081787109375, 384.66632080078125, 399.93951416015625], "spans": [[3, 5]], "text": "0.927", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [394.6180114746094, 388.5970153808594, 418.7779846191406, 400.0022888183594], "spans": [[3, 6]], "text": "0.853", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [439.5270080566406, 388.5970153808594, 458.38336181640625, 400.0022888183594], "spans": [[3, 7]], "text": "1.97", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": null, "spans": [[4, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": null, "spans": [[4, 1]], "text": "", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [245.17599487304688, 349.3988342285156, 272.9449462890625, 373.6385192871094], "spans": [[4, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [289.0169982910156, 362.3498229980469, 310.00732421875, 373.6385192871094], "spans": [[4, 3]], "text": "0.923", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [326.7170104980469, 349.3988342285156, 347.70733642578125, 386.988525390625], "spans": [[4, 4]], "text": "0.909 0.897 0.901", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [362.0880126953125, 362.3498229980469, 386.24798583984375, 387.0513000488281], "spans": [[4, 5]], "text": "0.938 0.915", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [396.20599365234375, 375.6998291015625, 417.1963195800781, 386.988525390625], "spans": [[4, 6]], "text": "0.843", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [440.7669982910156, 375.6998291015625, 457.150390625, 386.988525390625], "spans": [[4, 7]], "text": "3.77", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": [161.906005859375, 356.8708190917969, 166.51473999023438, 368.1595153808594], "spans": [[5, 0]], "text": "2", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [209.50900268554688, 356.8708190917969, 214.11773681640625, 368.1595153808594], "spans": [[5, 1]], "text": "4", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": null, "spans": [[5, 2]], "text": "", "type": "body", "col": 2, "col-header": false, 
"col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [289.0169982910156, 349.3988342285156, 310.00732421875, 360.6875305175781], "spans": [[5, 3]], "text": "0.945", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": null, "spans": [[5, 4]], "text": "", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [362.0880126953125, 349.34503173828125, 386.24798583984375, 360.75030517578125], "spans": [[5, 5]], "text": "0.931", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [394.6180114746094, 349.3988342285156, 418.7779846191406, 373.7012939453125], "spans": [[5, 6]], "text": "0.859 0.834", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [439.5270080566406, 349.3988342285156, 458.38336181640625, 373.7012939453125], "spans": [[5, 7]], "text": "1.91 3.81", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [161.906005859375, 330.5688171386719, 166.51473999023438, 341.8575134277344], "spans": [[6, 0]], "text": "4", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [209.50900268554688, 330.5688171386719, 214.11773681640625, 341.8575134277344], "spans": [[6, 1]], "text": "2", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [245.17599487304688, 323.0968322753906, 272.9449462890625, 347.3375244140625], "spans": [[6, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [289.0169982910156, 323.0968322753906, 310.00732421875, 347.3375244140625], "spans": [[6, 3]], "text": "0.952 0.944", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [326.7170104980469, 323.0968322753906, 347.70733642578125, 347.3375244140625], "spans": [[6, 4]], "text": "0.92 0.903", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [362.0880126953125, 323.0968322753906, 386.24798583984375, 347.4002990722656], "spans": [[6, 5]], "text": "0.942 0.931", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [394.6180114746094, 323.0968322753906, 418.7779846191406, 347.4002990722656], "spans": [[6, 6]], "text": "0.857 0.824", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [439.5270080566406, 323.0968322753906, 458.38336181640625, 347.4002990722656], "spans": [[6, 7]], "text": "1.22 2", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 6, "row-header": false, "row-span": [6, 7]}]], "model": null, "bounding-box": null}], "bitmaps": null, "equations": [], "footnotes": [], "page-dimensions": [{"height": 792.0, "page": 1, "width": 612.0}], "page-footers": [], "page-headers": [], "_s3_data": null, "identifiers": null}
\ No newline at end of file
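In the JSON fixture above, each table is stored under "data" as a row-major list of cell records with "text", "col-span" and "row-span" fields using half-open [start, end) index ranges (e.g. the "Language" header spans rows [0, 2)). Purely as an illustration of that layout, here is a sketch that expands the records into a dense text grid; it assumes only the field names visible in the fixture and is not docling API code.

# Sketch: expand the fixture's per-cell records into a dense n_rows x n_cols
# text grid; spanning cells repeat their text over every position they cover.
def table_data_to_grid(data, n_rows, n_cols):
    grid = [["" for _ in range(n_cols)] for _ in range(n_rows)]
    for row in data:              # "data" is a list of rows
        for cell in row:          # each row is a list of cell dicts
            r0, r1 = cell["row-span"]
            c0, c1 = cell["col-span"]
            for r in range(r0, r1):
                for c in range(c0, c1):
                    grid[r][c] = cell["text"]
    return grid

# e.g. table_data_to_grid(doc["tables"][0]["data"], n_rows=7, n_cols=8)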
diff --git a/tests/data/2305.03393v1.doctags.txt b/tests/data/2305.03393v1.doctags.txt
new file mode 100644
index 00000000..cca51d58
--- /dev/null
+++ b/tests/data/2305.03393v1.doctags.txt
@@ -0,0 +1,149 @@
+
+Optimized Table Tokenization for Table Structure Recognition
+Maksym Lysak [0000 - 0002 - 3723 - $^{6960]}$, Ahmed Nassar[0000 - 0002 - 9468 - $^{0822]}$, Nikolaos Livathinos [0000 - 0001 - 8513 - $^{3491]}$, Christoph Auer[0000 - 0001 - 5761 - $^{0422]}$, and Peter Staar [0000 - 0002 - 8088 - 0823]
+IBM Research {mly,ahn,nli,cau,taa}@zurich.ibm.com
+Abstract. Extracting tables from documents is a crucial task in any document conversion pipeline. Recently, transformer-based models have demonstrated that table-structure can be recognized with impressive accuracy using Image-to-Markup-Sequence (Im2Seq) approaches. Taking only the image of a table, such models predict a sequence of tokens (e.g. in HTML, LaTeX) which represent the structure of the table. Since the token representation of the table structure has a significant impact on the accuracy and run-time performance of any Im2Seq model, we investigate in this paper how table-structure representation can be optimised. We propose a new, optimised table-structure language (OTSL) with a minimized vocabulary and specific rules. The benefits of OTSL are that it reduces the number of tokens to 5 (HTML needs 28+) and shortens the sequence length to half of HTML on average. Consequently, model accuracy improves significantly, inference time is halved compared to HTML-based models, and the predicted table structures are always syntactically correct. This in turn eliminates most post-processing needs. Popular table structure data-sets will be published in OTSL format to the community.
+Keywords: Table Structure Recognition · Data Representation · Transformers · Optimization.
+1 Introduction
+Tables are ubiquitous in documents such as scientific papers, patents, reports, manuals, specification sheets or marketing material. They often encode highly valuable information and therefore need to be extracted with high accuracy. Unfortunately, tables appear in documents in various sizes, styling and structure, making it difficult to recover their correct structure with simple analytical methods. Therefore, accurate table extraction is achieved these days with machine-learning based methods.
+In modern document understanding systems [1,15], table extraction is typically a two-step process. Firstly, every table on a page is located with a bounding box, and secondly, their logical row and column structure is recognized. As of
+Fig. 1. Comparison between HTML and OTSL table structure representation: (A) table-example with complex row and column headers, including a 2D empty span, (B) minimal graphical representation of table structure using rectangular layout, (C) HTML representation, (D) OTSL representation. This example demonstrates many of the key-features of OTSL, namely its reduced vocabulary size (12 versus 5 in this case), its reduced sequence length (55 versus 30) and a enhanced internal structure (variable token sequence length per row in HTML versus a fixed length of rows in OTSL).
+
+today, table detection in documents is a well understood problem, and the latest state-of-the-art (SOTA) object detection methods provide an accuracy comparable to human observers [7,8,10,14,23]. On the other hand, the problem of table structure recognition (TSR) is a lot more challenging and remains a very active area of research, in which many novel machine learning algorithms are being explored [3,4,5,9,11,12,13,14,17,18,21,22].
+Recently emerging SOTA methods for table structure recognition employ transformer-based models, in which an image of the table is provided to the network in order to predict the structure of the table as a sequence of tokens. These image-to-sequence (Im2Seq) models are extremely powerful, since they allow for a purely data-driven solution. The tokens of the sequence typically belong to a markup language such as HTML, Latex or Markdown, which allow to describe table structure as rows, columns and spanning cells in various configurations. In Figure 1, we illustrate how HTML is used to represent the table-structure of a particular example table. Public table-structure data sets such as PubTabNet [22], and FinTabNet [21], which were created in a semi-automated way from paired PDF and HTML sources (e.g. PubMed Central), popularized primarily the use of HTML as ground-truth representation format for TSR.
+While the majority of research in TSR is currently focused on the development and application of novel neural model architectures, the table structure representation language (e.g. HTML in PubTabNet and FinTabNet) is usually adopted as is for the sequence tokenization in Im2Seq models. In this paper, we aim for the opposite and investigate the impact of the table structure representation language with an otherwise unmodified Im2Seq transformer-based architecture. Since the current state-of-the-art Im2Seq model is TableFormer [9], we select this model to perform our experiments.
+The main contribution of this paper is the introduction of a new optimised table structure language (OTSL), specifically designed to describe table-structure in an compact and structured way for Im2Seq models. OTSL has a number of key features, which make it very attractive to use in Im2Seq models. Specifically, compared to other languages such as HTML, OTSL has a minimized vocabulary which yields short sequence length, strong inherent structure (e.g. strict rectangular layout) and a strict syntax with rules that only look backwards. The latter allows for syntax validation during inference and ensures a syntactically correct table-structure. These OTSL features are illustrated in Figure 1, in comparison to HTML.
+The paper is structured as follows. In section 2, we give an overview of the latest developments in table-structure reconstruction. In section 3 we review the current HTML table encoding (popularised by PubTabNet and FinTabNet) and discuss its flaws. Subsequently, we introduce OTSL in section 4, which includes the language definition, syntax rules and error-correction procedures. In section 5, we apply OTSL on the TableFormer architecture, compare it to TableFormer models trained on HTML and ultimately demonstrate the advantages of using OTSL. Finally, in section 6 we conclude our work and outline next potential steps.
+2 Related Work
+Approaches to formalize the logical structure and layout of tables in electronic documents date back more than two decades [16]. In the recent past, a wide variety of computer vision methods have been explored to tackle the problem of table structure recognition, i.e. the correct identification of columns, rows and spanning cells in a given table. Broadly speaking, the current deeplearning based approaches fall into three categories: object detection (OD) methods, Graph-Neural-Network (GNN) methods and Image-to-Markup-Sequence (Im2Seq) methods. Object-detection based methods [11,12,13,14,21] rely on tablestructure annotation using (overlapping) bounding boxes for training, and produce bounding-box predictions to define table cells, rows, and columns on a table image. Graph Neural Network (GNN) based methods [3,6,17,18], as the name suggests, represent tables as graph structures. The graph nodes represent the content of each table cell, an embedding vector from the table image, or geometric coordinates of the table cell. The edges of the graph define the relationship between the nodes, e.g. if they belong to the same column, row, or table cell.
+Other work [20] aims at predicting a grid for each table and deciding which cells must be merged using an attention network. Im2Seq methods cast the problem as a sequence generation task [4,5,9,22], and therefore need an internal tablestructure representation language, which is often implemented with standard markup languages (e.g. HTML, LaTeX, Markdown). In theory, Im2Seq methods have a natural advantage over the OD and GNN methods by virtue of directly predicting the table-structure. As such, no post-processing or rules are needed in order to obtain the table-structure, which is necessary with OD and GNN approaches. In practice, this is not entirely true, because a predicted sequence of table-structure markup does not necessarily have to be syntactically correct. Hence, depending on the quality of the predicted sequence, some post-processing needs to be performed to ensure a syntactically valid (let alone correct) sequence.
+Within the Im2Seq method, we find several popular models, namely the encoder-dual-decoder model (EDD) [22], TableFormer [9], Tabsplitter[2] and Ye et. al. [19]. EDD uses two consecutive long short-term memory (LSTM) decoders to predict a table in HTML representation. The tag decoder predicts a sequence of HTML tags. For each decoded table cell (<td>), the attention is passed to the cell decoder to predict the content with an embedded OCR approach. The latter makes it susceptible to transcription errors in the cell content of the table. TableFormer address this reliance on OCR and uses two transformer decoders for HTML structure and cell bounding box prediction in an end-to-end architecture. The predicted cell bounding box is then used to extract text tokens from an originating (digital) PDF page, circumventing any need for OCR. TabSplitter [2] proposes a compact double-matrix representation of table rows and columns to do error detection and error correction of HTML structure sequences based on predictions from [19]. This compact double-matrix representation can not be used directly by the Img2seq model training, so the model uses HTML as an intermediate form. Chi et. al. [4] introduce a data set and a baseline method using bidirectional LSTMs to predict LaTeX code. Kayal [5] introduces Gated ResNet transformers to predict LaTeX code, and a separate OCR module to extract content.
+Im2Seq approaches have shown to be well-suited for the TSR task and allow a full end-to-end network design that can output the final table structure without pre- or post-processing logic. Furthermore, Im2Seq models have demonstrated to deliver state-of-the-art prediction accuracy [9]. This motivated the authors to investigate if the performance (both in accuracy and inference time) can be further improved by optimising the table structure representation language. We believe this is a necessary step before further improving neural network architectures for this task.
+3 Problem Statement
+All known Im2Seq based models for TSR fundamentally work in similar ways. Given an image of a table, the Im2Seq model predicts the structure of the table by generating a sequence of tokens. These tokens originate from a finite vocab-
+ulary and can be interpreted as a table structure. For example, with the HTML tokens <table>, </table>, <tr>, </tr>, <td> and </td>, one can construct simple table structures without any spanning cells. In reality though, one needs at least 28 HTML tokens to describe the most common complex tables observed in real-world documents [21,22], due to a variety of spanning cells definitions in the HTML token vocabulary.
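+For concreteness, the structure of a plain 2x2 table without spanning cells can be written with those six tokens alone; the short Python sketch below (our own illustration; cell text is omitted because structure decoders predict tags only) builds such a sequence, and it is the span-carrying cell tokens that push the vocabulary towards the 28 tokens mentioned above:
+
+# Structure-only HTML token sequence for a plain 2x2 table (no spanning cells).
+simple_2x2 = (
+    ["<table>"]
+    + ["<tr>", "<td>", "</td>", "<td>", "</td>", "</tr>"] * 2
+    + ["</table>"]
+)
+print(len(simple_2x2))  # 14 structural tokens for a 2x2 table
+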
+
+Fig. 2. Frequency of tokens in HTML and OTSL as they appear in PubTabNet.
+
+Obviously, HTML and other general-purpose markup languages were not designed for Im2Seq models. As such, they have some serious drawbacks. First, the token vocabulary needs to be artificially large in order to describe all plausible tabular structures. Since most Im2Seq models use an autoregressive approach, they generate the sequence token by token. Therefore, to reduce inference time, a shorter sequence length is critical. Every table-cell is represented by at least two tokens (<td> and </td>). Furthermore, when tokenizing the HTML structure, one needs to explicitly enumerate possible column-spans and row-spans as words. In practice, this ends up requiring 28 different HTML tokens (when including column- and row-spans up to 10 cells) just to describe every table in the PubTabNet dataset. Clearly, not every token is equally represented, as is depicted in Figure 2. This skewed distribution of tokens in combination with variable token row-length makes it challenging for models to learn the HTML structure.
+Additionally, it would be desirable if the representation would easily allow an early detection of invalid sequences on-the-go, before the prediction of the entire table structure is completed. HTML is not well-suited for this purpose as the verification of incomplete sequences is non-trivial or even impossible.
+In a valid HTML table, the token sequence must describe a 2D grid of table cells, serialised in row-major ordering, where each row and each column have the same length (while considering row- and column-spans). Furthermore, every opening tag in HTML needs to be matched by a closing tag in a correct hierarchical manner. Since the number of tokens for each table row and column can vary significantly, especially for large tables with many row- and column-spans, it is complex to verify the consistency of predicted structures during sequence
+generation. Implicitly, this also means that Im2Seq models need to learn these complex syntax rules, simply to deliver valid output.
+In practice, we observe two major issues with prediction quality when training Im2Seq models on HTML table structure generation from images. On the one hand, we find that on large tables, the visual attention of the model often starts to drift and no longer moves forward accurately cell by cell. This manifests itself either in an increasing location drift for proposed table-cells in later rows on the same column or even in a complete loss of vertical alignment, as illustrated in Figure 5. Addressing this with post-processing is partially possible, but clearly undesired. On the other hand, we find many instances of predictions with structural inconsistencies or plain invalid HTML output, as shown in Figure 6, which are nearly impossible to properly correct. Both problems seriously impact the TSR model performance, since they reflect not only in the task of pure structure recognition but also in the equally crucial recognition or matching of table cell content.
+4 Optimised Table Structure Language
+To mitigate the issues with HTML in Im2Seq-based TSR models laid out before, we propose here our Optimised Table Structure Language (OTSL). OTSL is designed to express table structure with a minimized vocabulary and a simple set of rules, which are both significantly reduced compared to HTML. At the same time, OTSL enables easy error detection and correction during sequence generation. We further demonstrate how the compact structure representation and minimized sequence length improves prediction accuracy and inference time in the TableFormer architecture.
+4.1 Language Definition
+In Figure 3, we illustrate how the OTSL is defined. In essence, the OTSL defines only 5 tokens that directly describe a tabular structure based on an atomic 2D grid.
+The OTSL vocabulary is comprised of the following tokens:
+-"C" cell a new table cell that either has or does not have cell content
+-"L" cell left-looking cell , merging with the left neighbor cell to create a span
+-"U" cell up-looking cell , merging with the upper neighbor cell to create a span
+-"X" cell cross cell , to merge with both left and upper neighbor cells
+-"NL" new-line , switch to the next row.
+A notable attribute of OTSL is that it allows lossless conversion to HTML.
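+As a small illustration (our own example, not the table shown in Figure 3), a three-column table whose top-left header cell spans two columns and whose first body cell spans two rows is encoded as follows:
+
+# Illustrative OTSL encoding of a 3x3 grid:
+#   row 1:  C  L  C  NL   ("L" merges left into the two-column header cell)
+#   row 2:  C  C  C  NL
+#   row 3:  U  C  C  NL   ("U" merges up into the two-row cell in column 1)
+otsl_example = ["C", "L", "C", "NL",
+                "C", "C", "C", "NL",
+                "U", "C", "C", "NL"]   # 12 tokens: 3 per row plus one "NL" each
+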
+
+Fig. 3. OTSL description of table structure: A - table example; B - graphical representation of table structure; C - mapping structure on a grid; D - OTSL structure encoding; E - explanation on cell encoding
+
+4.2 Language Syntax
+The OTSL representation follows these syntax rules:
+1. Left-looking cell rule: The left neighbour of an "L" cell must be either another "L" cell or a "C" cell.
+2. Up-looking cell rule: The upper neighbour of a "U" cell must be either another "U" cell or a "C" cell.
+3. Cross cell rule: The left neighbour of an "X" cell must be either another "X" cell or a "U" cell, and the upper neighbour of an "X" cell must be either another "X" cell or an "L" cell.
+4. First row rule: Only "L" cells and "C" cells are allowed in the first row.
+5. First column rule: Only "U" cells and "C" cells are allowed in the first column.
+6. Rectangular rule: The table representation is always rectangular - all rows must have an equal number of tokens, terminated with an "NL" token.
+The application of these rules gives OTSL a set of unique properties. First of all, the OTSL enforces a strictly rectangular structure representation, where every new-line token starts a new row. As a consequence, all rows and all columns have exactly the same number of tokens, irrespective of cell spans. Secondly, the OTSL representation is unambiguous: every table structure is represented in exactly one way. In this representation, every table cell corresponds to a "C"-cell token, which in case of spans is always located in the top-left corner of the table cell definition. Third, OTSL syntax rules are only backward-looking. As a consequence, every predicted token can be validated directly during sequence generation by looking at the previously predicted sequence. As such, OTSL can guarantee that every predicted sequence is syntactically valid.
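+To make the backward-looking property concrete, the following minimal Python sketch (our own illustration; the function name and structure are not taken from the paper or from the docling code base) checks whether a candidate token is a valid continuation of an already generated OTSL prefix, using only previously predicted tokens:
+
+# Backward-looking OTSL syntax check (illustrative sketch only).
+def is_valid_next(prefix, tok):
+    rows = [[]]
+    for t in prefix:                   # rebuild the (partial) grid from the prefix
+        if t == "NL":
+            rows.append([])
+        else:
+            rows[-1].append(t)
+    cur = rows[-1]                     # row currently being generated
+    col = len(cur)                     # column the new token would occupy
+    ncols = len(rows[0]) if len(rows) > 1 else None
+    if tok == "NL":                    # rectangular rule
+        return col > 0 and (ncols is None or col == ncols)
+    if ncols is not None and col >= ncols:
+        return False                   # row is already full, only "NL" may follow
+    left = cur[col - 1] if col > 0 else None
+    up = rows[-2][col] if len(rows) > 1 else None
+    if tok == "C":
+        return True
+    if tok == "L":                     # left-looking rule (also bans "L" in the first column)
+        return left in ("L", "C")
+    if tok == "U":                     # up-looking rule (also bans "U" in the first row)
+        return up in ("U", "C")
+    if tok == "X":                     # cross cell rule
+        return left in ("X", "U") and up in ("X", "L")
+    return False
+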
+These characteristics can be easily learned by sequence generator networks, as we demonstrate further below. We find strong indications that this pattern
+significantly reduces the column drift seen in the HTML-based models (see Figure 5).
+4.3 Error-detection and -mitigation
+The design of OTSL makes it easy to validate a table structure even on an unfinished sequence. The detection of an invalid sequence token is a clear indication of a prediction mistake; however, a valid sequence by itself does not guarantee prediction correctness. Different heuristics can be used to correct token errors in an invalid sequence and thus increase the chances for accurate predictions. Such heuristics can be applied either after the prediction of each token, or at the end on the entire predicted sequence. For example, a simple heuristic which can correct the predicted OTSL sequence on-the-fly is to verify whether the token with the highest prediction confidence invalidates the predicted sequence, and, if so, to replace it by the token with the next highest confidence until the OTSL rules are satisfied.
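+A minimal sketch of this on-the-fly heuristic (our own illustration; it assumes a per-step confidence value for each of the five tokens and reuses the is_valid_next check sketched above):
+
+# If the top-ranked token would invalidate the sequence, fall back to the next
+# most confident token that keeps the prefix syntactically valid.
+def pick_token(prefix, token_probs):
+    ranked = sorted(token_probs, key=token_probs.get, reverse=True)
+    for tok in ranked:
+        if is_valid_next(prefix, tok):
+            return tok
+    return None                        # no valid continuation exists
+
+# Example: after a single "C" in the first row, "U" is invalid and "C" is chosen.
+print(pick_token(["C"], {"U": 0.6, "C": 0.3, "L": 0.05, "X": 0.03, "NL": 0.02}))  # "C"
+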
+5 Experiments
+To evaluate the impact of OTSL on prediction accuracy and inference times, we conducted a series of experiments based on the TableFormer model (Figure 4) with two objectives: Firstly, we evaluate the prediction quality and performance of OTSL vs. HTML after performing Hyper Parameter Optimization (HPO) on the canonical PubTabNet data set. Secondly, we pick the best hyper-parameters found in the first step and evaluate how OTSL impacts the performance of TableFormer after training on other publicly available data sets (FinTabNet, PubTables-1M [14]). The ground truth (GT) from all data sets has been converted into OTSL format for this purpose, and will be made publicly available.
+
+Fig. 4. Architecture sketch of the TableFormer model, which is a representative for the Im2Seq approach.
+
+We rely on standard metrics such as Tree Edit Distance score (TEDs) for table structure prediction, and Mean Average Precision (mAP) with 0.75 Intersection Over Union (IOU) threshold for the bounding-box predictions of table cells. The predicted OTSL structures were converted back to HTML format in
+order to compute the TED score. Inference timing results for all experiments were obtained from the same machine on a single core with AMD EPYC 7763 CPU @2.45 GHz.
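+Since TED is computed on HTML trees, the predicted OTSL grid has to be mapped back to HTML tags. A simplified sketch of such a conversion (our own illustration; it emits structure tags only, without <thead>/<tbody> grouping or cell text, which a complete implementation would also have to handle):
+
+# Simplified OTSL -> HTML structure conversion (illustrative sketch only).
+def otsl_to_html(tokens):
+    rows, row = [], []
+    for t in tokens:                   # split the flat sequence into grid rows
+        if t == "NL":
+            rows.append(row)
+            row = []
+        else:
+            row.append(t)
+    n_rows = len(rows)
+    n_cols = len(rows[0]) if rows else 0
+    out = ["<table>"]
+    for r in range(n_rows):
+        out.append("<tr>")
+        for c in range(n_cols):
+            if rows[r][c] != "C":      # "L"/"U"/"X" cells are covered by a span
+                continue
+            colspan = 1                # consecutive "L" cells to the right extend the span
+            while c + colspan < n_cols and rows[r][c + colspan] == "L":
+                colspan += 1
+            rowspan = 1                # consecutive "U" cells below extend the span
+            while r + rowspan < n_rows and rows[r + rowspan][c] == "U":
+                rowspan += 1
+            attrs = ""
+            if colspan > 1:
+                attrs += f' colspan="{colspan}"'
+            if rowspan > 1:
+                attrs += f' rowspan="{rowspan}"'
+            out.append(f"<td{attrs}></td>")
+        out.append("</tr>")
+    out.append("</table>")
+    return "".join(out)
+
+print(otsl_to_html(["C", "L", "C", "NL", "C", "C", "C", "NL", "U", "C", "C", "NL"]))
+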
+5.1 Hyper Parameter Optimization
+We have chosen the PubTabNet data set to perform HPO, since it includes a highly diverse set of tables. We also report TED scores separately for simple and complex tables (tables with cell spans). Results are presented in Table 1. It is evident that with OTSL, our model achieves the same TED score and slightly better mAP scores in comparison to HTML. However, OTSL yields a 2x speed-up in inference runtime over HTML.
+
+Table 1. HPO performed in OTSL and HTML representation on the same transformer-based TableFormer [9] architecture, trained only on PubTabNet [22]. Effects of reducing the # of layers in encoder and decoder stages of the model show that smaller models trained on OTSL perform better, especially in recognizing complex table structures, and maintain a much higher mAP score than the HTML counterpart.
+
+5.2 Quantitative Results
+We picked the model parameter configuration that produced the best prediction quality (enc=6, dec=6, heads=8) with PubTabNet alone, then independently trained and evaluated it on three publicly available data sets: PubTabNet (395k samples), FinTabNet (113k samples) and PubTables-1M (about 1M samples). Performance results are presented in Table 2. It is clearly evident that the model trained on OTSL outperforms HTML across the board, keeping high TEDs and mAP scores even on difficult financial tables (FinTabNet) that contain sparse and large tables.
+Additionally, the results show that OTSL has an advantage over HTML when applied on a bigger data set like PubTables-1M and achieves significantly improved scores. Finally, OTSL achieves faster inference due to fewer decoding steps which is a result of the reduced sequence representation.
+
+Table 2. TSR and cell detection results compared between OTSL and HTML on the PubTabNet [22], FinTabNet [21] and PubTables-1M [14] data sets using TableFormer [9] (with enc=6, dec=6, heads=8).
+
+Data set | Language | TEDs (simple) | TEDs (complex) | TEDs (all) | mAP(0.75) | Inference time (secs)
+PubTabNet | OTSL | 0.965 | 0.934 | 0.955 | 0.88 | 2.73
+PubTabNet | HTML | 0.969 | 0.927 | 0.955 | 0.857 | 5.39
+FinTabNet | OTSL | 0.955 | 0.961 | 0.959 | 0.862 | 1.85
+FinTabNet | HTML | 0.917 | 0.922 | 0.92 | 0.722 | 3.26
+PubTables-1M | OTSL | 0.987 | 0.964 | 0.977 | 0.896 | 1.79
+PubTables-1M | HTML | 0.983 | 0.944 | 0.966 | 0.889 | 3.26
+
+5.3 Qualitative Results
+To illustrate the qualitative differences between OTSL and HTML, Figure 5 demonstrates less overlap and more accurate bounding boxes with OTSL. In Figure 6, OTSL proves to be more effective in handling tables with longer token sequences, resulting in even more precise structure prediction and bounding boxes.
+
+Fig. 5. The OTSL model produces more accurate bounding boxes with less overlap (E) than the HTML model (D), when predicting the structure of a sparse table (A), at twice the inference speed because of shorter sequence length (B),(C). "PMC2807444_006_00.png" PubTabNet.
+
+Fig. 6. Visualization of predicted structure and detected bounding boxes on a complex table with many rows. The OTSL model (B) captured the repeating pattern of horizontally merged cells from the GT (A), unlike the HTML model (C). The HTML model also did not complete the HTML sequence correctly and displayed a lot more drift and overlap of bounding boxes. "PMC5406406_003_01.png" PubTabNet.
+
+6 Conclusion
+We demonstrated that HTML is ill-suited for representing tables in Im2Seq-based table structure recognition and has serious limitations. Furthermore, we presented in this paper an Optimized Table Structure Language (OTSL) which, when compared to commonly used general-purpose languages, has several key benefits.
+First and foremost, given the same network configuration, inference time for a table-structure prediction is about 2 times faster compared to the conventional HTML approach. This is primarily due to the shorter sequence length of the OTSL representation. Additional performance benefits can be obtained with HPO (hyper parameter optimization). As we demonstrate in our experiments, models trained on OTSL can be significantly smaller, e.g. by reducing the number of encoder and decoder layers, while preserving comparatively good prediction quality. This can further improve inference performance, yielding 5-6 times faster inference speed in OTSL with prediction quality comparable to models trained on HTML (see Table 1).
+Secondly, OTSL has more inherent structure and a significantly restricted vocabulary size. This allows autoregressive models to perform better in the TED metric, but especially with regard to prediction accuracy of the table-cell bounding boxes (see Table 2). As shown in Figure 5, we observe that OTSL drastically reduces the drift for table cell bounding boxes at high row count and in sparse tables. This leads to more accurate predictions and a significant reduction in post-processing complexity, which is an undesired necessity in HTML-based Im2Seq models. A significant novelty lies in the OTSL syntax rules, which are few, simple and always backward-looking. Each new token can be validated only by analyzing the sequence of previous tokens, without requiring the entire sequence to detect mistakes. This in turn allows structural error detection and correction to be performed on-the-fly during sequence generation.
+References
+1. Auer, C., Dolfi, M., Carvalho, A., Ramis, C.B., Staar, P.W.J.: Delivering document conversion as a cloud service with high throughput and responsiveness. CoRR abs/2206.00785 (2022). https://doi.org/10.48550/arXiv.2206.00785 , https://doi.org/10.48550/arXiv.2206.00785
+2. Chen, B., Peng, D., Zhang, J., Ren, Y., Jin, L.: Complex table structure recognition in the wild using transformer and identity matrix-based augmentation. In: Porwal, U., Fornés, A., Shafait, F. (eds.) Frontiers in Handwriting Recognition. pp. 545-561. Springer International Publishing, Cham (2022)
+3. Chi, Z., Huang, H., Xu, H.D., Yu, H., Yin, W., Mao, X.L.: Complicated table structure recognition. arXiv preprint arXiv:1908.04729 (2019)
+4. Deng, Y., Rosenberg, D., Mann, G.: Challenges in end-to-end neural scientific table recognition. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 894-901. IEEE (2019)
+5. Kayal, P., Anand, M., Desai, H., Singh, M.: Tables to latex: structure and content extraction from scientific tables. International Journal on Document Analysis and Recognition (IJDAR) pp. 1-10 (2022)
+6. Lee, E., Kwon, J., Yang, H., Park, J., Lee, S., Koo, H.I., Cho, N.I.: Table structure recognition based on grid shape graph. In: 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). pp. 1868-1873. IEEE (2022)
+7. Li, M., Cui, L., Huang, S., Wei, F., Zhou, M., Li, Z.: Tablebank: A benchmark dataset for table detection and recognition (2019)
+8. Livathinos, N., Berrospi, C., Lysak, M., Kuropiatnyk, V., Nassar, A., Carvalho, A., Dolfi, M., Auer, C., Dinkla, K., Staar, P.: Robust pdf document conversion using recurrent neural networks. Proceedings of the AAAI Conference on Artificial Intelligence 35(17), 15137-15145 (May 2021), https://ojs.aaai.org/index.php/AAAI/article/view/17777
+9. Nassar, A., Livathinos, N., Lysak, M., Staar, P.: Tableformer: Table structure understanding with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4614-4623 (June 2022)
+10. Pfitzmann, B., Auer, C., Dolfi, M., Nassar, A.S., Staar, P.W.J.: Doclaynet: A large human-annotated dataset for document-layout segmentation. In: Zhang, A., Rangwala, H. (eds.) KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 14 - 18, 2022. pp. 3743-3751. ACM (2022). https://doi.org/10.1145/3534678.3539043
+11. Prasad, D., Gadpal, A., Kapadni, K., Visave, M., Sultanpure, K.: Cascadetabnet: An approach for end to end table detection and structure recognition from imagebased documents. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops. pp. 572-573 (2020)
+12. Schreiber, S., Agne, S., Wolf, I., Dengel, A., Ahmed, S.: Deepdesrt: Deep learning for detection and structure recognition of tables in document images. In: 2017 14th IAPR international conference on document analysis and recognition (ICDAR). vol. 1, pp. 1162-1167. IEEE (2017)
+13. Siddiqui, S.A., Fateh, I.A., Rizvi, S.T.R., Dengel, A., Ahmed, S.: Deeptabstr: Deep learning based table structure recognition. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 1403-1409 (2019). https://doi.org/10.1109/ICDAR.2019.00226
+14. Smock, B., Pesala, R., Abraham, R.: PubTables-1M: Towards comprehensive table extraction from unstructured documents. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4634-4642 (June 2022)
+15. Staar, P.W.J., Dolfi, M., Auer, C., Bekas, C.: Corpus conversion service: A machine learning platform to ingest documents at scale. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. pp. 774-782. KDD '18, Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3219819.3219834
+16. Wang, X.: Tabular Abstraction, Editing, and Formatting. Ph.D. thesis, CAN (1996), AAINN09397
+17. Xue, W., Li, Q., Tao, D.: Res2tim: Reconstruct syntactic structures from table images. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 749-755. IEEE (2019)
+18. Xue, W., Yu, B., Wang, W., Tao, D., Li, Q.: Tgrnet: A table graph reconstruction network for table structure recognition. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 1295-1304 (2021)
+19. Ye, J., Qi, X., He, Y., Chen, Y., Gu, D., Gao, P., Xiao, R.: Pingan-vcgroup's solution for icdar 2021 competition on scientific literature parsing task b: Table recognition to html (2021). https://doi.org/10.48550/ARXIV.2105.01848 , https://arxiv.org/abs/2105.01848
+20. Zhang, Z., Zhang, J., Du, J., Wang, F.: Split, embed and merge: An accurate table structure recognizer. Pattern Recognition 126 , 108565 (2022)
+21. Zheng, X., Burdick, D., Popa, L., Zhong, X., Wang, N.X.R.: Global table extractor (gte): A framework for joint table identification and cell structure recognition using visual context. In: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 697-706 (2021). https://doi.org/10.1109/WACV48630.2021.00074
+22. Zhong, X., ShafieiBavani, E., Jimeno Yepes, A.: Image-based table recognition: Data, model, and evaluation. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds.) Computer Vision - ECCV 2020. pp. 564-580. Springer International Publishing, Cham (2020)
+23. Zhong, X., Tang, J., Yepes, A.J.: Publaynet: largest dataset ever for document layout analysis. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 1015-1022. IEEE (2019)
+
\ No newline at end of file
diff --git a/tests/data/2305.03393v1.json b/tests/data/2305.03393v1.json
index a547d423..e3174b78 100644
--- a/tests/data/2305.03393v1.json
+++ b/tests/data/2305.03393v1.json
@@ -1 +1 @@
-{"_name": "", "type": "pdf-document", "description": {"title": null, "abstract": null, "authors": null, "affiliations": null, "subjects": null, "keywords": null, "publication_date": null, "languages": null, "license": null, "publishers": null, "url_refs": null, "references": null, "publication": null, "reference_count": null, "citation_count": null, "citation_date": null, "advanced": null, "analytics": null, "logs": [], "collection": null, "acquisition": null}, "file-info": {"filename": "2305.03393v1.pdf", "filename-prov": null, "document-hash": "62f2a2163d768d5b125a207967797aefa6c9cc113de8bb5c725c582595dd0c1d", "#-pages": 14, "collection-name": null, "description": null, "page-hashes": [{"hash": "7d7ef24bf2a048bcc229d37583b737ee85f67a02864236764abcaca9eabc8b68", "model": "default", "page": 1}, {"hash": "45bd6ad4d3e145029fa89fbf741a81d8885eb87ef03d6744221c61e66358451b", "model": "default", "page": 2}, {"hash": "69656f07bd8fb7afc53ab6f3d0e9153a337b550522493bf18d702c8406a9c545", "model": "default", "page": 3}, {"hash": "5afca9340c5bda646a75b8c2a1bde1b8f7b89e08a64a3cc4732fd11c1c6ead48", "model": "default", "page": 4}, {"hash": "d3b9daa8fd5d091fb5ef9bce44f085dd282a137e215574fec9556904b25cea8a", "model": "default", "page": 5}, {"hash": "eaaaaebf96b567c9bd5696b2dd4d747b3b3ad40e15ca8dc8968c56060315f228", "model": "default", "page": 6}, {"hash": "d786b8d564d7a7c122f2cf573f0cc1f11ea0a559d93f19cf020c11360bce00b4", "model": "default", "page": 7}, {"hash": "839d5ba3f9d079e8b42470002e4d7cb9ac60681cd9e2f2e3bf41afa6884a170e", "model": "default", "page": 8}, {"hash": "d50e5f3b8b4d1d5b04d5b253b187da6f40784bee5bf36b7eaefcabbc89e7b7a9", "model": "default", "page": 9}, {"hash": "a1509c4093fe25dbcb07c87f394506182323289a17dd189679c0b6d8238c5aae", "model": "default", "page": 10}, {"hash": "ac5ff01e648170bbe641d6fd95dc4f952a8e0bf62308f109b7c49678cef97005", "model": "default", "page": 11}, {"hash": "6a9aa589dc4faead43b032ec733af0c4a6fedfa834aa56b1bfefc7458ea949cc", "model": "default", "page": 12}, {"hash": "467ed0563b555b6fd2a0bd2e4a7bf596c066b8f08d2e1fd33f6c6d8b1c445759", "model": "default", "page": 13}, {"hash": "435efd2ece1dfed60a8dcc1f7fd72dde2cb58c59f5aebc4d5ae2227510195b42", "model": "default", "page": 14}]}, "main-text": [{"text": "arXiv:2305.03393v1 [cs.CV] 5 May 2023", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [16.329214096069336, 236.99996948242188, 36.6031608581543, 582.52001953125], "page": 1, "span": [0, 37], "__ref_s3_data": null}]}, {"text": "Optimized Table Tokenization for Table Structure Recognition", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.61328125, 644.6187133789062, 480.59735107421875, 676.8052978515625], "page": 1, "span": [0, 60], "__ref_s3_data": null}]}, {"text": "Maksym Lysak [0000 - 0002 - 3723 - $^{6960]}$, Ahmed Nassar[0000 - 0002 - 9468 - $^{0822]}$, Nikolaos Livathinos [0000 - 0001 - 8513 - $^{3491]}$, Christoph Auer[0000 - 0001 - 5761 - $^{0422]}$, and Peter Staar [0000 - 0002 - 8088 - 0823]", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [138.6561737060547, 587.6192626953125, 476.05718994140625, 623.0816650390625], "page": 1, "span": [0, 238], "__ref_s3_data": null}]}, {"text": "IBM Research {mly,ahn,nli,cau,taa}@zurich.ibm.com", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [222.96609497070312, 555.623046875, 392.69110107421875, 575.94482421875], "page": 1, "span": [0, 49], "__ref_s3_data": null}]}, {"text": "Abstract. 
Extracting tables from documents is a crucial task in any document conversion pipeline. Recently, transformer-based models have demonstrated that table-structure can be recognized with impressive accuracy using Image-to-Markup-Sequence (Im2Seq) approaches. Taking only the image of a table, such models predict a sequence of tokens (e.g. in HTML, LaTeX) which represent the structure of the table. Since the token representation of the table structure has a significant impact on the accuracy and run-time performance of any Im2Seq model, we investigate in this paper how table-structure representation can be optimised. We propose a new, optimised table-structure language (OTSL) with a minimized vocabulary and specific rules. The benefits of OTSL are that it reduces the number of tokens to 5 (HTML needs 28+) and shortens the sequence length to half of HTML on average. Consequently, model accuracy improves significantly, inference time is halved compared to HTML-based models, and the predicted table structures are always syntactically correct. This in turn eliminates most post-processing needs. Popular table structure data-sets will be published in OTSL format to the community.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [162.13674926757812, 327.2655334472656, 452.4198913574219, 522.533447265625], "page": 1, "span": [0, 1198], "__ref_s3_data": null}]}, {"text": "Keywords: Table Structure Recognition \u00b7 Data Representation \u00b7 Transformers \u00b7 Optimization.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [162.6794891357422, 293.8035888671875, 452.2415771484375, 314.24090576171875], "page": 1, "span": [0, 90], "__ref_s3_data": null}]}, {"text": "1 Introduction", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.76512145996094, 259.3119201660156, 228.933837890625, 270.5150451660156], "page": 1, "span": [0, 14], "__ref_s3_data": null}]}, {"text": "Tables are ubiquitous in documents such as scientific papers, patents, reports, manuals, specification sheets or marketing material. They often encode highly valuable information and therefore need to be extracted with high accuracy. Unfortunately, tables appear in documents in various sizes, styling and structure, making it difficult to recover their correct structure with simple analytical methods. Therefore, accurate table extraction is achieved these days with machine-learning based methods.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [134.01023864746094, 163.12771606445312, 480.595947265625, 244.2879638671875], "page": 1, "span": [0, 500], "__ref_s3_data": null}]}, {"text": "In modern document understanding systems [1,15], table extraction is typically a two-step process. Firstly, every table on a page is located with a bounding box, and secondly, their logical row and column structure is recognized. As of", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [134.044189453125, 126.84117889404297, 480.74835205078125, 160.30677795410156], "page": 1, "span": [0, 235], "__ref_s3_data": null}]}, {"text": "2", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [134.28973388671875, 690.1593017578125, 139.494384765625, 698.4556884765625], "page": 2, "span": [0, 1], "__ref_s3_data": null}]}, {"text": "M. 
Lysak, et al.", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [167.312744140625, 689.8800048828125, 231.72227478027344, 699.0272827148438], "page": 2, "span": [0, 16], "__ref_s3_data": null}]}, {"text": "Fig. 1. Comparison between HTML and OTSL table structure representation: (A) table-example with complex row and column headers, including a 2D empty span, (B) minimal graphical representation of table structure using rectangular layout, (C) HTML representation, (D) OTSL representation. This example demonstrates many of the key-features of OTSL, namely its reduced vocabulary size (12 versus 5 in this case), its reduced sequence length (55 versus 30) and a enhanced internal structure (variable token sequence length per row in HTML versus a fixed length of rows in OTSL).", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.99227905273438, 591.5379028320312, 480.7561950683594, 666.4251098632812], "page": 2, "span": [0, 574], "__ref_s3_data": null}]}, {"name": "Picture", "type": "figure", "$ref": "#/figures/0"}, {"text": "today, table detection in documents is a well understood problem, and the latest state-of-the-art (SOTA) object detection methods provide an accuracy comparable to human observers [7,8,10,14,23]. On the other hand, the problem of table structure recognition (TSR) is a lot more challenging and remains a very active area of research, in which many novel machine learning algorithms are being explored [3,4,5,9,11,12,13,14,17,18,21,22].", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.9597930908203, 270.46295166015625, 480.5923156738281, 340.515380859375], "page": 2, "span": [0, 435], "__ref_s3_data": null}]}, {"text": "Recently emerging SOTA methods for table structure recognition employ transformer-based models, in which an image of the table is provided to the network in order to predict the structure of the table as a sequence of tokens. These image-to-sequence (Im2Seq) models are extremely powerful, since they allow for a purely data-driven solution. The tokens of the sequence typically belong to a markup language such as HTML, Latex or Markdown, which allow to describe table structure as rows, columns and spanning cells in various configurations. In Figure 1, we illustrate how HTML is used to represent the table-structure of a particular example table. Public table-structure data sets such as PubTabNet [22], and FinTabNet [21], which were created in a semi-automated way from paired PDF and HTML sources (e.g. PubMed Central), popularized primarily the use of HTML as ground-truth representation format for TSR.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.86209106445312, 126.80567932128906, 480.5948181152344, 268.64990234375], "page": 2, "span": [0, 911], "__ref_s3_data": null}]}, {"text": "Optimized Table Tokenization for Table Structure Recognition", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [194.0343780517578, 689.6653442382812, 447.54290771484375, 698.948486328125], "page": 3, "span": [0, 60], "__ref_s3_data": null}]}, {"text": "3", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [474.95513916015625, 690.1593017578125, 480.59124755859375, 698.3677978515625], "page": 3, "span": [0, 1], "__ref_s3_data": null}]}, {"text": "While the majority of research in TSR is currently focused on the development and application of novel neural model architectures, the table structure representation language (e.g. 
HTML in PubTabNet and FinTabNet) is usually adopted as is for the sequence tokenization in Im2Seq models. In this paper, we aim for the opposite and investigate the impact of the table structure representation language with an otherwise unmodified Im2Seq transformer-based architecture. Since the current state-of-the-art Im2Seq model is TableFormer [9], we select this model to perform our experiments.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.981201171875, 579.9556884765625, 480.7418212890625, 673.815185546875], "page": 3, "span": [0, 584], "__ref_s3_data": null}]}, {"text": "The main contribution of this paper is the introduction of a new optimised table structure language (OTSL), specifically designed to describe table-structure in an compact and structured way for Im2Seq models. OTSL has a number of key features, which make it very attractive to use in Im2Seq models. Specifically, compared to other languages such as HTML, OTSL has a minimized vocabulary which yields short sequence length, strong inherent structure (e.g. strict rectangular layout) and a strict syntax with rules that only look backwards. The latter allows for syntax validation during inference and ensures a syntactically correct table-structure. These OTSL features are illustrated in Figure 1, in comparison to HTML.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.7724151611328, 460.7701416015625, 480.87481689453125, 577.6600341796875], "page": 3, "span": [0, 721], "__ref_s3_data": null}]}, {"text": "The paper is structured as follows. In section 2, we give an overview of the latest developments in table-structure reconstruction. In section 3 we review the current HTML table encoding (popularised by PubTabNet and FinTabNet) and discuss its flaws. Subsequently, we introduce OTSL in section 4, which includes the language definition, syntax rules and error-correction procedures. In section 5, we apply OTSL on the TableFormer architecture, compare it to TableFormer models trained on HTML and ultimately demonstrate the advantages of using OTSL. Finally, in section 6 we conclude our work and outline next potential steps.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.7509765625, 352.1451110839844, 480.6080017089844, 458.64886474609375], "page": 3, "span": [0, 626], "__ref_s3_data": null}]}, {"text": "2 Related Work", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.4993896484375, 319.3436584472656, 236.76913452148438, 330.5750732421875], "page": 3, "span": [0, 14], "__ref_s3_data": null}]}, {"text": "Approaches to formalize the logical structure and layout of tables in electronic documents date back more than two decades [16]. In the recent past, a wide variety of computer vision methods have been explored to tackle the problem of table structure recognition, i.e. the correct identification of columns, rows and spanning cells in a given table. Broadly speaking, the current deeplearning based approaches fall into three categories: object detection (OD) methods, Graph-Neural-Network (GNN) methods and Image-to-Markup-Sequence (Im2Seq) methods. Object-detection based methods [11,12,13,14,21] rely on tablestructure annotation using (overlapping) bounding boxes for training, and produce bounding-box predictions to define table cells, rows, and columns on a table image. Graph Neural Network (GNN) based methods [3,6,17,18], as the name suggests, represent tables as graph structures. 
The graph nodes represent the content of each table cell, an embedding vector from the table image, or geometric coordinates of the table cell. The edges of the graph define the relationship between the nodes, e.g. if they belong to the same column, row, or table cell.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.65347290039062, 126.65711212158203, 484.1204833984375, 304.6298522949219], "page": 3, "span": [0, 1161], "__ref_s3_data": null}]}, {"text": "4 M. Lysak, et al.", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [134.52096557617188, 690.1593017578125, 231.72227478027344, 699.0346069335938], "page": 4, "span": [0, 18], "__ref_s3_data": null}]}, {"text": "Other work [20] aims at predicting a grid for each table and deciding which cells must be merged using an attention network. Im2Seq methods cast the problem as a sequence generation task [4,5,9,22], and therefore need an internal tablestructure representation language, which is often implemented with standard markup languages (e.g. HTML, LaTeX, Markdown). In theory, Im2Seq methods have a natural advantage over the OD and GNN methods by virtue of directly predicting the table-structure. As such, no post-processing or rules are needed in order to obtain the table-structure, which is necessary with OD and GNN approaches. In practice, this is not entirely true, because a predicted sequence of table-structure markup does not necessarily have to be syntactically correct. Hence, depending on the quality of the predicted sequence, some post-processing needs to be performed to ensure a syntactically valid (let alone correct) sequence.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.7613983154297, 532.5480346679688, 480.6270446777344, 674.1491088867188], "page": 4, "span": [0, 939], "__ref_s3_data": null}]}, {"text": "Within the Im2Seq method, we find several popular models, namely the encoder-dual-decoder model (EDD) [22], TableFormer [9], Tabsplitter[2] and Ye et. al. [19]. EDD uses two consecutive long short-term memory (LSTM) decoders to predict a table in HTML representation. The tag decoder predicts a sequence of HTML tags. For each decoded table cell (
), the attention is passed to the cell decoder to predict the content with an embedded OCR approach. The latter makes it susceptible to transcription errors in the cell content of the table. TableFormer address this reliance on OCR and uses two transformer decoders for HTML structure and cell bounding box prediction in an end-to-end architecture. The predicted cell bounding box is then used to extract text tokens from an originating (digital) PDF page, circumventing any need for OCR. TabSplitter [2] proposes a compact double-matrix representation of table rows and columns to do error detection and error correction of HTML structure sequences based on predictions from [19]. This compact double-matrix representation can not be used directly by the Img2seq model training, so the model uses HTML as an intermediate form. Chi et. al. [4] introduce a data set and a baseline method using bidirectional LSTMs to predict LaTeX code. Kayal [5] introduces Gated ResNet transformers to predict LaTeX code, and a separate OCR module to extract content.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.5825958251953, 305.3533020019531, 480.7930908203125, 530.6050415039062], "page": 4, "span": [0, 1404], "__ref_s3_data": null}]}, {"text": "Im2Seq approaches have shown to be well-suited for the TSR task and allow a full end-to-end network design that can output the final table structure without pre- or post-processing logic. Furthermore, Im2Seq models have demonstrated to deliver state-of-the-art prediction accuracy [9]. This motivated the authors to investigate if the performance (both in accuracy and inference time) can be further improved by optimising the table structure representation language. We believe this is a necessary step before further improving neural network architectures for this task.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.88829040527344, 209.4513397216797, 480.5937805175781, 303.2884216308594], "page": 4, "span": [0, 572], "__ref_s3_data": null}]}, {"text": "3 Problem Statement", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.42018127441406, 175.88177490234375, 269.6244201660156, 186.8051300048828], "page": 4, "span": [0, 19], "__ref_s3_data": null}]}, {"text": "All known Im2Seq based models for TSR fundamentally work in similar ways. Given an image of a table, the Im2Seq model predicts the structure of the table by generating a sequence of tokens. These tokens originate from a finite vocab-", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.80313110351562, 126.69752502441406, 480.59368896484375, 160.46705627441406], "page": 4, "span": [0, 233], "__ref_s3_data": null}]}, {"text": "Optimized Table Tokenization for Table Structure Recognition", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [194.02210998535156, 689.8338623046875, 447.54290771484375, 698.9061889648438], "page": 5, "span": [0, 60], "__ref_s3_data": null}]}, {"text": "5", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [475.1318664550781, 690.1593017578125, 480.59124755859375, 698.4717407226562], "page": 5, "span": [0, 1], "__ref_s3_data": null}]}, {"text": "ulary and can be interpreted as a table structure. For example, with the HTML tokens
,
,
,
,
and
, one can construct simple table structures without any spanning cells. In reality though, one needs at least 28 HTML tokens to describe the most common complex tables observed in real-world documents [21,22], due to a variety of spanning cells definitions in the HTML token vocabulary.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.90025329589844, 604.4931640625, 480.7872619628906, 673.93798828125], "page": 5, "span": [0, 422], "__ref_s3_data": null}]}, {"text": "Fig. 2. Frequency of tokens in HTML and OTSL as they appear in PubTabNet.", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [145.19676208496094, 562.5794677734375, 469.7522277832031, 571.8128051757812], "page": 5, "span": [0, 73], "__ref_s3_data": null}]}, {"name": "Picture", "type": "figure", "$ref": "#/figures/1"}, {"text": "Obviously, HTML and other general-purpose markup languages were not designed for Im2Seq models. As such, they have some serious drawbacks. First, the token vocabulary needs to be artificially large in order to describe all plausible tabular structures. Since most Im2Seq models use an autoregressive approach, they generate the sequence token by token. Therefore, to reduce inference time, a shorter sequence length is critical. Every table-cell is represented by at least two tokens (
and
). Furthermore, when tokenizing the HTML structure, one needs to explicitly enumerate possible column-spans and row-spans as words. In practice, this ends up requiring 28 different HTML tokens (when including column- and row-spans up to 10 cells) just to describe every table in the PubTabNet dataset. Clearly, not every token is equally represented, as is depicted in Figure 2. This skewed distribution of tokens in combination with variable token row-length makes it challenging for models to learn the HTML structure.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.7060546875, 259.57940673828125, 480.62744140625, 424.87249755859375], "page": 5, "span": [0, 1021], "__ref_s3_data": null}]}, {"text": "Additionally, it would be desirable if the representation would easily allow an early detection of invalid sequences on-the-go, before the prediction of the entire table structure is completed. HTML is not well-suited for this purpose as the verification of incomplete sequences is non-trivial or even impossible.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.89939880371094, 210.46835327148438, 480.5928955078125, 257.10150146484375], "page": 5, "span": [0, 313], "__ref_s3_data": null}]}, {"text": "In a valid HTML table, the token sequence must describe a 2D grid of table cells, serialised in row-major ordering, where each row and each column have the same length (while considering row- and column-spans). Furthermore, every opening tag in HTML needs to be matched by a closing tag in a correct hierarchical manner. Since the number of tokens for each table row and column can vary significantly, especially for large tables with many row- and column-spans, it is complex to verify the consistency of predicted structures during sequence", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.75929260253906, 126.89654541015625, 480.5947265625, 208.89126586914062], "page": 5, "span": [0, 542], "__ref_s3_data": null}]}, {"text": "6", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [134.12826538085938, 690.1593017578125, 139.453125, 698.234130859375], "page": 6, "span": [0, 1], "__ref_s3_data": null}]}, {"text": "M. Lysak, et al.", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [167.2993927001953, 690.0819091796875, 231.72227478027344, 698.99951171875], "page": 6, "span": [0, 16], "__ref_s3_data": null}]}, {"text": "generation. Implicitly, this also means that Im2Seq models need to learn these complex syntax rules, simply to deliver valid output.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.94253540039062, 651.3041381835938, 480.59478759765625, 673.705078125], "page": 6, "span": [0, 132], "__ref_s3_data": null}]}, {"text": "In practice, we observe two major issues with prediction quality when training Im2Seq models on HTML table structure generation from images. On the one hand, we find that on large tables, the visual attention of the model often starts to drift and is not accurately moving forward cell by cell anymore. This manifests itself in either in an increasing location drift for proposed table-cells in later rows on the same column or even complete loss of vertical alignment, as illustrated in Figure 5. Addressing this with post-processing is partially possible, but clearly undesired. 
On the other hand, we find many instances of predictions with structural inconsistencies or plain invalid HTML output, as shown in Figure 6, which are nearly impossible to properly correct. Both problems seriously impact the TSR model performance, since they reflect not only in the task of pure structure recognition but also in the equally crucial recognition or matching of table cell content.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.64344787597656, 496.2580871582031, 480.595703125, 649.443603515625], "page": 6, "span": [0, 977], "__ref_s3_data": null}]}, {"text": "4 Optimised Table Structure Language", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.07444763183594, 460.4577331542969, 372.50848388671875, 472.3045959472656], "page": 6, "span": [0, 36], "__ref_s3_data": null}]}, {"text": "To mitigate the issues with HTML in Im2Seq-based TSR models laid out before, we propose here our Optimised Table Structure Language (OTSL). OTSL is designed to express table structure with a minimized vocabulary and a simple set of rules, which are both significantly reduced compared to HTML. At the same time, OTSL enables easy error detection and correction during sequence generation. We further demonstrate how the compact structure representation and minimized sequence length improves prediction accuracy and inference time in the TableFormer architecture.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.82858276367188, 350.400146484375, 480.5947265625, 443.65216064453125], "page": 6, "span": [0, 563], "__ref_s3_data": null}]}, {"text": "4.1 Language Definition", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.0214385986328, 316.9593811035156, 261.80108642578125, 326.9925231933594], "page": 6, "span": [0, 23], "__ref_s3_data": null}]}, {"text": "In Figure 3, we illustrate how the OTSL is defined. 
In essence, the OTSL defines only 5 tokens that directly describe a tabular structure based on an atomic 2D grid.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [134.03182983398438, 269.9826354980469, 480.5887145996094, 303.5955505371094], "page": 6, "span": [0, 165], "__ref_s3_data": null}]}, {"text": "The OTSL vocabulary is comprised of the following tokens:", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [149.35653686523438, 256.95648193359375, 409.3113708496094, 266.98114013671875], "page": 6, "span": [0, 57], "__ref_s3_data": null}]}, {"text": "-\"C\" cell a new table cell that either has or does not have cell content", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [139.9448699951172, 235.22317504882812, 460.54443359375, 245.30445861816406], "page": 6, "span": [0, 72], "__ref_s3_data": null}]}, {"text": "-\"L\" cell left-looking cell , merging with the left neighbor cell to create a span", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [139.9716796875, 210.11834716796875, 480.59393310546875, 232.8718719482422], "page": 6, "span": [0, 82], "__ref_s3_data": null}]}, {"text": "-\"U\" cell up-looking cell , merging with the upper neighbor cell to create a span", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [140.17970275878906, 184.99545288085938, 480.58856201171875, 207.94252014160156], "page": 6, "span": [0, 81], "__ref_s3_data": null}]}, {"text": "-\"X\" cell cross cell , to merge with both left and upper neighbor cells", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [139.92364501953125, 172.88253784179688, 454.5549621582031, 183.41383361816406], "page": 6, "span": [0, 71], "__ref_s3_data": null}]}, {"text": "-\"NL\" new-line , switch to the next row.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [139.87696838378906, 160.93917846679688, 328.61676025390625, 170.83633422851562], "page": 6, "span": [0, 40], "__ref_s3_data": null}]}, {"text": "A notable attribute of OTSL is that it has the capability of achieving lossless conversion to HTML.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [134.19346618652344, 127.14515686035156, 480.5928039550781, 148.89442443847656], "page": 6, "span": [0, 99], "__ref_s3_data": null}]}, {"text": "Optimized Table Tokenization for Table Structure Recognition", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [193.9747772216797, 689.7752685546875, 447.54290771484375, 698.8756103515625], "page": 7, "span": [0, 60], "__ref_s3_data": null}]}, {"text": "7", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [475.3976135253906, 690.1593017578125, 480.59124755859375, 698.609375], "page": 7, "span": [0, 1], "__ref_s3_data": null}]}, {"text": "Fig. 3. 
OTSL description of table structure: A - table example; B - graphical representation of table structure; C - mapping structure on a grid; D - OTSL structure encoding; E - explanation on cell encoding", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [133.8881378173828, 635.6204833984375, 480.58740234375, 667.1154174804688], "page": 7, "span": [0, 207], "__ref_s3_data": null}]}, {"name": "Picture", "type": "figure", "$ref": "#/figures/2"}, {"text": "4.2 Language Syntax", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.2874298095703, 477.7056579589844, 246.78787231445312, 487.5195007324219], "page": 7, "span": [0, 19], "__ref_s3_data": null}]}, {"text": "The OTSL representation follows these syntax rules:", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [134.23097229003906, 457.80255126953125, 363.7961730957031, 467.56781005859375], "page": 7, "span": [0, 51], "__ref_s3_data": null}]}, {"text": "1. Left-looking cell rule : The left neighbour of an \"L\" cell must be either another \"L\" cell or a \"C\" cell.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [138.97299194335938, 424.0662536621094, 480.5890197753906, 445.8700256347656], "page": 7, "span": [0, 108], "__ref_s3_data": null}]}, {"text": "2. Up-looking cell rule : The upper neighbour of a \"U\" cell must be either another \"U\" cell or a \"C\" cell.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [138.19281005859375, 400.15325927734375, 480.59228515625, 421.95819091796875], "page": 7, "span": [0, 106], "__ref_s3_data": null}]}, {"text": "3. Cross cell rule :", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [138.06527709960938, 388.19525146484375, 226.0736083984375, 397.4916687011719], "page": 7, "span": [0, 20], "__ref_s3_data": null}]}, {"text": ": The left neighbour of an \"X\" cell must be either another \"X\" cell or a \"U\" cell, and the upper neighbour of an \"X\" cell must be either another \"X\" cell or an \"L\" cell.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [146.40036010742188, 352.3262939453125, 480.5923767089844, 396.9922180175781], "page": 7, "span": [0, 169], "__ref_s3_data": null}]}, {"text": "4. First row rule : Only \"L\" cells and \"C\" cells are allowed in the first row.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [138.39491271972656, 339.79541015625, 474.5901794433594, 349.8867492675781], "page": 7, "span": [0, 78], "__ref_s3_data": null}]}, {"text": "5. First column rule : Only \"U\" cells and \"C\" cells are allowed in the first column.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [138.3254852294922, 316.4543151855469, 480.58746337890625, 338.0946960449219], "page": 7, "span": [0, 84], "__ref_s3_data": null}]}, {"text": "6. Rectangular rule : The table representation is always rectangular - all rows must have an equal number of tokens, terminated with \"NL\" token.", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [138.22427368164062, 292.2819519042969, 480.5945739746094, 314.491455078125], "page": 7, "span": [0, 144], "__ref_s3_data": null}]}, {"text": "The application of these rules gives OTSL a set of unique properties. First of all, the OTSL enforces a strictly rectangular structure representation, where every new-line token starts a new row. 
As a consequence, all rows and all columns have exactly the same number of tokens, irrespective of cell spans. Secondly, the OTSL representation is unambiguous: Every table structure is represented in one way. In this representation every table cell corresponds to a \"C\"-cell token, which in case of spans is always located in the top-left corner of the table cell definition. Third, OTSL syntax rules are only backward-looking. As a consequence, every predicted token can be validated straight during sequence generation by looking at the previously predicted sequence. As such, OTSL can guarantee that every predicted sequence is syntactically valid.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.6158447265625, 149.74966430664062, 480.5958251953125, 280.5412292480469], "page": 7, "span": [0, 848], "__ref_s3_data": null}]}, {"text": "These characteristics can be easily learned by sequence generator networks, as we demonstrate further below. We find strong indications that this pattern", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [134.04405212402344, 126.91014099121094, 480.5926513671875, 148.8981170654297], "page": 7, "span": [0, 153], "__ref_s3_data": null}]}, {"text": "8", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [134.1900634765625, 690.1593017578125, 139.46353149414062, 698.3311767578125], "page": 8, "span": [0, 1], "__ref_s3_data": null}]}, {"text": "M. Lysak, et al.", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [167.40870666503906, 690.0598754882812, 231.72227478027344, 699.074462890625], "page": 8, "span": [0, 16], "__ref_s3_data": null}]}, {"text": "reduces significantly the column drift seen in the HTML based models (see Figure 5).", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [134.2002410888672, 651.7838745117188, 480.5888366699219, 673.7068481445312], "page": 8, "span": [0, 84], "__ref_s3_data": null}]}, {"text": "4.3 Error-detection and -mitigation", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.25576782226562, 620.8721313476562, 319.3470764160156, 630.8031005859375], "page": 8, "span": [0, 35], "__ref_s3_data": null}]}, {"text": "The design of OTSL allows to validate a table structure easily on an unfinished sequence. The detection of an invalid sequence token is a clear indication of a prediction mistake, however a valid sequence by itself does not guarantee prediction correctness. Different heuristics can be used to correct token errors in an invalid sequence and thus increase the chances for accurate predictions. Such heuristics can be applied either after the prediction of each token, or at the end on the entire predicted sequence. 
For example a simple heuristic which can correct the predicted OTSL sequence on-the-fly is to verify if the token with the highest prediction confidence invalidates the predicted sequence, and replace it by the token with the next highest confidence until OTSL rules are satisfied.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.90631103515625, 492.9853515625, 480.59576416015625, 610.5565185546875], "page": 8, "span": [0, 797], "__ref_s3_data": null}]}, {"text": "5 Experiments", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.63143920898438, 459.85089111328125, 229.03533935546875, 471.56646728515625], "page": 8, "span": [0, 13], "__ref_s3_data": null}]}, {"text": "To evaluate the impact of OTSL on prediction accuracy and inference times, we conducted a series of experiments based on the TableFormer model (Figure 4) with two objectives: Firstly we evaluate the prediction quality and performance of OTSL vs. HTML after performing Hyper Parameter Optimization (HPO) on the canonical PubTabNet data set. Secondly we pick the best hyper-parameters found in the first step and evaluate how OTSL impacts the performance of TableFormer after training on other publicly available data sets (FinTabNet, PubTables-1M [14]). The ground truth (GT) from all data sets has been converted into OTSL format for this purpose, and will be made publicly available.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.63893127441406, 339.67877197265625, 480.6024475097656, 445.8916015625], "page": 8, "span": [0, 684], "__ref_s3_data": null}]}, {"text": "Fig. 4. Architecture sketch of the TableFormer model, which is a representative for the Im2Seq approach.", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [134.0367889404297, 287.69140625, 480.5908203125, 308.2715148925781], "page": 8, "span": [0, 104], "__ref_s3_data": null}]}, {"name": "Picture", "type": "figure", "$ref": "#/figures/3"}, {"text": "We rely on standard metrics such as Tree Edit Distance score (TEDs) for table structure prediction, and Mean Average Precision (mAP) with 0.75 Intersection Over Union (IOU) threshold for the bounding-box predictions of table cells. The predicted OTSL structures were converted back to HTML format in", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.83853149414062, 126.85651397705078, 480.59173583984375, 172.45193481445312], "page": 8, "span": [0, 299], "__ref_s3_data": null}]}, {"text": "Optimized Table Tokenization for Table Structure Recognition", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [193.94395446777344, 689.7586669921875, 447.54290771484375, 698.8834228515625], "page": 9, "span": [0, 60], "__ref_s3_data": null}]}, {"text": "9", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [474.9051818847656, 690.1593017578125, 480.59124755859375, 698.5001831054688], "page": 9, "span": [0, 1], "__ref_s3_data": null}]}, {"text": "order to compute the TED score. 
Inference timing results for all experiments were obtained from the same machine on a single core with AMD EPYC 7763 CPU @2.45 GHz.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.90585327148438, 640.3582153320312, 480.5957946777344, 673.7608642578125], "page": 9, "span": [0, 163], "__ref_s3_data": null}]}, {"text": "5.1 Hyper Parameter Optimization", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.28504943847656, 613.6966552734375, 318.44842529296875, 623.6006469726562], "page": 9, "span": [0, 32], "__ref_s3_data": null}]}, {"text": "We have chosen the PubTabNet data set to perform HPO, since it includes a highly diverse set of tables. Also we report TED scores separately for simple and complex tables (tables with cell spans). Results are presented in Table. 1. It is evident that with OTSL, our model achieves the same TED score and slightly better mAP scores in comparison to HTML. However OTSL yields a 2x speed up in the inference runtime over HTML.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.80441284179688, 537.6300659179688, 481.1519775390625, 607.1452026367188], "page": 9, "span": [0, 423], "__ref_s3_data": null}]}, {"text": "Table 1. HPO performed in OTSL and HTML representation on the same transformer-based TableFormer [9] architecture, trained only on PubTabNet [22]. Effects of reducing the # of layers in encoder and decoder stages of the model show that smaller models trained on OTSL perform better, especially in recognizing complex table structures, and maintain a much higher mAP score than the HTML counterpart.", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [133.88543701171875, 464.55596923828125, 480.59539794921875, 517.7815551757812], "page": 9, "span": [0, 398], "__ref_s3_data": null}]}, {"name": "Table", "type": "table", "$ref": "#/tables/0"}, {"text": "5.2 Quantitative Results", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.48985290527344, 274.2215881347656, 264.4033203125, 284.3811950683594], "page": 9, "span": [0, 24], "__ref_s3_data": null}]}, {"text": "We picked the model parameter configuration that produced the best prediction quality (enc=6, dec=6, heads=8) with PubTabNet alone, then independently trained and evaluated it on three publicly available data sets: PubTabNet (395k samples), FinTabNet (113k samples) and PubTables-1M (about 1M samples). Performance results are presented in Table. 2. It is clearly evident that the model trained on OTSL outperforms HTML across the board, keeping high TEDs and mAP scores even on difficult financial tables (FinTabNet) that contain sparse and large tables.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.97792053222656, 174.46827697753906, 480.59576416015625, 268.4878234863281], "page": 9, "span": [0, 555], "__ref_s3_data": null}]}, {"text": "Additionally, the results show that OTSL has an advantage over HTML when applied on a bigger data set like PubTables-1M and achieves significantly improved scores. 
Finally, OTSL achieves faster inference due to fewer decoding steps which is a result of the reduced sequence representation.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.90371704101562, 126.73831176757812, 480.6639099121094, 172.7313995361328], "page": 9, "span": [0, 289], "__ref_s3_data": null}]}, {"text": "10", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [134.6792755126953, 690.1593017578125, 144.2487335205078, 698.4376831054688], "page": 10, "span": [0, 2], "__ref_s3_data": null}]}, {"text": "M. Lysak, et al.", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [167.2496337890625, 690.1593017578125, 231.72048950195312, 699.0352783203125], "page": 10, "span": [0, 16], "__ref_s3_data": null}]}, {"text": "Table 2. TSR and cell detection results compared between OTSL and HTML on the PubTabNet [22], FinTabNet [21] and PubTables-1M [14] data sets using TableFormer [9] (with enc=6, dec=6, heads=8).", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [134.00595092773438, 645.5076904296875, 480.59356689453125, 677.1614379882812], "page": 10, "span": [0, 192], "__ref_s3_data": null}]}, {"name": "Table", "type": "table", "$ref": "#/tables/1"}, {"text": "5.3 Qualitative Results", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.25314331054688, 493.7161560058594, 257.19561767578125, 503.76678466796875], "page": 10, "span": [0, 23], "__ref_s3_data": null}]}, {"text": "To illustrate the qualitative differences between OTSL and HTML, Figure 5 demonstrates less overlap and more accurate bounding boxes with OTSL. In Figure 6, OTSL proves to be more effective in handling tables with longer token sequences, resulting in even more precise structure prediction and bounding boxes.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.7931365966797, 425.5223083496094, 480.6096496582031, 483.0732421875], "page": 10, "span": [0, 309], "__ref_s3_data": null}]}, {"text": "Fig. 5. The OTSL model produces more accurate bounding boxes with less overlap (E) than the HTML model (D), when predicting the structure of a sparse table (A), at twice the inference speed because of shorter sequence length (B),(C). \"PMC2807444_006_00.png\" PubTabNet. 
\u03bc", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [133.934326171875, 352.2828369140625, 480.591064453125, 395.2126770019531], "page": 10, "span": [0, 270], "__ref_s3_data": null}]}, {"name": "Picture", "type": "figure", "$ref": "#/figures/4"}, {"text": "\u03bc", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [227.91465759277344, 116.65360260009766, 230.10028076171875, 126.1739730834961], "page": 10, "span": [0, 1], "__ref_s3_data": null}]}, {"text": "\u2265", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [300.58056640625, 98.57134246826172, 302.72637939453125, 108.3780517578125], "page": 10, "span": [0, 1], "__ref_s3_data": null}]}, {"text": "Optimized Table Tokenization for Table Structure Recognition", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [194.172119140625, 689.804443359375, 447.54290771484375, 698.850830078125], "page": 11, "span": [0, 60], "__ref_s3_data": null}]}, {"text": "11", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [471.22021484375, 690.1593017578125, 480.5894775390625, 698.3983154296875], "page": 11, "span": [0, 2], "__ref_s3_data": null}]}, {"text": "Fig. 6. Visualization of predicted structure and detected bounding boxes on a complex table with many rows. The OTSL model (B) captured repeating pattern of horizontally merged cells from the GT (A), unlike the HTML model (C). The HTML model also didn't complete the HTML sequence correctly and displayed a lot more of drift and overlap of bounding boxes. \"PMC5406406_003_01.png\" PubTabNet.", "type": "caption", "name": "Caption", "font": null, "prov": [{"bbox": [134.00157165527344, 613.6331176757812, 480.82830810546875, 667.0059814453125], "page": 11, "span": [0, 390], "__ref_s3_data": null}]}, {"name": "Picture", "type": "figure", "$ref": "#/figures/5"}, {"text": "12 M. Lysak, et al.", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [134.69354248046875, 690.152099609375, 231.72048950195312, 698.9852905273438], "page": 12, "span": [0, 19], "__ref_s3_data": null}]}, {"text": "6 Conclusion", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.32138061523438, 663.8826293945312, 219.25479125976562, 675.0826416015625], "page": 12, "span": [0, 12], "__ref_s3_data": null}]}, {"text": "We demonstrated that representing tables in HTML for the task of table structure recognition with Im2Seq models is ill-suited and has serious limitations. Furthermore, we presented in this paper an Optimized Table Structure Language (OTSL) which, when compared to commonly used general purpose languages, has several key benefits.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [134.07997131347656, 588.5181884765625, 480.595703125, 645.8515014648438], "page": 12, "span": [0, 330], "__ref_s3_data": null}]}, {"text": "First and foremost, given the same network configuration, inference time for a table-structure prediction is about 2 times faster compared to the conventional HTML approach. This is primarily owed to the shorter sequence length of the OTSL representation. Additional performance benefits can be obtained with HPO (hyper parameter optimization). As we demonstrate in our experiments, models trained on OTSL can be significantly smaller, e.g. by reducing the number of encoder and decoder layers, while preserving comparatively good prediction quality. 
This can further improve inference performance, yielding 5-6 times faster inference speed in OTSL with prediction quality comparable to models trained on HTML (see Table 1).", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.63015747070312, 467.4183654785156, 480.6451416015625, 585.736328125], "page": 12, "span": [0, 724], "__ref_s3_data": null}]}, {"text": "Secondly, OTSL has more inherent structure and a significantly restricted vocabulary size. This allows autoregressive models to perform better in the TED metric, but especially with regards to prediction accuracy of the table-cell bounding boxes (see Table 2). As shown in Figure 5, we observe that the OTSL drastically reduces the drift for table cell bounding boxes at high row count and in sparse tables. This leads to more accurate predictions and a significant reduction in post-processing complexity, which is an undesired necessity in HTML-based Im2Seq models. Significant novelty lies in OTSL syntactical rules, which are few, simple and always backwards looking. Each new token can be validated only by analyzing the sequence of previous tokens, without requiring the entire sequence to detect mistakes. This in return allows to perform structural error detection and correction on-the-fly during sequence generation.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [133.8241424560547, 323.7073974609375, 480.5948181152344, 465.1226806640625], "page": 12, "span": [0, 926], "__ref_s3_data": null}]}, {"text": "References", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [134.31680297851562, 287.61077880859375, 197.68641662597656, 298.98321533203125], "page": 12, "span": [0, 10], "__ref_s3_data": null}]}, {"text": "1. Auer, C., Dolfi, M., Carvalho, A., Ramis, C.B., Staar, P.W.J.: Delivering document conversion as a cloud service with high throughput and responsiveness. CoRR abs/2206.00785 (2022). https://doi.org/10.48550/arXiv.2206.00785 , https://doi.org/10.48550/arXiv.2206.00785", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [139.37100219726562, 227.38706970214844, 480.5920104980469, 269.8235168457031], "page": 12, "span": [0, 270], "__ref_s3_data": null}]}, {"text": "2. Chen, B., Peng, D., Zhang, J., Ren, Y., Jin, L.: Complex table structure recognition in the wild using transformer and identity matrix-based augmentation. In: Porwal, U., Forn\u00e9s, A., Shafait, F. (eds.) Frontiers in Handwriting Recognition. pp. 545561. Springer International Publishing, Cham (2022)", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [138.86715698242188, 182.8286590576172, 480.6174011230469, 225.87879943847656], "page": 12, "span": [0, 301], "__ref_s3_data": null}]}, {"text": "3. Chi, Z., Huang, H., Xu, H.D., Yu, H., Yin, W., Mao, X.L.: Complicated table structure recognition. arXiv preprint arXiv:1908.04729 (2019)", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [138.72738647460938, 160.16236877441406, 480.5873107910156, 181.41339111328125], "page": 12, "span": [0, 140], "__ref_s3_data": null}]}, {"text": "4. Deng, Y., Rosenberg, D., Mann, G.: Challenges in end-to-end neural scientific table recognition. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 894-901. 
IEEE (2019)", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [138.9593963623047, 126.65552520751953, 480.5882568359375, 157.8516387939453], "page": 12, "span": [0, 204], "__ref_s3_data": null}]}, {"text": "Optimized Table Tokenization for Table Structure Recognition", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [194.0724639892578, 689.6328735351562, 447.54290771484375, 698.8519287109375], "page": 13, "span": [0, 60], "__ref_s3_data": null}]}, {"text": "13", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [471.1661376953125, 690.1593017578125, 480.5894775390625, 698.4201049804688], "page": 13, "span": [0, 2], "__ref_s3_data": null}]}, {"text": "5. Kayal, P., Anand, M., Desai, H., Singh, M.: Tables to latex: structure and content extraction from scientific tables. International Journal on Document Analysis and Recognition (IJDAR) pp. 1-10 (2022)", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [138.6960906982422, 641.0914306640625, 480.59478759765625, 672.9320068359375], "page": 13, "span": [0, 203], "__ref_s3_data": null}]}, {"text": "6. Lee, E., Kwon, J., Yang, H., Park, J., Lee, S., Koo, H.I., Cho, N.I.: Table structure recognition based on grid shape graph. In: 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). pp. 18681873. IEEE (2022)", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [138.54495239257812, 598.4913940429688, 480.7531433105469, 640.2967529296875], "page": 13, "span": [0, 264], "__ref_s3_data": null}]}, {"text": "7. Li, M., Cui, L., Huang, S., Wei, F., Zhou, M., Li, Z.: Tablebank: A benchmark dataset for table detection and recognition (2019)", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [139.07086181640625, 576.4161376953125, 480.5901184082031, 596.6123046875], "page": 13, "span": [0, 131], "__ref_s3_data": null}]}, {"text": "8. Livathinos, N., Berrospi, C., Lysak, M., Kuropiatnyk, V., Nassar, A., Carvalho, A., Dolfi, M., Auer, C., Dinkla, K., Staar, P.: Robust pdf document conversion using recurrent neural networks. Proceedings of the AAAI Conference on Artificial Intelligence 35 (17), 15137-15145 (May 2021), https://ojs.aaai.org/index.php/ AAAI/article/view/17777", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [138.5443878173828, 521.7116088867188, 480.8269348144531, 574.5029296875], "page": 13, "span": [0, 345], "__ref_s3_data": null}]}, {"text": "9. Nassar, A., Livathinos, N., Lysak, M., Staar, P.: Tableformer: Table structure understanding with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4614-4623 (June 2022)", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [138.21878051757812, 487.909423828125, 480.5938720703125, 519.8042602539062], "page": 13, "span": [0, 234], "__ref_s3_data": null}]}, {"text": "10. Pfitzmann, B., Auer, C., Dolfi, M., Nassar, A.S., Staar, P.W.J.: Doclaynet: A large human-annotated dataset for document-layout segmentation. In: Zhang, A., Rangwala, H. (eds.) KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 14 - 18, 2022. pp. 3743-3751. ACM (2022). 
https://doi.org/10.1145/3534678.3539043 , https:// doi.org/10.1145/3534678.3539043", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [134.7440185546875, 422.8146057128906, 480.6158447265625, 486.7056579589844], "page": 13, "span": [0, 413], "__ref_s3_data": null}]}, {"text": "11. Prasad, D., Gadpal, A., Kapadni, K., Visave, M., Sultanpure, K.: Cascadetabnet: An approach for end to end table detection and structure recognition from imagebased documents. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops. pp. 572-573 (2020)", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [134.48020935058594, 378.9383850097656, 480.59295654296875, 421.14239501953125], "page": 13, "span": [0, 295], "__ref_s3_data": null}]}, {"text": "12. Schreiber, S., Agne, S., Wolf, I., Dengel, A., Ahmed, S.: Deepdesrt: Deep learning for detection and structure recognition of tables in document images. In: 2017 14th IAPR international conference on document analysis and recognition (ICDAR). vol. 1, pp. 1162-1167. IEEE (2017)", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [134.6136016845703, 334.68109130859375, 480.6297302246094, 377.08355712890625], "page": 13, "span": [0, 281], "__ref_s3_data": null}]}, {"text": "13. Siddiqui, S.A., Fateh, I.A., Rizvi, S.T.R., Dengel, A., Ahmed, S.: Deeptabstr: Deep learning based table structure recognition. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 1403-1409 (2019). https:// doi.org/10.1109/ICDAR.2019.00226", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [134.72238159179688, 290.7889099121094, 480.75555419921875, 333.61895751953125], "page": 13, "span": [0, 275], "__ref_s3_data": null}]}, {"text": "14. Smock, B., Pesala, R., Abraham, R.: PubTables-1M: Towards comprehensive table extraction from unstructured documents. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4634-4642 (June 2022)", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [134.3740997314453, 247.3230743408203, 480.5928649902344, 289.9039306640625], "page": 13, "span": [0, 241], "__ref_s3_data": null}]}, {"text": "15. Staar, P.W.J., Dolfi, M., Auer, C., Bekas, C.: Corpus conversion service: A machine learning platform to ingest documents at scale. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. pp. 774-782. KDD '18, Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3219819.3219834 , https://doi.org/10. 1145/3219819.3219834", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [134.6051483154297, 181.90472412109375, 480.6208190917969, 245.70274353027344], "page": 13, "span": [0, 405], "__ref_s3_data": null}]}, {"text": "16. Wang, X.: Tabular Abstraction, Editing, and Formatting. Ph.D. thesis, CAN (1996), aAINN09397", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [134.76400756835938, 159.9412841796875, 480.5954284667969, 179.845703125], "page": 13, "span": [0, 96], "__ref_s3_data": null}]}, {"text": "17. Xue, W., Li, Q., Tao, D.: Res2tim: Reconstruct syntactic structures from table images. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 749-755. 
IEEE (2019)", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [134.76400756835938, 126.6559829711914, 480.5911865234375, 157.7118377685547], "page": 13, "span": [0, 195], "__ref_s3_data": null}]}, {"text": "14 M. Lysak, et al.", "type": "page-header", "name": "Page-header", "font": null, "prov": [{"bbox": [134.76499938964844, 690.1593017578125, 231.72048950195312, 699.0250244140625], "page": 14, "span": [0, 19], "__ref_s3_data": null}]}, {"text": "18. Xue, W., Yu, B., Wang, W., Tao, D., Li, Q.: Tgrnet: A table graph reconstruction network for table structure recognition. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 1295-1304 (2021)", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [134.63540649414062, 641.2738647460938, 480.59112548828125, 673.007568359375], "page": 14, "span": [0, 223], "__ref_s3_data": null}]}, {"text": "19. Ye, J., Qi, X., He, Y., Chen, Y., Gu, D., Gao, P., Xiao, R.: Pingan-vcgroup's solution for icdar 2021 competition on scientific literature parsing task b: Table recognition to html (2021). https://doi.org/10.48550/ARXIV.2105.01848 , https://arxiv.org/abs/2105.01848", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [134.76499938964844, 598.3690795898438, 480.9535217285156, 640.1014404296875], "page": 14, "span": [0, 269], "__ref_s3_data": null}]}, {"text": "20. Zhang, Z., Zhang, J., Du, J., Wang, F.: Split, embed and merge: An accurate table structure recognizer. Pattern Recognition 126 , 108565 (2022)", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [134.35293579101562, 576.3993530273438, 480.5935363769531, 596.5462036132812], "page": 14, "span": [0, 147], "__ref_s3_data": null}]}, {"text": "21. Zheng, X., Burdick, D., Popa, L., Zhong, X., Wang, N.X.R.: Global table extractor (gte): A framework for joint table identification and cell structure recognition using visual context. In: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 697-706 (2021). https://doi.org/10.1109/WACV48630.2021. 00074", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [134.2264862060547, 521.74560546875, 480.8044738769531, 574.3355712890625], "page": 14, "span": [0, 329], "__ref_s3_data": null}]}, {"text": "22. Zhong, X., ShafieiBavani, E., Jimeno Yepes, A.: Image-based table recognition: Data, model, and evaluation. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds.) Computer Vision - ECCV 2020. pp. 564-580. Springer International Publishing, Cham (2020)", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [133.99171447753906, 477.6664123535156, 480.5955810546875, 519.9246826171875], "page": 14, "span": [0, 259], "__ref_s3_data": null}]}, {"text": "23. Zhong, X., Tang, J., Yepes, A.J.: Publaynet: largest dataset ever for document layout analysis. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 1015-1022. IEEE (2019)", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [134.23336791992188, 444.7017822265625, 480.59454345703125, 475.69757080078125], "page": 14, "span": [0, 206], "__ref_s3_data": null}]}], "figures": [{"bounding-box": null, "prov": [{"bbox": [150.0213623046875, 366.15130615234375, 464.4815673828125, 583.114990234375], "page": 2, "span": [0, 574], "__ref_s3_data": null}], "text": "Fig. 1. 
Comparison between HTML and OTSL table structure representation: (A) table-example with complex row and column headers, including a 2D empty span, (B) minimal graphical representation of table structure using rectangular layout, (C) HTML representation, (D) OTSL representation. This example demonstrates many of the key-features of OTSL, namely its reduced vocabulary size (12 versus 5 in this case), its reduced sequence length (55 versus 30) and a enhanced internal structure (variable token sequence length per row in HTML versus a fixed length of rows in OTSL).", "type": "figure"}, {"bounding-box": null, "prov": [{"bbox": [137.5374755859375, 452.4152526855469, 476.1513366699219, 562.9699096679688], "page": 5, "span": [0, 73], "__ref_s3_data": null}], "text": "Fig. 2. Frequency of tokens in HTML and OTSL as they appear in PubTabNet.", "type": "figure"}, {"bounding-box": null, "prov": [{"bbox": [164.22023010253906, 511.6170959472656, 448.9761047363281, 628.123291015625], "page": 7, "span": [0, 207], "__ref_s3_data": null}], "text": "Fig. 3. OTSL description of table structure: A - table example; B - graphical representation of table structure; C - mapping structure on a grid; D - OTSL structure encoding; E - explanation on cell encoding", "type": "figure"}, {"bounding-box": null, "prov": [{"bbox": [141.4298095703125, 197.92733764648438, 472.34527587890625, 285.1344299316406], "page": 8, "span": [0, 104], "__ref_s3_data": null}], "text": "Fig. 4. Architecture sketch of the TableFormer model, which is a representative for the Im2Seq approach.", "type": "figure"}, {"bounding-box": null, "prov": [{"bbox": [162.900146484375, 128.48397827148438, 451.3374328613281, 348.21990966796875], "page": 10, "span": [0, 270], "__ref_s3_data": null}], "text": "Fig. 5. The OTSL model produces more accurate bounding boxes with less overlap (E) than the HTML model (D), when predicting the structure of a sparse table (A), at twice the inference speed because of shorter sequence length (B),(C). \"PMC2807444_006_00.png\" PubTabNet. \u03bc", "type": "figure"}, {"bounding-box": null, "prov": [{"bbox": [168.26930236816406, 157.55677795410156, 447.7568664550781, 609.8697509765625], "page": 11, "span": [0, 390], "__ref_s3_data": null}], "text": "Fig. 6. Visualization of predicted structure and detected bounding boxes on a complex table with many rows. The OTSL model (B) captured repeating pattern of horizontally merged cells from the GT (A), unlike the HTML model (C). The HTML model also didn't complete the HTML sequence correctly and displayed a lot more of drift and overlap of bounding boxes. \"PMC5406406_003_01.png\" PubTabNet.", "type": "figure"}], "tables": [{"bounding-box": null, "prov": [{"bbox": [139.82040405273438, 322.2669982910156, 474.80023193359375, 454.9158935546875], "page": 9, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 1. HPO performed in OTSL and HTML representation on the same transformer-based TableFormer [9] architecture, trained only on PubTabNet [22]. 
Effects of reducing the # of layers in encoder and decoder stages of the model show that smaller models trained on OTSL perform better, especially in recognizing complex table structures, and maintain a much higher mAP score than the HTML counterpart.", "type": "table", "#-cols": 8, "#-rows": 7, "data": [[{"bbox": [160.3699951171875, 442.1952819824219, 168.0479278564453, 450.2650451660156], "spans": [[0, 0]], "text": "#", "type": "col_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [207.9739990234375, 442.1952819824219, 215.6519317626953, 450.2650451660156], "spans": [[0, 1]], "text": "#", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [239.79800415039062, 436.7162780761719, 278.3176574707031, 444.7860412597656], "spans": [[0, 2], [1, 2]], "text": "Language", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 0, "row-header": false, "row-span": [0, 2]}, {"bbox": [324.6700134277344, 442.1952819824219, 348.2641906738281, 450.2650451660156], "spans": [[0, 3], [0, 4], [0, 5]], "text": "TEDs", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 6], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [324.6700134277344, 442.1952819824219, 348.2641906738281, 450.2650451660156], "spans": [[0, 3], [0, 4], [0, 5]], "text": "TEDs", "type": "col_header", "col": 4, "col-header": false, "col-span": [3, 6], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [324.6700134277344, 442.1952819824219, 348.2641906738281, 450.2650451660156], "spans": [[0, 3], [0, 4], [0, 5]], "text": "TEDs", "type": "col_header", "col": 5, "col-header": false, "col-span": [3, 6], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [396.27099609375, 442.1952819824219, 417.1268310546875, 450.2650451660156], "spans": [[0, 6]], "text": "mAP", "type": "col_header", "col": 6, "col-header": false, "col-span": [6, 7], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [430.77099609375, 442.1952819824219, 467.1423034667969, 450.2650451660156], "spans": [[0, 7]], "text": "Inference", "type": "col_header", "col": 7, "col-header": false, "col-span": [7, 8], "row": 0, "row-header": false, "row-span": [0, 1]}], [{"bbox": [144.5919952392578, 429.2442932128906, 183.82806396484375, 437.3140563964844], "spans": [[1, 0]], "text": "enc-layers", "type": "col_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [192.1949920654297, 429.2442932128906, 231.43106079101562, 437.3140563964844], "spans": [[1, 1]], "text": "dec-layers", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [239.79800415039062, 436.7162780761719, 278.3176574707031, 444.7860412597656], "spans": [[0, 2], [1, 2]], "text": "Language", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [0, 2]}, {"bbox": [286.6860046386719, 429.2442932128906, 312.3326110839844, 437.3140563964844], "spans": [[1, 3]], "text": "simple", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [320.7019958496094, 429.2442932128906, 353.7198791503906, 437.3140563964844], "spans": [[1, 4]], "text": "complex", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], 
"row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [369.3059997558594, 429.2442932128906, 379.03094482421875, 437.3140563964844], "spans": [[1, 5]], "text": "all", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 6], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [394.927001953125, 431.2362976074219, 418.4727783203125, 439.3060607910156], "spans": [[1, 6]], "text": "(0.75)", "type": "col_header", "col": 6, "col-header": false, "col-span": [6, 7], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [427.14801025390625, 431.2362976074219, 470.76055908203125, 439.3060607910156], "spans": [[1, 7]], "text": "time (secs)", "type": "col_header", "col": 7, "col-header": false, "col-span": [7, 8], "row": 1, "row-header": false, "row-span": [1, 2]}], [{"bbox": [161.906005859375, 410.4142761230469, 166.512939453125, 418.4840393066406], "spans": [[2, 0]], "text": "6", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [209.50900268554688, 410.4142761230469, 214.11593627929688, 418.4840393066406], "spans": [[2, 1]], "text": "6", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [245.17599487304688, 402.9422912597656, 272.9395446777344, 423.96405029296875], "spans": [[2, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [289.0169982910156, 402.9422912597656, 310.0037536621094, 423.96405029296875], "spans": [[2, 3]], "text": "0.965 0.969", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [326.7170104980469, 402.9422912597656, 347.7037658691406, 423.96405029296875], "spans": [[2, 4]], "text": "0.934 0.927", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [363.6759948730469, 402.9422912597656, 384.6627502441406, 423.96405029296875], "spans": [[2, 5]], "text": "0.955 0.955", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [396.20599365234375, 402.9422912597656, 417.1927490234375, 424.0268249511719], "spans": [[2, 6]], "text": "0.88 0.857", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [439.5270080566406, 402.9422912597656, 458.3842468261719, 424.0268249511719], "spans": [[2, 7]], "text": "2.73 5.39", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [161.906005859375, 384.11328125, 166.512939453125, 392.18304443359375], "spans": [[3, 0]], "text": "4", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [209.50900268554688, 384.11328125, 214.11593627929688, 392.18304443359375], "spans": [[3, 1]], "text": "4", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [245.17599487304688, 376.64129638671875, 272.9395446777344, 397.66204833984375], "spans": [[3, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [289.0169982910156, 376.64129638671875, 
310.0037536621094, 397.66204833984375], "spans": [[3, 3]], "text": "0.938 0.952", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [326.7170104980469, 376.64129638671875, 347.7037658691406, 397.66204833984375], "spans": [[3, 4]], "text": "0.904 0.909", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [363.6759948730469, 389.59228515625, 384.6627502441406, 397.66204833984375], "spans": [[3, 5]], "text": "0.927", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [394.6180114746094, 389.79852294921875, 418.77886962890625, 397.7248229980469], "spans": [[3, 6]], "text": "0.853", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [439.5270080566406, 389.79852294921875, 458.3842468261719, 397.7248229980469], "spans": [[3, 7]], "text": "1.97", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": [161.906005859375, 357.8122863769531, 166.512939453125, 365.8820495605469], "spans": [[4, 0]], "text": "2", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [209.50900268554688, 357.8122863769531, 214.11593627929688, 365.8820495605469], "spans": [[4, 1]], "text": "4", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [245.17599487304688, 350.3403015136719, 272.9395446777344, 371.3610534667969], "spans": [[4, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [289.0169982910156, 363.2912902832031, 310.0037536621094, 371.3610534667969], "spans": [[4, 3]], "text": "0.923", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [326.7170104980469, 350.3403015136719, 347.7037658691406, 371.3610534667969], "spans": [[4, 4]], "text": "0.897 0.901", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [362.0880126953125, 363.2912902832031, 386.2488708496094, 384.7738342285156], "spans": [[4, 5]], "text": "0.938 0.915", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [396.20599365234375, 376.64129638671875, 417.1927490234375, 384.7110595703125], "spans": [[4, 6]], "text": "0.843", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [440.7669982910156, 376.64129638671875, 457.1468200683594, 384.7110595703125], "spans": [[4, 7]], "text": "3.77", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": null, "spans": [[5, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": null, "spans": [[5, 1]], "text": "", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": null, "spans": [[5, 2]], "text": "", "type": "body", "col": 2, "col-header": false, 
"col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [289.0169982910156, 350.3403015136719, 310.0037536621094, 358.4100646972656], "spans": [[5, 3]], "text": "0.945", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": null, "spans": [[5, 4]], "text": "", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [362.0880126953125, 350.5465393066406, 386.2488708496094, 358.47283935546875], "spans": [[5, 5]], "text": "0.931", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [394.6180114746094, 350.3403015136719, 418.77886962890625, 371.423828125], "spans": [[5, 6]], "text": "0.859 0.834", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [439.5270080566406, 350.3403015136719, 458.3842468261719, 371.423828125], "spans": [[5, 7]], "text": "1.91 3.81", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [161.906005859375, 331.5102844238281, 166.512939453125, 339.5800476074219], "spans": [[6, 0]], "text": "4", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [209.50900268554688, 331.5102844238281, 214.11593627929688, 339.5800476074219], "spans": [[6, 1]], "text": "2", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [245.17599487304688, 324.0382995605469, 272.9395446777344, 345.06005859375], "spans": [[6, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [289.0169982910156, 324.0382995605469, 310.0037536621094, 345.06005859375], "spans": [[6, 3]], "text": "0.952 0.944", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [326.7170104980469, 324.0382995605469, 347.7037658691406, 345.06005859375], "spans": [[6, 4]], "text": "0.92 0.903", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [362.0880126953125, 324.0382995605469, 386.2488708496094, 345.1228332519531], "spans": [[6, 5]], "text": "0.942 0.931", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [394.6180114746094, 324.0382995605469, 418.77886962890625, 345.1228332519531], "spans": [[6, 6]], "text": "0.857 0.824", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [439.5270080566406, 324.0382995605469, 458.3842468261719, 345.1228332519531], "spans": [[6, 7]], "text": "1.22 2", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 6, "row-header": false, "row-span": [6, 7]}]], "model": null}, {"bounding-box": null, "prov": [{"bbox": [143.81715393066406, 528.7755126953125, 470.8412170410156, 635.86865234375], "page": 10, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 2. 
TSR and cell detection results compared between OTSL and HTML on the PubTabNet [22], FinTabNet [21] and PubTables-1M [14] data sets using TableFormer [9] (with enc=6, dec=6, heads=8).", "type": "table", "#-cols": 7, "#-rows": 8, "data": [[{"bbox": null, "spans": [[0, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [215.52499389648438, 617.3963012695312, 254.04464721679688, 625.4660034179688], "spans": [[0, 1], [1, 1]], "text": "Language", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 0, "row-header": false, "row-span": [0, 2]}, {"bbox": [300.3970031738281, 622.851318359375, 323.9911804199219, 630.9210205078125], "spans": [[0, 2], [0, 3], [0, 4]], "text": "TEDs", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [300.3970031738281, 622.851318359375, 323.9911804199219, 630.9210205078125], "spans": [[0, 2], [0, 3], [0, 4]], "text": "TEDs", "type": "col_header", "col": 3, "col-header": false, "col-span": [2, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [300.3970031738281, 622.851318359375, 323.9911804199219, 630.9210205078125], "spans": [[0, 2], [0, 3], [0, 4]], "text": "TEDs", "type": "col_header", "col": 4, "col-header": false, "col-span": [2, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [370.3450012207031, 617.371337890625, 414.7466125488281, 625.4410400390625], "spans": [[0, 5], [1, 5]], "text": "mAP(0.75)", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 6], "row": 0, "row-header": false, "row-span": [0, 2]}, {"bbox": [423.114013671875, 611.892333984375, 466.7265625, 630.9210205078125], "spans": [[0, 6], [1, 6]], "text": "Inference time (secs)", "type": "col_header", "col": 6, "col-header": false, "col-span": [6, 7], "row": 0, "row-header": false, "row-span": [0, 2]}], [{"bbox": null, "spans": [[1, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [215.52499389648438, 617.3963012695312, 254.04464721679688, 625.4660034179688], "spans": [[0, 1], [1, 1]], "text": "Language", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [0, 2]}, {"bbox": [262.4129943847656, 609.8992919921875, 288.0596008300781, 617.968994140625], "spans": [[1, 2]], "text": "simple", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [296.4289855957031, 609.8992919921875, 329.4468688964844, 617.968994140625], "spans": [[1, 3]], "text": "complex", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [345.0329895019531, 609.8992919921875, 354.7579345703125, 617.968994140625], "spans": [[1, 4]], "text": "all", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [370.3450012207031, 617.371337890625, 414.7466125488281, 625.4410400390625], "spans": [[0, 5], [1, 5]], "text": "mAP(0.75)", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 6], "row": 1, "row-header": false, "row-span": [0, 2]}, {"bbox": [423.114013671875, 611.892333984375, 466.7265625, 630.9210205078125], "spans": [[0, 6], [1, 6]], "text": "Inference time (secs)", "type": 
"col_header", "col": 6, "col-header": false, "col-span": [6, 7], "row": 1, "row-header": false, "row-span": [0, 2]}], [{"bbox": [154.53799438476562, 591.0703125, 201.2412872314453, 599.1400146484375], "spans": [[2, 0], [3, 0]], "text": "PubTabNet", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 4]}, {"bbox": [222.43699645996094, 596.54931640625, 247.13226318359375, 604.6190185546875], "spans": [[2, 1]], "text": "OTSL", "type": "row_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [264.7439880371094, 596.54931640625, 285.7307434082031, 604.6190185546875], "spans": [[2, 2]], "text": "0.965", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [302.4440002441406, 596.54931640625, 323.4307556152344, 604.6190185546875], "spans": [[2, 3]], "text": "0.934", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [339.40301513671875, 596.54931640625, 360.3897705078125, 604.6190185546875], "spans": [[2, 4]], "text": "0.955", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [383.1159973144531, 596.7554931640625, 401.9732360839844, 604.6818237304688], "spans": [[2, 5]], "text": "0.88", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [435.4930114746094, 596.7554931640625, 454.3502502441406, 604.6818237304688], "spans": [[2, 6]], "text": "2.73", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [154.53799438476562, 591.0703125, 201.2412872314453, 599.1400146484375], "spans": [[2, 0], [3, 0]], "text": "PubTabNet", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [2, 4]}, {"bbox": [220.9029998779297, 583.5983276367188, 248.66656494140625, 591.6680297851562], "spans": [[3, 1]], "text": "HTML", "type": "row_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [264.7439880371094, 583.5983276367188, 285.7307434082031, 591.6680297851562], "spans": [[3, 2]], "text": "0.969", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [302.4440002441406, 583.5983276367188, 323.4307556152344, 591.6680297851562], "spans": [[3, 3]], "text": "0.927", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [339.40301513671875, 583.5983276367188, 360.3897705078125, 591.6680297851562], "spans": [[3, 4]], "text": "0.955", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [382.052001953125, 583.5983276367188, 403.03875732421875, 591.6680297851562], "spans": [[3, 5]], "text": "0.857", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [436.73199462890625, 583.5983276367188, 453.11181640625, 591.6680297851562], "spans": [[3, 6]], "text": "5.39", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": 
[155.94500732421875, 564.768310546875, 199.833740234375, 572.8380126953125], "spans": [[4, 0], [5, 0]], "text": "FinTabNet", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 6]}, {"bbox": [222.43699645996094, 570.248291015625, 247.13226318359375, 578.3179931640625], "spans": [[4, 1]], "text": "OTSL", "type": "row_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [264.7439880371094, 570.248291015625, 285.7307434082031, 578.3179931640625], "spans": [[4, 2]], "text": "0.955", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [302.4440002441406, 570.248291015625, 323.4307556152344, 578.3179931640625], "spans": [[4, 3]], "text": "0.961", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [337.81500244140625, 570.4544677734375, 361.9758605957031, 578.3807983398438], "spans": [[4, 4]], "text": "0.959", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [380.4639892578125, 570.4544677734375, 404.6248474121094, 578.3807983398438], "spans": [[4, 5]], "text": "0.862", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [435.4930114746094, 570.4544677734375, 454.3502502441406, 578.3807983398438], "spans": [[4, 6]], "text": "1.85", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": [155.94500732421875, 564.768310546875, 199.833740234375, 572.8380126953125], "spans": [[4, 0], [5, 0]], "text": "FinTabNet", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [4, 6]}, {"bbox": [220.9029998779297, 557.2963256835938, 248.66656494140625, 565.3660278320312], "spans": [[5, 1]], "text": "HTML", "type": "row_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [264.7439880371094, 557.2963256835938, 285.7307434082031, 565.3660278320312], "spans": [[5, 2]], "text": "0.917", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [302.4440002441406, 557.2963256835938, 323.4307556152344, 565.3660278320312], "spans": [[5, 3]], "text": "0.922", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [341.70599365234375, 557.2963256835938, 358.0858154296875, 565.3660278320312], "spans": [[5, 4]], "text": "0.92", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [382.052001953125, 557.2963256835938, 403.03875732421875, 565.3660278320312], "spans": [[5, 5]], "text": "0.722", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [436.73199462890625, 557.2963256835938, 453.11181640625, 565.3660278320312], "spans": [[5, 6]], "text": "3.26", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [148.62600708007812, 538.4673461914062, 207.15240478515625, 546.5370483398438], "spans": [[6, 0], [7, 0]], "text": 
"PubTables-1M", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 8]}, {"bbox": [222.43699645996094, 543.9473266601562, 247.13226318359375, 552.0170288085938], "spans": [[6, 1]], "text": "OTSL", "type": "row_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [264.7439880371094, 543.9473266601562, 285.7307434082031, 552.0170288085938], "spans": [[6, 2]], "text": "0.987", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [302.4440002441406, 543.9473266601562, 323.4307556152344, 552.0170288085938], "spans": [[6, 3]], "text": "0.964", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [337.81500244140625, 544.1535034179688, 361.9758605957031, 552.079833984375], "spans": [[6, 4]], "text": "0.977", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [380.4639892578125, 544.1535034179688, 404.6248474121094, 552.079833984375], "spans": [[6, 5]], "text": "0.896", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [435.4930114746094, 544.1535034179688, 454.3502502441406, 552.079833984375], "spans": [[6, 6]], "text": "1.79", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 6, "row-header": false, "row-span": [6, 7]}], [{"bbox": [148.62600708007812, 538.4673461914062, 207.15240478515625, 546.5370483398438], "spans": [[6, 0], [7, 0]], "text": "PubTables-1M", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 7, "row-header": false, "row-span": [6, 8]}, {"bbox": [220.9029998779297, 530.9953002929688, 248.66656494140625, 539.0650024414062], "spans": [[7, 1]], "text": "HTML", "type": "row_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [264.7439880371094, 530.9953002929688, 285.7307434082031, 539.0650024414062], "spans": [[7, 2]], "text": "0.983", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [302.4440002441406, 530.9953002929688, 323.4307556152344, 539.0650024414062], "spans": [[7, 3]], "text": "0.944", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [339.40301513671875, 530.9953002929688, 360.3897705078125, 539.0650024414062], "spans": [[7, 4]], "text": "0.966", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [382.052001953125, 530.9953002929688, 403.03875732421875, 539.0650024414062], "spans": [[7, 5]], "text": "0.889", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [436.73199462890625, 530.9953002929688, 453.11181640625, 539.0650024414062], "spans": [[7, 6]], "text": "3.26", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 7, "row-header": false, "row-span": [7, 8]}]], "model": null}], "bitmaps": null, "equations": [], "footnotes": [], "page-dimensions": [{"height": 792.0, "page": 1, "width": 612.0}, {"height": 792.0, "page": 2, "width": 612.0}, {"height": 792.0, "page": 3, "width": 612.0}, 
{"height": 792.0, "page": 4, "width": 612.0}, {"height": 792.0, "page": 5, "width": 612.0}, {"height": 792.0, "page": 6, "width": 612.0}, {"height": 792.0, "page": 7, "width": 612.0}, {"height": 792.0, "page": 8, "width": 612.0}, {"height": 792.0, "page": 9, "width": 612.0}, {"height": 792.0, "page": 10, "width": 612.0}, {"height": 792.0, "page": 11, "width": 612.0}, {"height": 792.0, "page": 12, "width": 612.0}, {"height": 792.0, "page": 13, "width": 612.0}, {"height": 792.0, "page": 14, "width": 612.0}], "page-footers": [], "page-headers": [], "_s3_data": null, "identifiers": null}
\ No newline at end of file
+{"_name": "", "type": "pdf-document", "description": {"title": null, "abstract": null, "authors": null, "affiliations": null, "subjects": null, "keywords": null, "publication_date": null, "languages": null, "license": null, "publishers": null, "url_refs": null, "references": null, "publication": null, "reference_count": null, "citation_count": null, "citation_date": null, "advanced": null, "analytics": null, "logs": [], "collection": null, "acquisition": null}, "file-info": {"filename": "2305.03393v1.pdf", "filename-prov": null, "document-hash": "62f2a2163d768d5b125a207967797aefa6c9cc113de8bb5c725c582595dd0c1d", "#-pages": 14, "collection-name": null, "description": null, "page-hashes": [{"hash": "7d7ef24bf2a048bcc229d37583b737ee85f67a02864236764abcaca9eabc8b68", "model": "default", "page": 1}, {"hash": "45bd6ad4d3e145029fa89fbf741a81d8885eb87ef03d6744221c61e66358451b", "model": "default", "page": 2}, {"hash": "69656f07bd8fb7afc53ab6f3d0e9153a337b550522493bf18d702c8406a9c545", "model": "default", "page": 3}, {"hash": "5afca9340c5bda646a75b8c2a1bde1b8f7b89e08a64a3cc4732fd11c1c6ead48", "model": "default", "page": 4}, {"hash": "d3b9daa8fd5d091fb5ef9bce44f085dd282a137e215574fec9556904b25cea8a", "model": "default", "page": 5}, {"hash": "eaaaaebf96b567c9bd5696b2dd4d747b3b3ad40e15ca8dc8968c56060315f228", "model": "default", "page": 6}, {"hash": "d786b8d564d7a7c122f2cf573f0cc1f11ea0a559d93f19cf020c11360bce00b4", "model": "default", "page": 7}, {"hash": "839d5ba3f9d079e8b42470002e4d7cb9ac60681cd9e2f2e3bf41afa6884a170e", "model": "default", "page": 8}, {"hash": "d50e5f3b8b4d1d5b04d5b253b187da6f40784bee5bf36b7eaefcabbc89e7b7a9", "model": "default", "page": 9}, {"hash": "a1509c4093fe25dbcb07c87f394506182323289a17dd189679c0b6d8238c5aae", "model": "default", "page": 10}, {"hash": "ac5ff01e648170bbe641d6fd95dc4f952a8e0bf62308f109b7c49678cef97005", "model": "default", "page": 11}, {"hash": "6a9aa589dc4faead43b032ec733af0c4a6fedfa834aa56b1bfefc7458ea949cc", "model": "default", "page": 12}, {"hash": "467ed0563b555b6fd2a0bd2e4a7bf596c066b8f08d2e1fd33f6c6d8b1c445759", "model": "default", "page": 13}, {"hash": "435efd2ece1dfed60a8dcc1f7fd72dde2cb58c59f5aebc4d5ae2227510195b42", "model": "default", "page": 14}]}, "main-text": [{"prov": [{"bbox": [16.329214096069336, 236.99996948242188, 36.6031608581543, 582.52001953125], "page": 1, "span": [0, 37], "__ref_s3_data": null}], "text": "arXiv:2305.03393v1 [cs.CV] 5 May 2023", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [134.61328125, 644.6187133789062, 480.59735107421875, 676.8052978515625], "page": 1, "span": [0, 60], "__ref_s3_data": null}], "text": "Optimized Table Tokenization for Table Structure Recognition", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [138.6561737060547, 587.6192626953125, 476.05718994140625, 623.0816650390625], "page": 1, "span": [0, 238], "__ref_s3_data": null}], "text": "Maksym Lysak [0000 - 0002 - 3723 - $^{6960]}$, Ahmed Nassar[0000 - 0002 - 9468 - $^{0822]}$, Nikolaos Livathinos [0000 - 0001 - 8513 - $^{3491]}$, Christoph Auer[0000 - 0001 - 5761 - $^{0422]}$, and Peter Staar [0000 - 0002 - 8088 - 0823]", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [222.96609497070312, 555.623046875, 392.69110107421875, 575.94482421875], "page": 1, "span": [0, 49], "__ref_s3_data": null}], "text": "IBM Research {mly,ahn,nli,cau,taa}@zurich.ibm.com", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [162.13674926757812, 
327.2655334472656, 452.4198913574219, 522.533447265625], "page": 1, "span": [0, 1198], "__ref_s3_data": null}], "text": "Abstract. Extracting tables from documents is a crucial task in any document conversion pipeline. Recently, transformer-based models have demonstrated that table-structure can be recognized with impressive accuracy using Image-to-Markup-Sequence (Im2Seq) approaches. Taking only the image of a table, such models predict a sequence of tokens (e.g. in HTML, LaTeX) which represent the structure of the table. Since the token representation of the table structure has a significant impact on the accuracy and run-time performance of any Im2Seq model, we investigate in this paper how table-structure representation can be optimised. We propose a new, optimised table-structure language (OTSL) with a minimized vocabulary and specific rules. The benefits of OTSL are that it reduces the number of tokens to 5 (HTML needs 28+) and shortens the sequence length to half of HTML on average. Consequently, model accuracy improves significantly, inference time is halved compared to HTML-based models, and the predicted table structures are always syntactically correct. This in turn eliminates most post-processing needs. Popular table structure data-sets will be published in OTSL format to the community.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [162.6794891357422, 293.8035888671875, 452.2415771484375, 314.24090576171875], "page": 1, "span": [0, 90], "__ref_s3_data": null}], "text": "Keywords: Table Structure Recognition \u00b7 Data Representation \u00b7 Transformers \u00b7 Optimization.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.76512145996094, 259.3119201660156, 228.933837890625, 270.5150451660156], "page": 1, "span": [0, 14], "__ref_s3_data": null}], "text": "1 Introduction", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [134.01023864746094, 163.12771606445312, 480.595947265625, 244.2879638671875], "page": 1, "span": [0, 500], "__ref_s3_data": null}], "text": "Tables are ubiquitous in documents such as scientific papers, patents, reports, manuals, specification sheets or marketing material. They often encode highly valuable information and therefore need to be extracted with high accuracy. Unfortunately, tables appear in documents in various sizes, styling and structure, making it difficult to recover their correct structure with simple analytical methods. Therefore, accurate table extraction is achieved these days with machine-learning based methods.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.044189453125, 126.84117889404297, 480.74835205078125, 160.30677795410156], "page": 1, "span": [0, 235], "__ref_s3_data": null}], "text": "In modern document understanding systems [1,15], table extraction is typically a two-step process. Firstly, every table on a page is located with a bounding box, and secondly, their logical row and column structure is recognized. As of", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.28973388671875, 690.1593017578125, 139.494384765625, 698.4556884765625], "page": 2, "span": [0, 1], "__ref_s3_data": null}], "text": "2", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [167.312744140625, 689.8800048828125, 231.72227478027344, 699.0272827148438], "page": 2, "span": [0, 16], "__ref_s3_data": null}], "text": "M. 
Lysak, et al.", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [133.99227905273438, 591.5379028320312, 480.7561950683594, 666.4251098632812], "page": 2, "span": [0, 574], "__ref_s3_data": null}], "text": "Fig. 1. Comparison between HTML and OTSL table structure representation: (A) table-example with complex row and column headers, including a 2D empty span, (B) minimal graphical representation of table structure using rectangular layout, (C) HTML representation, (D) OTSL representation. This example demonstrates many of the key-features of OTSL, namely its reduced vocabulary size (12 versus 5 in this case), its reduced sequence length (55 versus 30) and a enhanced internal structure (variable token sequence length per row in HTML versus a fixed length of rows in OTSL).", "type": "paragraph", "name": "Text", "font": null}, {"name": "Picture", "type": "figure", "$ref": "#/figures/0"}, {"prov": [{"bbox": [133.9597930908203, 270.46295166015625, 480.5923156738281, 340.515380859375], "page": 2, "span": [0, 435], "__ref_s3_data": null}], "text": "today, table detection in documents is a well understood problem, and the latest state-of-the-art (SOTA) object detection methods provide an accuracy comparable to human observers [7,8,10,14,23]. On the other hand, the problem of table structure recognition (TSR) is a lot more challenging and remains a very active area of research, in which many novel machine learning algorithms are being explored [3,4,5,9,11,12,13,14,17,18,21,22].", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [133.86209106445312, 126.80567932128906, 480.5948181152344, 268.64990234375], "page": 2, "span": [0, 911], "__ref_s3_data": null}], "text": "Recently emerging SOTA methods for table structure recognition employ transformer-based models, in which an image of the table is provided to the network in order to predict the structure of the table as a sequence of tokens. These image-to-sequence (Im2Seq) models are extremely powerful, since they allow for a purely data-driven solution. The tokens of the sequence typically belong to a markup language such as HTML, Latex or Markdown, which allow to describe table structure as rows, columns and spanning cells in various configurations. In Figure 1, we illustrate how HTML is used to represent the table-structure of a particular example table. Public table-structure data sets such as PubTabNet [22], and FinTabNet [21], which were created in a semi-automated way from paired PDF and HTML sources (e.g. PubMed Central), popularized primarily the use of HTML as ground-truth representation format for TSR.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [194.0343780517578, 689.6653442382812, 447.54290771484375, 698.948486328125], "page": 3, "span": [0, 60], "__ref_s3_data": null}], "text": "Optimized Table Tokenization for Table Structure Recognition", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [474.95513916015625, 690.1593017578125, 480.59124755859375, 698.3677978515625], "page": 3, "span": [0, 1], "__ref_s3_data": null}], "text": "3", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [133.981201171875, 579.9556884765625, 480.7418212890625, 673.815185546875], "page": 3, "span": [0, 584], "__ref_s3_data": null}], "text": "While the majority of research in TSR is currently focused on the development and application of novel neural model architectures, the table structure representation language (e.g. 
HTML in PubTabNet and FinTabNet) is usually adopted as is for the sequence tokenization in Im2Seq models. In this paper, we aim for the opposite and investigate the impact of the table structure representation language with an otherwise unmodified Im2Seq transformer-based architecture. Since the current state-of-the-art Im2Seq model is TableFormer [9], we select this model to perform our experiments.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [133.7724151611328, 460.7701416015625, 480.87481689453125, 577.6600341796875], "page": 3, "span": [0, 721], "__ref_s3_data": null}], "text": "The main contribution of this paper is the introduction of a new optimised table structure language (OTSL), specifically designed to describe table-structure in an compact and structured way for Im2Seq models. OTSL has a number of key features, which make it very attractive to use in Im2Seq models. Specifically, compared to other languages such as HTML, OTSL has a minimized vocabulary which yields short sequence length, strong inherent structure (e.g. strict rectangular layout) and a strict syntax with rules that only look backwards. The latter allows for syntax validation during inference and ensures a syntactically correct table-structure. These OTSL features are illustrated in Figure 1, in comparison to HTML.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [133.7509765625, 352.1451110839844, 480.6080017089844, 458.64886474609375], "page": 3, "span": [0, 626], "__ref_s3_data": null}], "text": "The paper is structured as follows. In section 2, we give an overview of the latest developments in table-structure reconstruction. In section 3 we review the current HTML table encoding (popularised by PubTabNet and FinTabNet) and discuss its flaws. Subsequently, we introduce OTSL in section 4, which includes the language definition, syntax rules and error-correction procedures. In section 5, we apply OTSL on the TableFormer architecture, compare it to TableFormer models trained on HTML and ultimately demonstrate the advantages of using OTSL. Finally, in section 6 we conclude our work and outline next potential steps.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.4993896484375, 319.3436584472656, 236.76913452148438, 330.5750732421875], "page": 3, "span": [0, 14], "__ref_s3_data": null}], "text": "2 Related Work", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [133.65347290039062, 126.65711212158203, 484.1204833984375, 304.6298522949219], "page": 3, "span": [0, 1161], "__ref_s3_data": null}], "text": "Approaches to formalize the logical structure and layout of tables in electronic documents date back more than two decades [16]. In the recent past, a wide variety of computer vision methods have been explored to tackle the problem of table structure recognition, i.e. the correct identification of columns, rows and spanning cells in a given table. Broadly speaking, the current deeplearning based approaches fall into three categories: object detection (OD) methods, Graph-Neural-Network (GNN) methods and Image-to-Markup-Sequence (Im2Seq) methods. Object-detection based methods [11,12,13,14,21] rely on tablestructure annotation using (overlapping) bounding boxes for training, and produce bounding-box predictions to define table cells, rows, and columns on a table image. Graph Neural Network (GNN) based methods [3,6,17,18], as the name suggests, represent tables as graph structures. 
The graph nodes represent the content of each table cell, an embedding vector from the table image, or geometric coordinates of the table cell. The edges of the graph define the relationship between the nodes, e.g. if they belong to the same column, row, or table cell.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.52096557617188, 690.1593017578125, 231.72227478027344, 699.0346069335938], "page": 4, "span": [0, 18], "__ref_s3_data": null}], "text": "4 M. Lysak, et al.", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [133.7613983154297, 532.5480346679688, 480.6270446777344, 674.1491088867188], "page": 4, "span": [0, 939], "__ref_s3_data": null}], "text": "Other work [20] aims at predicting a grid for each table and deciding which cells must be merged using an attention network. Im2Seq methods cast the problem as a sequence generation task [4,5,9,22], and therefore need an internal tablestructure representation language, which is often implemented with standard markup languages (e.g. HTML, LaTeX, Markdown). In theory, Im2Seq methods have a natural advantage over the OD and GNN methods by virtue of directly predicting the table-structure. As such, no post-processing or rules are needed in order to obtain the table-structure, which is necessary with OD and GNN approaches. In practice, this is not entirely true, because a predicted sequence of table-structure markup does not necessarily have to be syntactically correct. Hence, depending on the quality of the predicted sequence, some post-processing needs to be performed to ensure a syntactically valid (let alone correct) sequence.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [133.5825958251953, 305.3533020019531, 480.7930908203125, 530.6050415039062], "page": 4, "span": [0, 1404], "__ref_s3_data": null}], "text": "Within the Im2Seq method, we find several popular models, namely the encoder-dual-decoder model (EDD) [22], TableFormer [9], Tabsplitter[2] and Ye et. al. [19]. EDD uses two consecutive long short-term memory (LSTM) decoders to predict a table in HTML representation. The tag decoder predicts a sequence of HTML tags. For each decoded table cell (
<td>
), the attention is passed to the cell decoder to predict the content with an embedded OCR approach. The latter makes it susceptible to transcription errors in the cell content of the table. TableFormer address this reliance on OCR and uses two transformer decoders for HTML structure and cell bounding box prediction in an end-to-end architecture. The predicted cell bounding box is then used to extract text tokens from an originating (digital) PDF page, circumventing any need for OCR. TabSplitter [2] proposes a compact double-matrix representation of table rows and columns to do error detection and error correction of HTML structure sequences based on predictions from [19]. This compact double-matrix representation can not be used directly by the Img2seq model training, so the model uses HTML as an intermediate form. Chi et. al. [4] introduce a data set and a baseline method using bidirectional LSTMs to predict LaTeX code. Kayal [5] introduces Gated ResNet transformers to predict LaTeX code, and a separate OCR module to extract content.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [133.88829040527344, 209.4513397216797, 480.5937805175781, 303.2884216308594], "page": 4, "span": [0, 572], "__ref_s3_data": null}], "text": "Im2Seq approaches have shown to be well-suited for the TSR task and allow a full end-to-end network design that can output the final table structure without pre- or post-processing logic. Furthermore, Im2Seq models have demonstrated to deliver state-of-the-art prediction accuracy [9]. This motivated the authors to investigate if the performance (both in accuracy and inference time) can be further improved by optimising the table structure representation language. We believe this is a necessary step before further improving neural network architectures for this task.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.42018127441406, 175.88177490234375, 269.6244201660156, 186.8051300048828], "page": 4, "span": [0, 19], "__ref_s3_data": null}], "text": "3 Problem Statement", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [133.80313110351562, 126.69752502441406, 480.59368896484375, 160.46705627441406], "page": 4, "span": [0, 233], "__ref_s3_data": null}], "text": "All known Im2Seq based models for TSR fundamentally work in similar ways. Given an image of a table, the Im2Seq model predicts the structure of the table by generating a sequence of tokens. These tokens originate from a finite vocab-", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [194.02210998535156, 689.8338623046875, 447.54290771484375, 698.9061889648438], "page": 5, "span": [0, 60], "__ref_s3_data": null}], "text": "Optimized Table Tokenization for Table Structure Recognition", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [475.1318664550781, 690.1593017578125, 480.59124755859375, 698.4717407226562], "page": 5, "span": [0, 1], "__ref_s3_data": null}], "text": "5", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [133.90025329589844, 604.4931640625, 480.7872619628906, 673.93798828125], "page": 5, "span": [0, 422], "__ref_s3_data": null}], "text": "ulary and can be interpreted as a table structure. For example, with the HTML tokens
<table>, </table>, <tr>, </tr>, <td> and </td>
, one can construct simple table structures without any spanning cells. In reality though, one needs at least 28 HTML tokens to describe the most common complex tables observed in real-world documents [21,22], due to a variety of spanning cells definitions in the HTML token vocabulary.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [145.19676208496094, 562.5794677734375, 469.7522277832031, 571.8128051757812], "page": 5, "span": [0, 73], "__ref_s3_data": null}], "text": "Fig. 2. Frequency of tokens in HTML and OTSL as they appear in PubTabNet.", "type": "caption", "name": "Caption", "font": null}, {"name": "Picture", "type": "figure", "$ref": "#/figures/1"}, {"prov": [{"bbox": [133.7060546875, 259.57940673828125, 480.62744140625, 424.87249755859375], "page": 5, "span": [0, 1021], "__ref_s3_data": null}], "text": "Obviously, HTML and other general-purpose markup languages were not designed for Im2Seq models. As such, they have some serious drawbacks. First, the token vocabulary needs to be artificially large in order to describe all plausible tabular structures. Since most Im2Seq models use an autoregressive approach, they generate the sequence token by token. Therefore, to reduce inference time, a shorter sequence length is critical. Every table-cell is represented by at least two tokens (
<td> and </td>
). Furthermore, when tokenizing the HTML structure, one needs to explicitly enumerate possible column-spans and row-spans as words. In practice, this ends up requiring 28 different HTML tokens (when including column- and row-spans up to 10 cells) just to describe every table in the PubTabNet dataset. Clearly, not every token is equally represented, as is depicted in Figure 2. This skewed distribution of tokens in combination with variable token row-length makes it challenging for models to learn the HTML structure.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [133.89939880371094, 210.46835327148438, 480.5928955078125, 257.10150146484375], "page": 5, "span": [0, 313], "__ref_s3_data": null}], "text": "Additionally, it would be desirable if the representation would easily allow an early detection of invalid sequences on-the-go, before the prediction of the entire table structure is completed. HTML is not well-suited for this purpose as the verification of incomplete sequences is non-trivial or even impossible.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [133.75929260253906, 126.89654541015625, 480.5947265625, 208.89126586914062], "page": 5, "span": [0, 542], "__ref_s3_data": null}], "text": "In a valid HTML table, the token sequence must describe a 2D grid of table cells, serialised in row-major ordering, where each row and each column have the same length (while considering row- and column-spans). Furthermore, every opening tag in HTML needs to be matched by a closing tag in a correct hierarchical manner. Since the number of tokens for each table row and column can vary significantly, especially for large tables with many row- and column-spans, it is complex to verify the consistency of predicted structures during sequence", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.12826538085938, 690.1593017578125, 139.453125, 698.234130859375], "page": 6, "span": [0, 1], "__ref_s3_data": null}], "text": "6", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [167.2993927001953, 690.0819091796875, 231.72227478027344, 698.99951171875], "page": 6, "span": [0, 16], "__ref_s3_data": null}], "text": "M. Lysak, et al.", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [133.94253540039062, 651.3041381835938, 480.59478759765625, 673.705078125], "page": 6, "span": [0, 132], "__ref_s3_data": null}], "text": "generation. Implicitly, this also means that Im2Seq models need to learn these complex syntax rules, simply to deliver valid output.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [133.64344787597656, 496.2580871582031, 480.595703125, 649.443603515625], "page": 6, "span": [0, 977], "__ref_s3_data": null}], "text": "In practice, we observe two major issues with prediction quality when training Im2Seq models on HTML table structure generation from images. On the one hand, we find that on large tables, the visual attention of the model often starts to drift and is not accurately moving forward cell by cell anymore. This manifests itself in either in an increasing location drift for proposed table-cells in later rows on the same column or even complete loss of vertical alignment, as illustrated in Figure 5. Addressing this with post-processing is partially possible, but clearly undesired. 
On the other hand, we find many instances of predictions with structural inconsistencies or plain invalid HTML output, as shown in Figure 6, which are nearly impossible to properly correct. Both problems seriously impact the TSR model performance, since they reflect not only in the task of pure structure recognition but also in the equally crucial recognition or matching of table cell content.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.07444763183594, 460.4577331542969, 372.50848388671875, 472.3045959472656], "page": 6, "span": [0, 36], "__ref_s3_data": null}], "text": "4 Optimised Table Structure Language", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [133.82858276367188, 350.400146484375, 480.5947265625, 443.65216064453125], "page": 6, "span": [0, 563], "__ref_s3_data": null}], "text": "To mitigate the issues with HTML in Im2Seq-based TSR models laid out before, we propose here our Optimised Table Structure Language (OTSL). OTSL is designed to express table structure with a minimized vocabulary and a simple set of rules, which are both significantly reduced compared to HTML. At the same time, OTSL enables easy error detection and correction during sequence generation. We further demonstrate how the compact structure representation and minimized sequence length improves prediction accuracy and inference time in the TableFormer architecture.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.0214385986328, 316.9593811035156, 261.80108642578125, 326.9925231933594], "page": 6, "span": [0, 23], "__ref_s3_data": null}], "text": "4.1 Language Definition", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [134.03182983398438, 269.9826354980469, 480.5887145996094, 303.5955505371094], "page": 6, "span": [0, 165], "__ref_s3_data": null}], "text": "In Figure 3, we illustrate how the OTSL is defined. 
In essence, the OTSL defines only 5 tokens that directly describe a tabular structure based on an atomic 2D grid.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [149.35653686523438, 256.95648193359375, 409.3113708496094, 266.98114013671875], "page": 6, "span": [0, 57], "__ref_s3_data": null}], "text": "The OTSL vocabulary is comprised of the following tokens:", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [139.9448699951172, 235.22317504882812, 460.54443359375, 245.30445861816406], "page": 6, "span": [0, 72], "__ref_s3_data": null}], "text": "-\"C\" cell a new table cell that either has or does not have cell content", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [139.9716796875, 210.11834716796875, 480.59393310546875, 232.8718719482422], "page": 6, "span": [0, 82], "__ref_s3_data": null}], "text": "-\"L\" cell left-looking cell , merging with the left neighbor cell to create a span", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [140.17970275878906, 184.99545288085938, 480.58856201171875, 207.94252014160156], "page": 6, "span": [0, 81], "__ref_s3_data": null}], "text": "-\"U\" cell up-looking cell , merging with the upper neighbor cell to create a span", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [139.92364501953125, 172.88253784179688, 454.5549621582031, 183.41383361816406], "page": 6, "span": [0, 71], "__ref_s3_data": null}], "text": "-\"X\" cell cross cell , to merge with both left and upper neighbor cells", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [139.87696838378906, 160.93917846679688, 328.61676025390625, 170.83633422851562], "page": 6, "span": [0, 40], "__ref_s3_data": null}], "text": "-\"NL\" new-line , switch to the next row.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [134.19346618652344, 127.14515686035156, 480.5928039550781, 148.89442443847656], "page": 6, "span": [0, 99], "__ref_s3_data": null}], "text": "A notable attribute of OTSL is that it has the capability of achieving lossless conversion to HTML.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [193.9747772216797, 689.7752685546875, 447.54290771484375, 698.8756103515625], "page": 7, "span": [0, 60], "__ref_s3_data": null}], "text": "Optimized Table Tokenization for Table Structure Recognition", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [475.3976135253906, 690.1593017578125, 480.59124755859375, 698.609375], "page": 7, "span": [0, 1], "__ref_s3_data": null}], "text": "7", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [133.8881378173828, 635.6204833984375, 480.58740234375, 667.1154174804688], "page": 7, "span": [0, 207], "__ref_s3_data": null}], "text": "Fig. 3. 
OTSL description of table structure: A - table example; B - graphical representation of table structure; C - mapping structure on a grid; D - OTSL structure encoding; E - explanation on cell encoding", "type": "caption", "name": "Caption", "font": null}, {"name": "Picture", "type": "figure", "$ref": "#/figures/2"}, {"prov": [{"bbox": [134.2874298095703, 477.7056579589844, 246.78787231445312, 487.5195007324219], "page": 7, "span": [0, 19], "__ref_s3_data": null}], "text": "4.2 Language Syntax", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [134.23097229003906, 457.80255126953125, 363.7961730957031, 467.56781005859375], "page": 7, "span": [0, 51], "__ref_s3_data": null}], "text": "The OTSL representation follows these syntax rules:", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [138.97299194335938, 424.0662536621094, 480.5890197753906, 445.8700256347656], "page": 7, "span": [0, 108], "__ref_s3_data": null}], "text": "1. Left-looking cell rule : The left neighbour of an \"L\" cell must be either another \"L\" cell or a \"C\" cell.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [138.19281005859375, 400.15325927734375, 480.59228515625, 421.95819091796875], "page": 7, "span": [0, 106], "__ref_s3_data": null}], "text": "2. Up-looking cell rule : The upper neighbour of a \"U\" cell must be either another \"U\" cell or a \"C\" cell.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [138.06527709960938, 388.19525146484375, 226.0736083984375, 397.4916687011719], "page": 7, "span": [0, 20], "__ref_s3_data": null}], "text": "3. Cross cell rule :", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [146.40036010742188, 352.3262939453125, 480.5923767089844, 396.9922180175781], "page": 7, "span": [0, 169], "__ref_s3_data": null}], "text": ": The left neighbour of an \"X\" cell must be either another \"X\" cell or a \"U\" cell, and the upper neighbour of an \"X\" cell must be either another \"X\" cell or an \"L\" cell.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [138.39491271972656, 339.79541015625, 474.5901794433594, 349.8867492675781], "page": 7, "span": [0, 78], "__ref_s3_data": null}], "text": "4. First row rule : Only \"L\" cells and \"C\" cells are allowed in the first row.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [138.3254852294922, 316.4543151855469, 480.58746337890625, 338.0946960449219], "page": 7, "span": [0, 84], "__ref_s3_data": null}], "text": "5. First column rule : Only \"U\" cells and \"C\" cells are allowed in the first column.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [138.22427368164062, 292.2819519042969, 480.5945739746094, 314.491455078125], "page": 7, "span": [0, 144], "__ref_s3_data": null}], "text": "6. Rectangular rule : The table representation is always rectangular - all rows must have an equal number of tokens, terminated with \"NL\" token.", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [133.6158447265625, 149.74966430664062, 480.5958251953125, 280.5412292480469], "page": 7, "span": [0, 848], "__ref_s3_data": null}], "text": "The application of these rules gives OTSL a set of unique properties. First of all, the OTSL enforces a strictly rectangular structure representation, where every new-line token starts a new row. 
As a consequence, all rows and all columns have exactly the same number of tokens, irrespective of cell spans. Secondly, the OTSL representation is unambiguous: Every table structure is represented in one way. In this representation every table cell corresponds to a \"C\"-cell token, which in case of spans is always located in the top-left corner of the table cell definition. Third, OTSL syntax rules are only backward-looking. As a consequence, every predicted token can be validated straight during sequence generation by looking at the previously predicted sequence. As such, OTSL can guarantee that every predicted sequence is syntactically valid.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.04405212402344, 126.91014099121094, 480.5926513671875, 148.8981170654297], "page": 7, "span": [0, 153], "__ref_s3_data": null}], "text": "These characteristics can be easily learned by sequence generator networks, as we demonstrate further below. We find strong indications that this pattern", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.1900634765625, 690.1593017578125, 139.46353149414062, 698.3311767578125], "page": 8, "span": [0, 1], "__ref_s3_data": null}], "text": "8", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [167.40870666503906, 690.0598754882812, 231.72227478027344, 699.074462890625], "page": 8, "span": [0, 16], "__ref_s3_data": null}], "text": "M. Lysak, et al.", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [134.2002410888672, 651.7838745117188, 480.5888366699219, 673.7068481445312], "page": 8, "span": [0, 84], "__ref_s3_data": null}], "text": "reduces significantly the column drift seen in the HTML based models (see Figure 5).", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.25576782226562, 620.8721313476562, 319.3470764160156, 630.8031005859375], "page": 8, "span": [0, 35], "__ref_s3_data": null}], "text": "4.3 Error-detection and -mitigation", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [133.90631103515625, 492.9853515625, 480.59576416015625, 610.5565185546875], "page": 8, "span": [0, 797], "__ref_s3_data": null}], "text": "The design of OTSL allows to validate a table structure easily on an unfinished sequence. The detection of an invalid sequence token is a clear indication of a prediction mistake, however a valid sequence by itself does not guarantee prediction correctness. Different heuristics can be used to correct token errors in an invalid sequence and thus increase the chances for accurate predictions. Such heuristics can be applied either after the prediction of each token, or at the end on the entire predicted sequence. 
For example a simple heuristic which can correct the predicted OTSL sequence on-the-fly is to verify if the token with the highest prediction confidence invalidates the predicted sequence, and replace it by the token with the next highest confidence until OTSL rules are satisfied.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.63143920898438, 459.85089111328125, 229.03533935546875, 471.56646728515625], "page": 8, "span": [0, 13], "__ref_s3_data": null}], "text": "5 Experiments", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [133.63893127441406, 339.67877197265625, 480.6024475097656, 445.8916015625], "page": 8, "span": [0, 684], "__ref_s3_data": null}], "text": "To evaluate the impact of OTSL on prediction accuracy and inference times, we conducted a series of experiments based on the TableFormer model (Figure 4) with two objectives: Firstly we evaluate the prediction quality and performance of OTSL vs. HTML after performing Hyper Parameter Optimization (HPO) on the canonical PubTabNet data set. Secondly we pick the best hyper-parameters found in the first step and evaluate how OTSL impacts the performance of TableFormer after training on other publicly available data sets (FinTabNet, PubTables-1M [14]). The ground truth (GT) from all data sets has been converted into OTSL format for this purpose, and will be made publicly available.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.0367889404297, 287.69140625, 480.5908203125, 308.2715148925781], "page": 8, "span": [0, 104], "__ref_s3_data": null}], "text": "Fig. 4. Architecture sketch of the TableFormer model, which is a representative for the Im2Seq approach.", "type": "caption", "name": "Caption", "font": null}, {"name": "Picture", "type": "figure", "$ref": "#/figures/3"}, {"prov": [{"bbox": [133.83853149414062, 126.85651397705078, 480.59173583984375, 172.45193481445312], "page": 8, "span": [0, 299], "__ref_s3_data": null}], "text": "We rely on standard metrics such as Tree Edit Distance score (TEDs) for table structure prediction, and Mean Average Precision (mAP) with 0.75 Intersection Over Union (IOU) threshold for the bounding-box predictions of table cells. The predicted OTSL structures were converted back to HTML format in", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [193.94395446777344, 689.7586669921875, 447.54290771484375, 698.8834228515625], "page": 9, "span": [0, 60], "__ref_s3_data": null}], "text": "Optimized Table Tokenization for Table Structure Recognition", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [474.9051818847656, 690.1593017578125, 480.59124755859375, 698.5001831054688], "page": 9, "span": [0, 1], "__ref_s3_data": null}], "text": "9", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [133.90585327148438, 640.3582153320312, 480.5957946777344, 673.7608642578125], "page": 9, "span": [0, 163], "__ref_s3_data": null}], "text": "order to compute the TED score. 
Inference timing results for all experiments were obtained from the same machine on a single core with AMD EPYC 7763 CPU @2.45 GHz.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.28504943847656, 613.6966552734375, 318.44842529296875, 623.6006469726562], "page": 9, "span": [0, 32], "__ref_s3_data": null}], "text": "5.1 Hyper Parameter Optimization", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [133.80441284179688, 537.6300659179688, 481.1519775390625, 607.1452026367188], "page": 9, "span": [0, 423], "__ref_s3_data": null}], "text": "We have chosen the PubTabNet data set to perform HPO, since it includes a highly diverse set of tables. Also we report TED scores separately for simple and complex tables (tables with cell spans). Results are presented in Table. 1. It is evident that with OTSL, our model achieves the same TED score and slightly better mAP scores in comparison to HTML. However OTSL yields a 2x speed up in the inference runtime over HTML.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [133.88543701171875, 464.55596923828125, 480.59539794921875, 517.7815551757812], "page": 9, "span": [0, 398], "__ref_s3_data": null}], "text": "Table 1. HPO performed in OTSL and HTML representation on the same transformer-based TableFormer [9] architecture, trained only on PubTabNet [22]. Effects of reducing the # of layers in encoder and decoder stages of the model show that smaller models trained on OTSL perform better, especially in recognizing complex table structures, and maintain a much higher mAP score than the HTML counterpart.", "type": "caption", "name": "Caption", "font": null}, {"name": "Table", "type": "table", "$ref": "#/tables/0"}, {"prov": [{"bbox": [134.48985290527344, 274.2215881347656, 264.4033203125, 284.3811950683594], "page": 9, "span": [0, 24], "__ref_s3_data": null}], "text": "5.2 Quantitative Results", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [133.97792053222656, 174.46827697753906, 480.59576416015625, 268.4878234863281], "page": 9, "span": [0, 555], "__ref_s3_data": null}], "text": "We picked the model parameter configuration that produced the best prediction quality (enc=6, dec=6, heads=8) with PubTabNet alone, then independently trained and evaluated it on three publicly available data sets: PubTabNet (395k samples), FinTabNet (113k samples) and PubTables-1M (about 1M samples). Performance results are presented in Table. 2. It is clearly evident that the model trained on OTSL outperforms HTML across the board, keeping high TEDs and mAP scores even on difficult financial tables (FinTabNet) that contain sparse and large tables.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [133.90371704101562, 126.73831176757812, 480.6639099121094, 172.7313995361328], "page": 9, "span": [0, 289], "__ref_s3_data": null}], "text": "Additionally, the results show that OTSL has an advantage over HTML when applied on a bigger data set like PubTables-1M and achieves significantly improved scores. 
Finally, OTSL achieves faster inference due to fewer decoding steps which is a result of the reduced sequence representation.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.6792755126953, 690.1593017578125, 144.2487335205078, 698.4376831054688], "page": 10, "span": [0, 2], "__ref_s3_data": null}], "text": "10", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [167.2496337890625, 690.1593017578125, 231.72048950195312, 699.0352783203125], "page": 10, "span": [0, 16], "__ref_s3_data": null}], "text": "M. Lysak, et al.", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [134.00595092773438, 645.5076904296875, 480.59356689453125, 677.1614379882812], "page": 10, "span": [0, 192], "__ref_s3_data": null}], "text": "Table 2. TSR and cell detection results compared between OTSL and HTML on the PubTabNet [22], FinTabNet [21] and PubTables-1M [14] data sets using TableFormer [9] (with enc=6, dec=6, heads=8).", "type": "caption", "name": "Caption", "font": null}, {"name": "Table", "type": "table", "$ref": "#/tables/1"}, {"prov": [{"bbox": [134.25314331054688, 493.7161560058594, 257.19561767578125, 503.76678466796875], "page": 10, "span": [0, 23], "__ref_s3_data": null}], "text": "5.3 Qualitative Results", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [133.7931365966797, 425.5223083496094, 480.6096496582031, 483.0732421875], "page": 10, "span": [0, 309], "__ref_s3_data": null}], "text": "To illustrate the qualitative differences between OTSL and HTML, Figure 5 demonstrates less overlap and more accurate bounding boxes with OTSL. In Figure 6, OTSL proves to be more effective in handling tables with longer token sequences, resulting in even more precise structure prediction and bounding boxes.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [133.934326171875, 352.2828369140625, 480.591064453125, 395.2126770019531], "page": 10, "span": [0, 270], "__ref_s3_data": null}], "text": "Fig. 5. The OTSL model produces more accurate bounding boxes with less overlap (E) than the HTML model (D), when predicting the structure of a sparse table (A), at twice the inference speed because of shorter sequence length (B),(C). \"PMC2807444_006_00.png\" PubTabNet. 
\u03bc", "type": "caption", "name": "Caption", "font": null}, {"name": "Picture", "type": "figure", "$ref": "#/figures/4"}, {"prov": [{"bbox": [227.91465759277344, 116.65360260009766, 230.10028076171875, 126.1739730834961], "page": 10, "span": [0, 1], "__ref_s3_data": null}], "text": "\u03bc", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [300.58056640625, 98.57134246826172, 302.72637939453125, 108.3780517578125], "page": 10, "span": [0, 1], "__ref_s3_data": null}], "text": "\u2265", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [194.172119140625, 689.804443359375, 447.54290771484375, 698.850830078125], "page": 11, "span": [0, 60], "__ref_s3_data": null}], "text": "Optimized Table Tokenization for Table Structure Recognition", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [471.22021484375, 690.1593017578125, 480.5894775390625, 698.3983154296875], "page": 11, "span": [0, 2], "__ref_s3_data": null}], "text": "11", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [134.00157165527344, 613.6331176757812, 480.82830810546875, 667.0059814453125], "page": 11, "span": [0, 390], "__ref_s3_data": null}], "text": "Fig. 6. Visualization of predicted structure and detected bounding boxes on a complex table with many rows. The OTSL model (B) captured repeating pattern of horizontally merged cells from the GT (A), unlike the HTML model (C). The HTML model also didn't complete the HTML sequence correctly and displayed a lot more of drift and overlap of bounding boxes. \"PMC5406406_003_01.png\" PubTabNet.", "type": "caption", "name": "Caption", "font": null}, {"name": "Picture", "type": "figure", "$ref": "#/figures/5"}, {"prov": [{"bbox": [134.69354248046875, 690.152099609375, 231.72048950195312, 698.9852905273438], "page": 12, "span": [0, 19], "__ref_s3_data": null}], "text": "12 M. Lysak, et al.", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [134.32138061523438, 663.8826293945312, 219.25479125976562, 675.0826416015625], "page": 12, "span": [0, 12], "__ref_s3_data": null}], "text": "6 Conclusion", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [134.07997131347656, 588.5181884765625, 480.595703125, 645.8515014648438], "page": 12, "span": [0, 330], "__ref_s3_data": null}], "text": "We demonstrated that representing tables in HTML for the task of table structure recognition with Im2Seq models is ill-suited and has serious limitations. Furthermore, we presented in this paper an Optimized Table Structure Language (OTSL) which, when compared to commonly used general purpose languages, has several key benefits.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [133.63015747070312, 467.4183654785156, 480.6451416015625, 585.736328125], "page": 12, "span": [0, 724], "__ref_s3_data": null}], "text": "First and foremost, given the same network configuration, inference time for a table-structure prediction is about 2 times faster compared to the conventional HTML approach. This is primarily owed to the shorter sequence length of the OTSL representation. Additional performance benefits can be obtained with HPO (hyper parameter optimization). As we demonstrate in our experiments, models trained on OTSL can be significantly smaller, e.g. by reducing the number of encoder and decoder layers, while preserving comparatively good prediction quality. 
This can further improve inference performance, yielding 5-6 times faster inference speed in OTSL with prediction quality comparable to models trained on HTML (see Table 1).", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [133.8241424560547, 323.7073974609375, 480.5948181152344, 465.1226806640625], "page": 12, "span": [0, 926], "__ref_s3_data": null}], "text": "Secondly, OTSL has more inherent structure and a significantly restricted vocabulary size. This allows autoregressive models to perform better in the TED metric, but especially with regards to prediction accuracy of the table-cell bounding boxes (see Table 2). As shown in Figure 5, we observe that the OTSL drastically reduces the drift for table cell bounding boxes at high row count and in sparse tables. This leads to more accurate predictions and a significant reduction in post-processing complexity, which is an undesired necessity in HTML-based Im2Seq models. Significant novelty lies in OTSL syntactical rules, which are few, simple and always backwards looking. Each new token can be validated only by analyzing the sequence of previous tokens, without requiring the entire sequence to detect mistakes. This in return allows to perform structural error detection and correction on-the-fly during sequence generation.", "type": "paragraph", "name": "Text", "font": null}, {"prov": [{"bbox": [134.31680297851562, 287.61077880859375, 197.68641662597656, 298.98321533203125], "page": 12, "span": [0, 10], "__ref_s3_data": null}], "text": "References", "type": "subtitle-level-1", "name": "Section-header", "font": null}, {"prov": [{"bbox": [139.37100219726562, 227.38706970214844, 480.5920104980469, 269.8235168457031], "page": 12, "span": [0, 270], "__ref_s3_data": null}], "text": "1. Auer, C., Dolfi, M., Carvalho, A., Ramis, C.B., Staar, P.W.J.: Delivering document conversion as a cloud service with high throughput and responsiveness. CoRR abs/2206.00785 (2022). https://doi.org/10.48550/arXiv.2206.00785 , https://doi.org/10.48550/arXiv.2206.00785", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [138.86715698242188, 182.8286590576172, 480.6174011230469, 225.87879943847656], "page": 12, "span": [0, 301], "__ref_s3_data": null}], "text": "2. Chen, B., Peng, D., Zhang, J., Ren, Y., Jin, L.: Complex table structure recognition in the wild using transformer and identity matrix-based augmentation. In: Porwal, U., Forn\u00e9s, A., Shafait, F. (eds.) Frontiers in Handwriting Recognition. pp. 545561. Springer International Publishing, Cham (2022)", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [138.72738647460938, 160.16236877441406, 480.5873107910156, 181.41339111328125], "page": 12, "span": [0, 140], "__ref_s3_data": null}], "text": "3. Chi, Z., Huang, H., Xu, H.D., Yu, H., Yin, W., Mao, X.L.: Complicated table structure recognition. arXiv preprint arXiv:1908.04729 (2019)", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [138.9593963623047, 126.65552520751953, 480.5882568359375, 157.8516387939453], "page": 12, "span": [0, 204], "__ref_s3_data": null}], "text": "4. Deng, Y., Rosenberg, D., Mann, G.: Challenges in end-to-end neural scientific table recognition. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 894-901. 
IEEE (2019)", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [194.0724639892578, 689.6328735351562, 447.54290771484375, 698.8519287109375], "page": 13, "span": [0, 60], "__ref_s3_data": null}], "text": "Optimized Table Tokenization for Table Structure Recognition", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [471.1661376953125, 690.1593017578125, 480.5894775390625, 698.4201049804688], "page": 13, "span": [0, 2], "__ref_s3_data": null}], "text": "13", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [138.6960906982422, 641.0914306640625, 480.59478759765625, 672.9320068359375], "page": 13, "span": [0, 203], "__ref_s3_data": null}], "text": "5. Kayal, P., Anand, M., Desai, H., Singh, M.: Tables to latex: structure and content extraction from scientific tables. International Journal on Document Analysis and Recognition (IJDAR) pp. 1-10 (2022)", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [138.54495239257812, 598.4913940429688, 480.7531433105469, 640.2967529296875], "page": 13, "span": [0, 264], "__ref_s3_data": null}], "text": "6. Lee, E., Kwon, J., Yang, H., Park, J., Lee, S., Koo, H.I., Cho, N.I.: Table structure recognition based on grid shape graph. In: 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). pp. 18681873. IEEE (2022)", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [139.07086181640625, 576.4161376953125, 480.5901184082031, 596.6123046875], "page": 13, "span": [0, 131], "__ref_s3_data": null}], "text": "7. Li, M., Cui, L., Huang, S., Wei, F., Zhou, M., Li, Z.: Tablebank: A benchmark dataset for table detection and recognition (2019)", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [138.5443878173828, 521.7116088867188, 480.8269348144531, 574.5029296875], "page": 13, "span": [0, 345], "__ref_s3_data": null}], "text": "8. Livathinos, N., Berrospi, C., Lysak, M., Kuropiatnyk, V., Nassar, A., Carvalho, A., Dolfi, M., Auer, C., Dinkla, K., Staar, P.: Robust pdf document conversion using recurrent neural networks. Proceedings of the AAAI Conference on Artificial Intelligence 35 (17), 15137-15145 (May 2021), https://ojs.aaai.org/index.php/ AAAI/article/view/17777", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [138.21878051757812, 487.909423828125, 480.5938720703125, 519.8042602539062], "page": 13, "span": [0, 234], "__ref_s3_data": null}], "text": "9. Nassar, A., Livathinos, N., Lysak, M., Staar, P.: Tableformer: Table structure understanding with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4614-4623 (June 2022)", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [134.7440185546875, 422.8146057128906, 480.6158447265625, 486.7056579589844], "page": 13, "span": [0, 413], "__ref_s3_data": null}], "text": "10. Pfitzmann, B., Auer, C., Dolfi, M., Nassar, A.S., Staar, P.W.J.: Doclaynet: A large human-annotated dataset for document-layout segmentation. In: Zhang, A., Rangwala, H. (eds.) KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 14 - 18, 2022. pp. 3743-3751. ACM (2022). 
https://doi.org/10.1145/3534678.3539043 , https:// doi.org/10.1145/3534678.3539043", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [134.48020935058594, 378.9383850097656, 480.59295654296875, 421.14239501953125], "page": 13, "span": [0, 295], "__ref_s3_data": null}], "text": "11. Prasad, D., Gadpal, A., Kapadni, K., Visave, M., Sultanpure, K.: Cascadetabnet: An approach for end to end table detection and structure recognition from imagebased documents. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops. pp. 572-573 (2020)", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [134.6136016845703, 334.68109130859375, 480.6297302246094, 377.08355712890625], "page": 13, "span": [0, 281], "__ref_s3_data": null}], "text": "12. Schreiber, S., Agne, S., Wolf, I., Dengel, A., Ahmed, S.: Deepdesrt: Deep learning for detection and structure recognition of tables in document images. In: 2017 14th IAPR international conference on document analysis and recognition (ICDAR). vol. 1, pp. 1162-1167. IEEE (2017)", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [134.72238159179688, 290.7889099121094, 480.75555419921875, 333.61895751953125], "page": 13, "span": [0, 275], "__ref_s3_data": null}], "text": "13. Siddiqui, S.A., Fateh, I.A., Rizvi, S.T.R., Dengel, A., Ahmed, S.: Deeptabstr: Deep learning based table structure recognition. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 1403-1409 (2019). https:// doi.org/10.1109/ICDAR.2019.00226", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [134.3740997314453, 247.3230743408203, 480.5928649902344, 289.9039306640625], "page": 13, "span": [0, 241], "__ref_s3_data": null}], "text": "14. Smock, B., Pesala, R., Abraham, R.: PubTables-1M: Towards comprehensive table extraction from unstructured documents. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4634-4642 (June 2022)", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [134.6051483154297, 181.90472412109375, 480.6208190917969, 245.70274353027344], "page": 13, "span": [0, 405], "__ref_s3_data": null}], "text": "15. Staar, P.W.J., Dolfi, M., Auer, C., Bekas, C.: Corpus conversion service: A machine learning platform to ingest documents at scale. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. pp. 774-782. KDD '18, Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3219819.3219834 , https://doi.org/10. 1145/3219819.3219834", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [134.76400756835938, 159.9412841796875, 480.5954284667969, 179.845703125], "page": 13, "span": [0, 96], "__ref_s3_data": null}], "text": "16. Wang, X.: Tabular Abstraction, Editing, and Formatting. Ph.D. thesis, CAN (1996), aAINN09397", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [134.76400756835938, 126.6559829711914, 480.5911865234375, 157.7118377685547], "page": 13, "span": [0, 195], "__ref_s3_data": null}], "text": "17. Xue, W., Li, Q., Tao, D.: Res2tim: Reconstruct syntactic structures from table images. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 749-755. 
IEEE (2019)", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [134.76499938964844, 690.1593017578125, 231.72048950195312, 699.0250244140625], "page": 14, "span": [0, 19], "__ref_s3_data": null}], "text": "14 M. Lysak, et al.", "type": "page-header", "name": "Page-header", "font": null}, {"prov": [{"bbox": [134.63540649414062, 641.2738647460938, 480.59112548828125, 673.007568359375], "page": 14, "span": [0, 223], "__ref_s3_data": null}], "text": "18. Xue, W., Yu, B., Wang, W., Tao, D., Li, Q.: Tgrnet: A table graph reconstruction network for table structure recognition. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 1295-1304 (2021)", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [134.76499938964844, 598.3690795898438, 480.9535217285156, 640.1014404296875], "page": 14, "span": [0, 269], "__ref_s3_data": null}], "text": "19. Ye, J., Qi, X., He, Y., Chen, Y., Gu, D., Gao, P., Xiao, R.: Pingan-vcgroup's solution for icdar 2021 competition on scientific literature parsing task b: Table recognition to html (2021). https://doi.org/10.48550/ARXIV.2105.01848 , https://arxiv.org/abs/2105.01848", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [134.35293579101562, 576.3993530273438, 480.5935363769531, 596.5462036132812], "page": 14, "span": [0, 147], "__ref_s3_data": null}], "text": "20. Zhang, Z., Zhang, J., Du, J., Wang, F.: Split, embed and merge: An accurate table structure recognizer. Pattern Recognition 126 , 108565 (2022)", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [134.2264862060547, 521.74560546875, 480.8044738769531, 574.3355712890625], "page": 14, "span": [0, 329], "__ref_s3_data": null}], "text": "21. Zheng, X., Burdick, D., Popa, L., Zhong, X., Wang, N.X.R.: Global table extractor (gte): A framework for joint table identification and cell structure recognition using visual context. In: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 697-706 (2021). https://doi.org/10.1109/WACV48630.2021. 00074", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [133.99171447753906, 477.6664123535156, 480.5955810546875, 519.9246826171875], "page": 14, "span": [0, 259], "__ref_s3_data": null}], "text": "22. Zhong, X., ShafieiBavani, E., Jimeno Yepes, A.: Image-based table recognition: Data, model, and evaluation. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds.) Computer Vision - ECCV 2020. pp. 564-580. Springer International Publishing, Cham (2020)", "type": "paragraph", "name": "List-item", "font": null}, {"prov": [{"bbox": [134.23336791992188, 444.7017822265625, 480.59454345703125, 475.69757080078125], "page": 14, "span": [0, 206], "__ref_s3_data": null}], "text": "23. Zhong, X., Tang, J., Yepes, A.J.: Publaynet: largest dataset ever for document layout analysis. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 1015-1022. IEEE (2019)", "type": "paragraph", "name": "List-item", "font": null}], "figures": [{"prov": [{"bbox": [150.0213623046875, 366.15130615234375, 464.4815673828125, 583.114990234375], "page": 2, "span": [0, 574], "__ref_s3_data": null}], "text": "Fig. 1. Comparison between HTML and OTSL table structure representation: (A) table-example with complex row and column headers, including a 2D empty span, (B) minimal graphical representation of table structure using rectangular layout, (C) HTML representation, (D) OTSL representation. 
This example demonstrates many of the key-features of OTSL, namely its reduced vocabulary size (12 versus 5 in this case), its reduced sequence length (55 versus 30) and a enhanced internal structure (variable token sequence length per row in HTML versus a fixed length of rows in OTSL).", "type": "figure", "bounding-box": null}, {"prov": [{"bbox": [137.5374755859375, 452.4152526855469, 476.1513366699219, 562.9699096679688], "page": 5, "span": [0, 73], "__ref_s3_data": null}], "text": "Fig. 2. Frequency of tokens in HTML and OTSL as they appear in PubTabNet.", "type": "figure", "bounding-box": null}, {"prov": [{"bbox": [164.22023010253906, 511.6170959472656, 448.9761047363281, 628.123291015625], "page": 7, "span": [0, 207], "__ref_s3_data": null}], "text": "Fig. 3. OTSL description of table structure: A - table example; B - graphical representation of table structure; C - mapping structure on a grid; D - OTSL structure encoding; E - explanation on cell encoding", "type": "figure", "bounding-box": null}, {"prov": [{"bbox": [141.4298095703125, 197.92733764648438, 472.34527587890625, 285.1344299316406], "page": 8, "span": [0, 104], "__ref_s3_data": null}], "text": "Fig. 4. Architecture sketch of the TableFormer model, which is a representative for the Im2Seq approach.", "type": "figure", "bounding-box": null}, {"prov": [{"bbox": [162.900146484375, 128.48397827148438, 451.3374328613281, 348.21990966796875], "page": 10, "span": [0, 270], "__ref_s3_data": null}], "text": "Fig. 5. The OTSL model produces more accurate bounding boxes with less overlap (E) than the HTML model (D), when predicting the structure of a sparse table (A), at twice the inference speed because of shorter sequence length (B),(C). \"PMC2807444_006_00.png\" PubTabNet. \u03bc", "type": "figure", "bounding-box": null}, {"prov": [{"bbox": [168.26930236816406, 157.55677795410156, 447.7568664550781, 609.8697509765625], "page": 11, "span": [0, 390], "__ref_s3_data": null}], "text": "Fig. 6. Visualization of predicted structure and detected bounding boxes on a complex table with many rows. The OTSL model (B) captured repeating pattern of horizontally merged cells from the GT (A), unlike the HTML model (C). The HTML model also didn't complete the HTML sequence correctly and displayed a lot more of drift and overlap of bounding boxes. \"PMC5406406_003_01.png\" PubTabNet.", "type": "figure", "bounding-box": null}], "tables": [{"prov": [{"bbox": [139.82040405273438, 322.2669982910156, 474.80023193359375, 454.9158935546875], "page": 9, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 1. HPO performed in OTSL and HTML representation on the same transformer-based TableFormer [9] architecture, trained only on PubTabNet [22]. 
Effects of reducing the # of layers in encoder and decoder stages of the model show that smaller models trained on OTSL perform better, especially in recognizing complex table structures, and maintain a much higher mAP score than the HTML counterpart.", "type": "table", "#-cols": 8, "#-rows": 7, "data": [[{"bbox": [160.3699951171875, 442.1952819824219, 168.0479278564453, 450.2650451660156], "spans": [[0, 0]], "text": "#", "type": "col_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [207.9739990234375, 442.1952819824219, 215.6519317626953, 450.2650451660156], "spans": [[0, 1]], "text": "#", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [239.79800415039062, 436.7162780761719, 278.3176574707031, 444.7860412597656], "spans": [[0, 2], [1, 2]], "text": "Language", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 0, "row-header": false, "row-span": [0, 2]}, {"bbox": [324.6700134277344, 442.1952819824219, 348.2641906738281, 450.2650451660156], "spans": [[0, 3], [0, 4], [0, 5]], "text": "TEDs", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 6], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [324.6700134277344, 442.1952819824219, 348.2641906738281, 450.2650451660156], "spans": [[0, 3], [0, 4], [0, 5]], "text": "TEDs", "type": "col_header", "col": 4, "col-header": false, "col-span": [3, 6], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [324.6700134277344, 442.1952819824219, 348.2641906738281, 450.2650451660156], "spans": [[0, 3], [0, 4], [0, 5]], "text": "TEDs", "type": "col_header", "col": 5, "col-header": false, "col-span": [3, 6], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [396.27099609375, 442.1952819824219, 417.1268310546875, 450.2650451660156], "spans": [[0, 6]], "text": "mAP", "type": "col_header", "col": 6, "col-header": false, "col-span": [6, 7], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [430.77099609375, 442.1952819824219, 467.1423034667969, 450.2650451660156], "spans": [[0, 7]], "text": "Inference", "type": "col_header", "col": 7, "col-header": false, "col-span": [7, 8], "row": 0, "row-header": false, "row-span": [0, 1]}], [{"bbox": [144.5919952392578, 429.2442932128906, 183.82806396484375, 437.3140563964844], "spans": [[1, 0]], "text": "enc-layers", "type": "col_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [192.1949920654297, 429.2442932128906, 231.43106079101562, 437.3140563964844], "spans": [[1, 1]], "text": "dec-layers", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [239.79800415039062, 436.7162780761719, 278.3176574707031, 444.7860412597656], "spans": [[0, 2], [1, 2]], "text": "Language", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [0, 2]}, {"bbox": [286.6860046386719, 429.2442932128906, 312.3326110839844, 437.3140563964844], "spans": [[1, 3]], "text": "simple", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [320.7019958496094, 429.2442932128906, 353.7198791503906, 437.3140563964844], "spans": [[1, 4]], "text": "complex", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], 
"row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [369.3059997558594, 429.2442932128906, 379.03094482421875, 437.3140563964844], "spans": [[1, 5]], "text": "all", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 6], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [394.927001953125, 431.2362976074219, 418.4727783203125, 439.3060607910156], "spans": [[1, 6]], "text": "(0.75)", "type": "col_header", "col": 6, "col-header": false, "col-span": [6, 7], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [427.14801025390625, 431.2362976074219, 470.76055908203125, 439.3060607910156], "spans": [[1, 7]], "text": "time (secs)", "type": "col_header", "col": 7, "col-header": false, "col-span": [7, 8], "row": 1, "row-header": false, "row-span": [1, 2]}], [{"bbox": [161.906005859375, 410.4142761230469, 166.512939453125, 418.4840393066406], "spans": [[2, 0]], "text": "6", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [209.50900268554688, 410.4142761230469, 214.11593627929688, 418.4840393066406], "spans": [[2, 1]], "text": "6", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [245.17599487304688, 402.9422912597656, 272.9395446777344, 423.96405029296875], "spans": [[2, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [289.0169982910156, 402.9422912597656, 310.0037536621094, 423.96405029296875], "spans": [[2, 3]], "text": "0.965 0.969", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [326.7170104980469, 402.9422912597656, 347.7037658691406, 423.96405029296875], "spans": [[2, 4]], "text": "0.934 0.927", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [363.6759948730469, 402.9422912597656, 384.6627502441406, 423.96405029296875], "spans": [[2, 5]], "text": "0.955 0.955", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [396.20599365234375, 402.9422912597656, 417.1927490234375, 424.0268249511719], "spans": [[2, 6]], "text": "0.88 0.857", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [439.5270080566406, 402.9422912597656, 458.3842468261719, 424.0268249511719], "spans": [[2, 7]], "text": "2.73 5.39", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [161.906005859375, 384.11328125, 166.512939453125, 392.18304443359375], "spans": [[3, 0]], "text": "4", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [209.50900268554688, 384.11328125, 214.11593627929688, 392.18304443359375], "spans": [[3, 1]], "text": "4", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [245.17599487304688, 376.64129638671875, 272.9395446777344, 397.66204833984375], "spans": [[3, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [289.0169982910156, 376.64129638671875, 
310.0037536621094, 397.66204833984375], "spans": [[3, 3]], "text": "0.938 0.952", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [326.7170104980469, 376.64129638671875, 347.7037658691406, 397.66204833984375], "spans": [[3, 4]], "text": "0.904 0.909", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [363.6759948730469, 389.59228515625, 384.6627502441406, 397.66204833984375], "spans": [[3, 5]], "text": "0.927", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [394.6180114746094, 389.79852294921875, 418.77886962890625, 397.7248229980469], "spans": [[3, 6]], "text": "0.853", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [439.5270080566406, 389.79852294921875, 458.3842468261719, 397.7248229980469], "spans": [[3, 7]], "text": "1.97", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": [161.906005859375, 357.8122863769531, 166.512939453125, 365.8820495605469], "spans": [[4, 0]], "text": "2", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [209.50900268554688, 357.8122863769531, 214.11593627929688, 365.8820495605469], "spans": [[4, 1]], "text": "4", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [245.17599487304688, 350.3403015136719, 272.9395446777344, 371.3610534667969], "spans": [[4, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [289.0169982910156, 363.2912902832031, 310.0037536621094, 371.3610534667969], "spans": [[4, 3]], "text": "0.923", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [326.7170104980469, 350.3403015136719, 347.7037658691406, 371.3610534667969], "spans": [[4, 4]], "text": "0.897 0.901", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [362.0880126953125, 363.2912902832031, 386.2488708496094, 384.7738342285156], "spans": [[4, 5]], "text": "0.938 0.915", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [396.20599365234375, 376.64129638671875, 417.1927490234375, 384.7110595703125], "spans": [[4, 6]], "text": "0.843", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [440.7669982910156, 376.64129638671875, 457.1468200683594, 384.7110595703125], "spans": [[4, 7]], "text": "3.77", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": null, "spans": [[5, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": null, "spans": [[5, 1]], "text": "", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": null, "spans": [[5, 2]], "text": "", "type": "body", "col": 2, "col-header": false, 
"col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [289.0169982910156, 350.3403015136719, 310.0037536621094, 358.4100646972656], "spans": [[5, 3]], "text": "0.945", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": null, "spans": [[5, 4]], "text": "", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [362.0880126953125, 350.5465393066406, 386.2488708496094, 358.47283935546875], "spans": [[5, 5]], "text": "0.931", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [394.6180114746094, 350.3403015136719, 418.77886962890625, 371.423828125], "spans": [[5, 6]], "text": "0.859 0.834", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [439.5270080566406, 350.3403015136719, 458.3842468261719, 371.423828125], "spans": [[5, 7]], "text": "1.91 3.81", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [161.906005859375, 331.5102844238281, 166.512939453125, 339.5800476074219], "spans": [[6, 0]], "text": "4", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [209.50900268554688, 331.5102844238281, 214.11593627929688, 339.5800476074219], "spans": [[6, 1]], "text": "2", "type": "body", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [245.17599487304688, 324.0382995605469, 272.9395446777344, 345.06005859375], "spans": [[6, 2]], "text": "OTSL HTML", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [289.0169982910156, 324.0382995605469, 310.0037536621094, 345.06005859375], "spans": [[6, 3]], "text": "0.952 0.944", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [326.7170104980469, 324.0382995605469, 347.7037658691406, 345.06005859375], "spans": [[6, 4]], "text": "0.92 0.903", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [362.0880126953125, 324.0382995605469, 386.2488708496094, 345.1228332519531], "spans": [[6, 5]], "text": "0.942 0.931", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [394.6180114746094, 324.0382995605469, 418.77886962890625, 345.1228332519531], "spans": [[6, 6]], "text": "0.857 0.824", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [439.5270080566406, 324.0382995605469, 458.3842468261719, 345.1228332519531], "spans": [[6, 7]], "text": "1.22 2", "type": "body", "col": 7, "col-header": false, "col-span": [7, 8], "row": 6, "row-header": false, "row-span": [6, 7]}]], "model": null, "bounding-box": null}, {"prov": [{"bbox": [143.81715393066406, 528.7755126953125, 470.8412170410156, 635.86865234375], "page": 10, "span": [0, 0], "__ref_s3_data": null}], "text": "Table 2. 
TSR and cell detection results compared between OTSL and HTML on the PubTabNet [22], FinTabNet [21] and PubTables-1M [14] data sets using TableFormer [9] (with enc=6, dec=6, heads=8).", "type": "table", "#-cols": 7, "#-rows": 8, "data": [[{"bbox": null, "spans": [[0, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [215.52499389648438, 617.3963012695312, 254.04464721679688, 625.4660034179688], "spans": [[0, 1], [1, 1]], "text": "Language", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 0, "row-header": false, "row-span": [0, 2]}, {"bbox": [300.3970031738281, 622.851318359375, 323.9911804199219, 630.9210205078125], "spans": [[0, 2], [0, 3], [0, 4]], "text": "TEDs", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [300.3970031738281, 622.851318359375, 323.9911804199219, 630.9210205078125], "spans": [[0, 2], [0, 3], [0, 4]], "text": "TEDs", "type": "col_header", "col": 3, "col-header": false, "col-span": [2, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [300.3970031738281, 622.851318359375, 323.9911804199219, 630.9210205078125], "spans": [[0, 2], [0, 3], [0, 4]], "text": "TEDs", "type": "col_header", "col": 4, "col-header": false, "col-span": [2, 5], "row": 0, "row-header": false, "row-span": [0, 1]}, {"bbox": [370.3450012207031, 617.371337890625, 414.7466125488281, 625.4410400390625], "spans": [[0, 5], [1, 5]], "text": "mAP(0.75)", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 6], "row": 0, "row-header": false, "row-span": [0, 2]}, {"bbox": [423.114013671875, 611.892333984375, 466.7265625, 630.9210205078125], "spans": [[0, 6], [1, 6]], "text": "Inference time (secs)", "type": "col_header", "col": 6, "col-header": false, "col-span": [6, 7], "row": 0, "row-header": false, "row-span": [0, 2]}], [{"bbox": null, "spans": [[1, 0]], "text": "", "type": "body", "col": 0, "col-header": false, "col-span": [0, 1], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [215.52499389648438, 617.3963012695312, 254.04464721679688, 625.4660034179688], "spans": [[0, 1], [1, 1]], "text": "Language", "type": "col_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 1, "row-header": false, "row-span": [0, 2]}, {"bbox": [262.4129943847656, 609.8992919921875, 288.0596008300781, 617.968994140625], "spans": [[1, 2]], "text": "simple", "type": "col_header", "col": 2, "col-header": false, "col-span": [2, 3], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [296.4289855957031, 609.8992919921875, 329.4468688964844, 617.968994140625], "spans": [[1, 3]], "text": "complex", "type": "col_header", "col": 3, "col-header": false, "col-span": [3, 4], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [345.0329895019531, 609.8992919921875, 354.7579345703125, 617.968994140625], "spans": [[1, 4]], "text": "all", "type": "col_header", "col": 4, "col-header": false, "col-span": [4, 5], "row": 1, "row-header": false, "row-span": [1, 2]}, {"bbox": [370.3450012207031, 617.371337890625, 414.7466125488281, 625.4410400390625], "spans": [[0, 5], [1, 5]], "text": "mAP(0.75)", "type": "col_header", "col": 5, "col-header": false, "col-span": [5, 6], "row": 1, "row-header": false, "row-span": [0, 2]}, {"bbox": [423.114013671875, 611.892333984375, 466.7265625, 630.9210205078125], "spans": [[0, 6], [1, 6]], "text": "Inference time (secs)", "type": 
"col_header", "col": 6, "col-header": false, "col-span": [6, 7], "row": 1, "row-header": false, "row-span": [0, 2]}], [{"bbox": [154.53799438476562, 591.0703125, 201.2412872314453, 599.1400146484375], "spans": [[2, 0], [3, 0]], "text": "PubTabNet", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 2, "row-header": false, "row-span": [2, 4]}, {"bbox": [222.43699645996094, 596.54931640625, 247.13226318359375, 604.6190185546875], "spans": [[2, 1]], "text": "OTSL", "type": "row_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [264.7439880371094, 596.54931640625, 285.7307434082031, 604.6190185546875], "spans": [[2, 2]], "text": "0.965", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [302.4440002441406, 596.54931640625, 323.4307556152344, 604.6190185546875], "spans": [[2, 3]], "text": "0.934", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [339.40301513671875, 596.54931640625, 360.3897705078125, 604.6190185546875], "spans": [[2, 4]], "text": "0.955", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [383.1159973144531, 596.7554931640625, 401.9732360839844, 604.6818237304688], "spans": [[2, 5]], "text": "0.88", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 2, "row-header": false, "row-span": [2, 3]}, {"bbox": [435.4930114746094, 596.7554931640625, 454.3502502441406, 604.6818237304688], "spans": [[2, 6]], "text": "2.73", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 2, "row-header": false, "row-span": [2, 3]}], [{"bbox": [154.53799438476562, 591.0703125, 201.2412872314453, 599.1400146484375], "spans": [[2, 0], [3, 0]], "text": "PubTabNet", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 3, "row-header": false, "row-span": [2, 4]}, {"bbox": [220.9029998779297, 583.5983276367188, 248.66656494140625, 591.6680297851562], "spans": [[3, 1]], "text": "HTML", "type": "row_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [264.7439880371094, 583.5983276367188, 285.7307434082031, 591.6680297851562], "spans": [[3, 2]], "text": "0.969", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [302.4440002441406, 583.5983276367188, 323.4307556152344, 591.6680297851562], "spans": [[3, 3]], "text": "0.927", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [339.40301513671875, 583.5983276367188, 360.3897705078125, 591.6680297851562], "spans": [[3, 4]], "text": "0.955", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [382.052001953125, 583.5983276367188, 403.03875732421875, 591.6680297851562], "spans": [[3, 5]], "text": "0.857", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 3, "row-header": false, "row-span": [3, 4]}, {"bbox": [436.73199462890625, 583.5983276367188, 453.11181640625, 591.6680297851562], "spans": [[3, 6]], "text": "5.39", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 3, "row-header": false, "row-span": [3, 4]}], [{"bbox": 
[155.94500732421875, 564.768310546875, 199.833740234375, 572.8380126953125], "spans": [[4, 0], [5, 0]], "text": "FinTabNet", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 4, "row-header": false, "row-span": [4, 6]}, {"bbox": [222.43699645996094, 570.248291015625, 247.13226318359375, 578.3179931640625], "spans": [[4, 1]], "text": "OTSL", "type": "row_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [264.7439880371094, 570.248291015625, 285.7307434082031, 578.3179931640625], "spans": [[4, 2]], "text": "0.955", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [302.4440002441406, 570.248291015625, 323.4307556152344, 578.3179931640625], "spans": [[4, 3]], "text": "0.961", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [337.81500244140625, 570.4544677734375, 361.9758605957031, 578.3807983398438], "spans": [[4, 4]], "text": "0.959", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [380.4639892578125, 570.4544677734375, 404.6248474121094, 578.3807983398438], "spans": [[4, 5]], "text": "0.862", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 4, "row-header": false, "row-span": [4, 5]}, {"bbox": [435.4930114746094, 570.4544677734375, 454.3502502441406, 578.3807983398438], "spans": [[4, 6]], "text": "1.85", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 4, "row-header": false, "row-span": [4, 5]}], [{"bbox": [155.94500732421875, 564.768310546875, 199.833740234375, 572.8380126953125], "spans": [[4, 0], [5, 0]], "text": "FinTabNet", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 5, "row-header": false, "row-span": [4, 6]}, {"bbox": [220.9029998779297, 557.2963256835938, 248.66656494140625, 565.3660278320312], "spans": [[5, 1]], "text": "HTML", "type": "row_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [264.7439880371094, 557.2963256835938, 285.7307434082031, 565.3660278320312], "spans": [[5, 2]], "text": "0.917", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [302.4440002441406, 557.2963256835938, 323.4307556152344, 565.3660278320312], "spans": [[5, 3]], "text": "0.922", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [341.70599365234375, 557.2963256835938, 358.0858154296875, 565.3660278320312], "spans": [[5, 4]], "text": "0.92", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [382.052001953125, 557.2963256835938, 403.03875732421875, 565.3660278320312], "spans": [[5, 5]], "text": "0.722", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 5, "row-header": false, "row-span": [5, 6]}, {"bbox": [436.73199462890625, 557.2963256835938, 453.11181640625, 565.3660278320312], "spans": [[5, 6]], "text": "3.26", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 5, "row-header": false, "row-span": [5, 6]}], [{"bbox": [148.62600708007812, 538.4673461914062, 207.15240478515625, 546.5370483398438], "spans": [[6, 0], [7, 0]], "text": 
"PubTables-1M", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 6, "row-header": false, "row-span": [6, 8]}, {"bbox": [222.43699645996094, 543.9473266601562, 247.13226318359375, 552.0170288085938], "spans": [[6, 1]], "text": "OTSL", "type": "row_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [264.7439880371094, 543.9473266601562, 285.7307434082031, 552.0170288085938], "spans": [[6, 2]], "text": "0.987", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [302.4440002441406, 543.9473266601562, 323.4307556152344, 552.0170288085938], "spans": [[6, 3]], "text": "0.964", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [337.81500244140625, 544.1535034179688, 361.9758605957031, 552.079833984375], "spans": [[6, 4]], "text": "0.977", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [380.4639892578125, 544.1535034179688, 404.6248474121094, 552.079833984375], "spans": [[6, 5]], "text": "0.896", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 6, "row-header": false, "row-span": [6, 7]}, {"bbox": [435.4930114746094, 544.1535034179688, 454.3502502441406, 552.079833984375], "spans": [[6, 6]], "text": "1.79", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 6, "row-header": false, "row-span": [6, 7]}], [{"bbox": [148.62600708007812, 538.4673461914062, 207.15240478515625, 546.5370483398438], "spans": [[6, 0], [7, 0]], "text": "PubTables-1M", "type": "row_header", "col": 0, "col-header": false, "col-span": [0, 1], "row": 7, "row-header": false, "row-span": [6, 8]}, {"bbox": [220.9029998779297, 530.9953002929688, 248.66656494140625, 539.0650024414062], "spans": [[7, 1]], "text": "HTML", "type": "row_header", "col": 1, "col-header": false, "col-span": [1, 2], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [264.7439880371094, 530.9953002929688, 285.7307434082031, 539.0650024414062], "spans": [[7, 2]], "text": "0.983", "type": "body", "col": 2, "col-header": false, "col-span": [2, 3], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [302.4440002441406, 530.9953002929688, 323.4307556152344, 539.0650024414062], "spans": [[7, 3]], "text": "0.944", "type": "body", "col": 3, "col-header": false, "col-span": [3, 4], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [339.40301513671875, 530.9953002929688, 360.3897705078125, 539.0650024414062], "spans": [[7, 4]], "text": "0.966", "type": "body", "col": 4, "col-header": false, "col-span": [4, 5], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [382.052001953125, 530.9953002929688, 403.03875732421875, 539.0650024414062], "spans": [[7, 5]], "text": "0.889", "type": "body", "col": 5, "col-header": false, "col-span": [5, 6], "row": 7, "row-header": false, "row-span": [7, 8]}, {"bbox": [436.73199462890625, 530.9953002929688, 453.11181640625, 539.0650024414062], "spans": [[7, 6]], "text": "3.26", "type": "body", "col": 6, "col-header": false, "col-span": [6, 7], "row": 7, "row-header": false, "row-span": [7, 8]}]], "model": null, "bounding-box": null}], "bitmaps": null, "equations": [], "footnotes": [], "page-dimensions": [{"height": 792.0, "page": 1, "width": 612.0}, {"height": 792.0, "page": 2, "width": 612.0}, {"height": 792.0, "page": 3, 
"width": 612.0}, {"height": 792.0, "page": 4, "width": 612.0}, {"height": 792.0, "page": 5, "width": 612.0}, {"height": 792.0, "page": 6, "width": 612.0}, {"height": 792.0, "page": 7, "width": 612.0}, {"height": 792.0, "page": 8, "width": 612.0}, {"height": 792.0, "page": 9, "width": 612.0}, {"height": 792.0, "page": 10, "width": 612.0}, {"height": 792.0, "page": 11, "width": 612.0}, {"height": 792.0, "page": 12, "width": 612.0}, {"height": 792.0, "page": 13, "width": 612.0}, {"height": 792.0, "page": 14, "width": 612.0}], "page-footers": [], "page-headers": [], "_s3_data": null, "identifiers": null}
\ No newline at end of file
diff --git a/tests/data/redp5110.doctags.txt b/tests/data/redp5110.doctags.txt
new file mode 100644
index 00000000..c830a72f
--- /dev/null
+++ b/tests/data/redp5110.doctags.txt
@@ -0,0 +1,1843 @@
+
+Front cover
+
+Row and Column Access Control Support in IBM DB2 for i
+
+
+
+International Technical Support Organization
+Row and Column Access Control Support in IBM DB2 for i
+November 2014
+Note: Before using this information and the product it supports, read the information in "Notices" on page vii.
+First Edition (November 2014)
+This edition applies to Version 7, Release 2 of IBM i (product number 5770-SS1).
+© Copyright International Business Machines Corporation 2014. All rights reserved.
+Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
+Contents
+
+Notices
+This information was developed for products and services offered in the U.S.A.
+IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
+IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:
+IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
+The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
+This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.
+Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.
+IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.
+Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.
+Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
+This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.
+COPYRIGHT LICENSE:
+This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
+Trademarks
+IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
+The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
+
+
+AS/400® | IBM® | Redpaper™
+DB2® | Power Systems™ | Redbooks (logo)® | System
+DRDA® | Redbooks® | i®
+
Figure 1-1 All-or-nothing access to the rows of a table
+
+Many businesses are trying to limit data access to a need-to-know basis. This security goal means that users should be given access only to the minimum set of data that is required to perform their job. Often, users with object-level access are given access to row and column values that are beyond what their business task requires because that object-level security provides an all-or-nothing solution. For example, object-level controls allow a manager to access data about all employees. Most security policies limit a manager to accessing data only for the employees that they manage.
+1.3.1 Existing row and column control
+Some IBM i clients have tried augmenting the all-or-nothing object-level security with SQL views (or logical files) and application logic, as shown in Figure 1-2. However, application-based logic is easy to bypass with all of the different data access interfaces that are provided by the IBM i operating system, such as Open Database Connectivity (ODBC) and System i Navigator.
+Using SQL views to limit access to a subset of the data in a table also has its own set of challenges. First, there is the complexity of managing all of the SQL view objects that are used for securing data access. Second, scaling a view-based security solution can be difficult as the amount of data grows and the number of users increases.
+Even if you are willing to live with these performance and management issues, a user with *ALLOBJ access still can directly access all of the data in the underlying DB2 table and easily bypass the security controls that are built into an SQL view.
+
Figure 1-2 Existing row and column controls
+
+1.3.2 New controls: Row and Column Access Control
+Based on the challenges that are associated with the existing technology available for controlling row and column access at a more granular level, IBM delivered new security support in the IBM i 7.2 release; this support is known as Row and Column Access Control (RCAC).
+The new DB2 RCAC support provides a method for controlling data access across all interfaces and all types of users with a data-centric solution. Moving security processing to the database layer makes it easier to build controls that meet your compliance policies. The RCAC support provides an additional layer of security that complements object-level authorizations to limit data access to a need-to-know basis. Therefore, it is critical that you first have a sound object-level security implementation in place.
+
+Chapter 2.
+Roles and separation of duties
+One of the primary objectives of row and column access control (RCAC) is to create data security policies that control and govern user access to data and limit the data access of DB2 designers and administrators to only the minimum that is required to do their jobs.
+To accomplish these tasks, the designers of RCAC devised a set of functional roles that, as a group, effectively implement the data access requirements while also limiting the span of control of each role, so that each role is given only the authorities that are needed to perform its specific set of tasks.
+This chapter describes the concepts of roles and separation of duties on DB2 for i and covers the following topics:
+- Roles
+- Separation of duties
+2.1 Roles
+Traditionally, data access roles are defined in a binary way, where access to the data is either permitted or not permitted. A full access capability can also be instantiated by the *ALLOBJ special authority, either explicitly or implicitly, for the security officer. If you hold the role of security officer, or have the *ALLOBJ special authority, you have access to all the data, with no exceptions. Unfortunately, this might not meet the organization's requirements for limiting access to data or for separation of duties.
+To assist with defining roles and the separation of duties with appropriate authority, IBM i provides function usage IDs . A function usage ID implements granular security controls rather than granting users powerful special authorities, such as all object, job control, or service.
+Roles are divided among the following DB2 functions and their corresponding function usage IDs:
+- DDM and IBM DRDA® application server access: QIBM_DB_DDMDRDA
+- Toolbox application server access: QIBM_DB_ZDA
+- Database Administrator function: QIBM_DB_SQLADM
+- Database Information function: QIBM_DB_SYSMON
+- Security Administrator function: QIBM_DB_SECADM
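+The function usage IDs listed above can also be inspected from SQL. The following query is only a hedged sketch (it assumes the QSYS2.FUNCTION_INFO catalog view that IBM i provides; that view is not referenced in this book):
+
+-- Illustration only: list the QIBM_DB_* function usage IDs and their settings
+-- (assumes the QSYS2.FUNCTION_INFO catalog view available on IBM i)
+SELECT *
+  FROM QSYS2.FUNCTION_INFO
+  WHERE FUNCTION_ID LIKE 'QIBM_DB_%';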
+2.1.1 DDM and DRDA application server access: QIBM_DB_DDMDRDA
+The QIBM_DB_DDMDRDA function usage ID restricts access to the DDM and DRDA application server (QRWTSRVR). This function usage ID provides an easy alternative (rather than writing an exit program) to control access to DDM and DRDA from the server side. The function usage IDs ship with the default authority of *ALLOWED. The security officer can easily deny access to specific users or groups.
+This is an alternative to a User Exit Program approach. No coding is required, it is easy to change, and it is auditable.
+2.1.2 Toolbox application server access: QIBM_DB_ZDA
+The QIBM_DB_ZDA function usage ID restricts access to the optimized server that handles DB2 requests from clients (QZDASOINIT and QZDASSINIT). Server access is used by the ODBC, OLE DB, and .NET providers that ship with IBM i Access for Windows and JDBC Toolbox, Run SQL scripts, and other parts of System i Navigator and Navigator for i Web console.
+This function usage ID provides an easy alternative (rather than writing an exit program) to control access to these functions from the server side. The function usage IDs ship with the default authority of *ALLOWED. The security officer can easily deny access to specific users or groups.
+This is an alternative to a User Exit Program approach. No coding is required, it is easy to change, and it is auditable.
+2.1.3 Database Administrator function: QIBM_DB_SQLADM
+The Database Administrator function (QIBM_DB_SQLADM) is needed whenever a user is analyzing and viewing SQL performance data. Some of the more common database administrator functions include displaying statements from the SQL Plan Cache, analyzing SQL Performance Monitors and SQL Plan Cache Snapshots, and displaying the SQL details of a job other than your own.
+The Database Administrator function provides an alternative to granting *JOBCTL, but simply having the Database Administrator authorization does not carry with it all the needed object authorities for every administration task. The default behavior is to deny authorization.
+To perform database administrator tasks that are not related to performance analysis, you must refer to the details of the task to determine its specific authorization requirements. For example, to allow a database administrator to reorganize a table, the DBA must have additional object authorities to the table that are not covered by QIBM_DB_SQLADM.
+Granting QIBM_DB_SQLADM function usage
+Only the security administrator (*SECADM) is allowed to change the list of users that can perform Database Administration functions.
+2.1.4 Database Information function: QIBM_DB_SYSMON
+The Database Information function (QIBM_DB_SYSMON) provides much less authority than the Database Administrator function. Its primary use is to allow a user to examine high-level database properties.
+For example, a user that does not have *JOBCTL or QIBM_DB_SQLADM can still view the SQL Plan Cache properties if granted authority to QIBM_DB_SYSMON. Without granting this authority, the default behavior is to deny authorization.
+Granting QIBM_DB_SYSMON function usage
+Only the security administrator (*SECADM) is allowed to change the list of users that can perform Database Information functions.
+2.1.5 Security Administrator function: QIBM_DB_SECADM
+The Security Administrator function (QIBM_DB_SECADM) allows a user to grant authorities, revoke authorities, change ownership, or change the primary group without giving that user access to the object or, in the case of a database table, to the data that is in the table, and without allowing other operations on the table.
+Only those users with the QIBM_DB_SECADM function can administer and manage RCAC rules. RCAC can be used to prevent even users with *ALLOBJ authority from freely accessing all the data in a protected database. These users are excluded from data access unless they are specifically authorized by RCAC. Without granting this authority, the default behavior is to deny authorization.
+Granting QIBM_DB_SECADM function usage
+Only QSECOFR or a user with *SECADM special authority can grant the QIBM_DB_SECADM function usage to a user or group.
+2.1.6 Change Function Usage CL command
+The following CL commands can be used to work with, display, or change function usage IDs:
+- Work Function Usage (WRKFCNUSG)
+- Change Function Usage (CHGFCNUSG)
+- Display Function Usage (DSPFCNUSG)
+For example, the following CHGFCNUSG command shows granting authorization to user HBEDOYA to administer and manage RCAC rules:
+CHGFCNUSG FCNID(QIBM_DB_SECADM) USER(HBEDOYA) USAGE(*ALLOWED)
+2.1.7 Verifying function usage IDs for RCAC with the FUNCTION_USAGE view
+The FUNCTION_USAGE view contains function usage configuration details. Table 2-1 describes the columns in the FUNCTION_USAGE view.
+
Table 2-1 FUNCTION_USAGE view
+
+
+
Table 2-1 FUNCTION_USAGE view
+Column name | Data type | Description
+FUNCTION_ID | VARCHAR(30) | ID of the function.
+USER_NAME | VARCHAR(10) | Name of the user profile that has a usage setting for this function.
+USAGE | VARCHAR(7) | Usage setting: ALLOWED: The user profile is allowed to use the function. DENIED: The user profile is not allowed to use the function.
+USER_TYPE | VARCHAR(5) | Type of user profile: USER: The user profile is a user. GROUP: The user profile is a group.
+
+To discover who has authorization to define and manage RCAC, you can use the query that is shown in Example 2-1.
+
Example 2-1 Query to determine who has authority to define and manage RCAC
+
+
+
Example 2-1 Query to determine who has authority to define and manage RCAC
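+The query body of Example 2-1 is not captured in this text extraction. A minimal sketch of such a query, built only from the QSYS2.FUNCTION_USAGE columns described in Table 2-1 (and not necessarily identical to the query in the book), could be:
+
+SELECT FUNCTION_ID, USER_NAME, USAGE, USER_TYPE
+  FROM QSYS2.FUNCTION_USAGE
+  WHERE FUNCTION_ID = 'QIBM_DB_SECADM'
+  ORDER BY USER_NAME;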
+2.2 Separation of duties
+Separation of duties helps businesses comply with industry regulations or organizational requirements and simplifies the management of authorities. Separation of duties is commonly used to prevent fraudulent activities or errors by a single person. It provides the ability for administrative functions to be divided across individuals without overlapping responsibilities, so that one user does not possess unlimited authority, such as with the *ALLOBJ authority.
+For example, assume that a business has assigned the duty to manage security on IBM i to Theresa. Before IBM i 7.2, to grant privileges, Theresa had to hold the same privileges that she was granting to others. Therefore, to grant *USE privileges to the PAYROLL table, Theresa had to have *OBJMGT and *USE authority (or a higher level of authority, such as *ALLOBJ). This requirement allowed Theresa to access the data in the PAYROLL table even though her job description was only to manage its security.
+In IBM i 7.2, the QIBM_DB_SECADM function usage allows a user to grant authorities, revoke authorities, change ownership, or change the primary group without giving that user access to the object or, in the case of a database table, to the data that is in the table, and without allowing other operations on the table.
+QIBM_DB_SECADM function usage can be granted only by a user with *SECADM special authority and can be given to a user or a group.
+QIBM_DB_SECADM also is responsible for administering RCAC, which restricts which rows a user is allowed to access in a table and whether a user is allowed to see information in certain columns of a table.
+A preferred practice is that the RCAC administrator has the QIBM_DB_SECADM function usage ID, but absolutely no other data privileges. The result is that the RCAC administrator can deploy and maintain the RCAC constructs, but cannot grant themselves unauthorized access to data itself.
+Table 2-2 shows a comparison of the different function usage IDs and *JOBCTL authority to the different CL commands and DB2 for i tools.
+
Table 2-2 Comparison of the different function usage IDs and *JOBCTL authority
+
+
+
Table 2-2 Comparison of the different function usage IDs and *JOBCTL authority
+User action | *JOBCTL | QIBM_DB_SECADM | QIBM_DB_SQLADM | QIBM_DB_SYSMON | No Authority
+SET CURRENT DEGREE (SQL statement)XX
+CHGQRYA command targeting a different user's jobXX
+STRDBMON or ENDDBMON commands targeting a different user's jobXX
+STRDBMON or ENDDBMON commands targeting a job that matches the current userXX XX
+QUSRJOBI() API format 900 or System i Navigator's SQL Details for JobXXX
+Visual Explain within Run SQL scriptsXXX X
+Visual Explain outside of Run SQL scriptsXX
+ANALYZE PLAN CACHE procedureXX
+DUMP PLAN CACHE procedureXX
+MODIFY PLAN CACHE procedureXX
+MODIFY PLAN CACHE PROPERTIES procedure (currently does not check authority)
+XX
+CHANGE PLAN CACHE SIZE procedure (currently does not check authority)XX
+
+
+
+User action | *JOBCTL | QIBM_DB_SECADM | QIBM_DB_SQLADM | QIBM_DB_SYSMON | No Authority
+START PLAN CACHE EVENT MONITOR procedureXX
+END PLAN CACHE EVENT MONITOR procedureXX
+END ALL PLAN CACHE EVENT MONITORS procedureXX
+Work with RCAC row permissions (Create, modify, or delete)X
+Work with RCAC column masks (Create, modify, or delete)X
+Change Object Owner ( CHGOBJOWN ) CL commandX
+Change Object Primary Group ( CHGOBJPGP ) CL commandX
+Grant Object Authority ( GRTOBJAUT ) CL commandX
+Revoke Object Authority ( RVKOBJAUT ) CL commandX
+Edit Object Authority ( EDTOBJAUT ) CL commandX
+Display Object Authority ( DSPOBJAUT ) CL commandX
+Work with Objects ( WRKOBJ ) CL commandX
+Work with Libraries ( WRKLIB ) CL commandX
+Add Authorization List Entry ( ADDAUTLE ) CL commandX
+Change Authorization List Entry ( CHGAUTLE ) CL commandX
+Remove Authorization List Entry ( RMVAUTLE ) CL commandX
+Retrieve Authorization List Entry ( RTVAUTLE ) CL commandX
+Display Authorization List ( DSPAUTL ) CL commandX
+Display Authorization List Objects ( DSPAUTLOBJ ) CL commandX
+Edit Authorization List ( EDTAUTL ) CL commandX
+Work with Authorization Lists ( WRKAUTL ) CL commandX
+
+
+Chapter 3.
+3
+Row and Column Access Control
+This chapter describes what Row and Column Access Control (RCAC) is, its components, and then illustrates RCAC with a simple example.
+The following topics are covered in this chapter:
+- Explanation of RCAC and the concept of access control
+- Special registers and built-in global variables
+- VERIFY_GROUP_FOR_USER function
+- Establishing and controlling accessibility by using the RCAC rule text
+- SELECT, INSERT, and UPDATE behavior with RCAC
+- Human resources example
+3.1 Explanation of RCAC and the concept of access control
+RCAC limits data access to those users who have a business "need to know". RCAC makes it easy to set up a rich and robust security policy that is based on roles and responsibilities. RCAC functionality is made available through the optional, no charge feature called "IBM Advanced Data Security for i", also known as option 47 of IBM i 7.2.
+In DB2 for i, RCAC is implemented using two different approaches that address the shortcomings of traditional control methods and mechanisms:
+- Row permissions
+- Column masks
+Another benefit of RCAC is that no database user is automatically exempt from the control. Users with *ALLOBJ authority can no longer freely access all of the data in the database unless they have the appropriate permission to do so. The ability to manage row permissions and column masks rests with the database security administrator. The RCAC definitions, enablement, and activation are controlled by SQL statements.
+Row permissions and column masks require virtually no application changes. RCAC is based on specific rules that are transparent to existing applications and SQL interfaces. Enforcement of your security policy does not depend on how applications or tools access the data.
+RCAC also facilitates multi-tenancy, which means that several independent customers or business units can share a single database table without being aware of one another. The RCAC row permission ensures each user sees only the rows they are entitled to view because the enforcement is handled by DB2 and not the application logic.
+Label-based access control (LBAC): RCAC and LBAC are not the same thing. LBAC is a security model that is primarily intended for government applications. LBAC requires that data and users be classified with a fixed set of rules that are implemented. RCAC is a general-purpose security model that is primarily intended for commercial customers. You can use RCAC to create your own security rules, which in turn allows for more flexibility.
+3.1.1 Row permission and column mask definitions
+The following sections define row permission and column masks.
+Row permission
+A row permission is a database object that manifests a row access control rule for a specific table. It is essentially a search condition that describes which rows you can access. For example, a manager can see only the rows that represent his or her employees.
+
The SQL CREATE PERMISSION statement that is shown in Figure 3-1 is used to define and initially enable or disable the row access rules.
+
Figure 3-1 CREATE PERMISSION SQL statement
+
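+Figure 3-1 itself is not reproduced in this text. The following is only a rough sketch of the statement form it documents; the table HR_SCHEMA.EMPLOYEES, the MANAGER column, and the group name 'MGR' are invented for illustration:
+
+CREATE PERMISSION HR_SCHEMA.EMP_ROW_ACCESS ON HR_SCHEMA.EMPLOYEES
+   FOR ROWS WHERE VERIFY_GROUP_FOR_USER(SESSION_USER, 'MGR') = 1
+                  AND EMPLOYEES.MANAGER = SESSION_USER
+   ENFORCED FOR ALL ACCESS
+   ENABLE;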
+Column mask
+A column mask is a database object that manifests a column value access control rule for a specific column in a specific table. It uses a CASE expression that describes what you see when you access the column. For example, a teller can see only the last four digits of a tax identification number.
+Column masks replace the need to create and use views to implement access control. The SQL CREATE MASK statement that is shown in Figure 3-2 is used to define and initially enable or disable the column value access rules.
+
Figure 3-2 CREATE MASK SQL statement
+
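+Figure 3-2 is likewise not reproduced here. A hedged sketch of a column mask for the teller example (the table, column, tax identification number format, and group name are assumptions, not taken from the book) could be:
+
+CREATE MASK HR_SCHEMA.TAX_ID_MASK ON HR_SCHEMA.EMPLOYEES
+   FOR COLUMN TAX_ID RETURN
+      CASE
+         WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'PAYROLL') = 1
+            THEN TAX_ID
+         -- assumes an 11-character value such as 'XXX-XX-1234'
+         ELSE 'XXX-XX-' CONCAT SUBSTR(TAX_ID, 8, 4)
+      END
+   ENABLE;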
+3.1.2 Enabling and activating RCAC
+You can enable, disable, or regenerate row permissions and column masks by using the SQL ALTER PERMISSION statement and the SQL ALTER MASK statement, as shown in Figure 3-3 on page 17.
+Enabling and disabling effectively turns on or off the logic that is contained in the row permission or column mask. Regenerating causes the row permission or column mask to be regenerated. The row permission definition in the catalog is used and existing dependencies and authorizations, if any, are retained. The row permission definition is reevaluated as though the row permission were being created. Any user-defined functions (UDFs) that are referenced in the row permission must be resolved to the same secure UDFs as were resolved during the original row permission or column mask creation. The regenerate option can be used to ensure that the RCAC logic is intact and still valid before any user attempts to access the table.
+Note: An exclusive lock is required on the table object to perform the alter operation. All open cursors must be closed.
+
Figure 3-3 ALTER PERMISSION and ALTER MASK SQL statements
+
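+For instance, using the hypothetical objects from the sketches above, enabling, disabling, and regenerating take the following form:
+
+ALTER PERMISSION HR_SCHEMA.EMP_ROW_ACCESS DISABLE;
+ALTER MASK HR_SCHEMA.TAX_ID_MASK ENABLE;
+-- revalidate the rule logic without changing its definition
+ALTER PERMISSION HR_SCHEMA.EMP_ROW_ACCESS REGENERATE;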
+You can activate and deactivate RCAC for new or existing tables by using the SQL ALTER TABLE statement (Figure 3-4). The ACTIVATE or DEACTIVATE clause must be the option that is specified in the statement. No other alterations are permitted at the same time. The activating and deactivating effectively turns on or off all RCAC processing for the table. Only enabled row permissions and column masks take effect when activating RCAC.
+Note: An exclusive lock is required on the table object to perform the alter operation. All open cursors must be closed.
+
Figure 3-4 ALTER TABLE SQL statement
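+For example, the following statements (a sketch that uses the EMPLOYEES table from 3.6) turn row access control on and turn column access control off for a table:
+ALTER TABLE HR_SCHEMA.EMPLOYEES ACTIVATE ROW ACCESS CONTROL ;
+ALTER TABLE HR_SCHEMA.EMPLOYEES DEACTIVATE COLUMN ACCESS CONTROL ;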
+
+When row access control is activated on a table, a default permission is established for that table. The name of this permission is QIBM_DEFAULT_<table-name>_<schema-name> (for example, QIBM_DEFAULT_EMPLOYEE_HR_SCHEMA for the EMPLOYEES table in the HR_SCHEMA schema). This default permission contains a simple piece of logic (0=1), which is never true. The default permission effectively denies access to every user unless a permission is defined that allows access explicitly. If row access control is activated on a table and no other permission is defined, no one has permission to any rows, and all queries against the table produce an empty set.
+It is possible to define, create, and enable multiple permissions on a table. Logically, all of the permissions are ORed together to form a comprehensive test of the user's ability to access the data. A column can have only one mask that is defined over it. From an implementation standpoint, it does not matter if you create the column masks first or the row permissions first.
+Note: If a user does not have permission to access the row, the column mask logic is not invoked.
+3.2 Special registers and built-in global variables
+This section describes how you can use special registers and built-in global variables to implement RCAC.
+3.2.1 Special registers
+A special register is a storage area that is defined for an application process by DB2 and is used to store information that can be referenced in SQL statements. A reference to a special register is a reference to a value that is provided by the current server.
+IBM DB2 for i supports four different special registers that can be used to identify what user profiles are relevant to determining object authorities in the current connection to the server. SQL uses the term runtime authorization ID , which corresponds to a user profile on DB2 for i. Here are the four special registers:
+GLYPH USER is the runtime user profile that determines the object authorities for the current connection to the server. It has a data type of VARCHAR(18). This value can be changed by the SQL statement SET SESSION AUTHORIZATION .
+GLYPH SESSION_USER is the same as the USER register, except that it has a data type of VARCHAR(128).
+GLYPH CURRENT USER was added in IBM i 7.2 and is similar to the USER register, but it has one important difference in that it also reports adopted authority. High-level language programs and SQL routines such as functions, procedures, and triggers can optionally be created to run using either the caller's or the owner's user profile to determine data authorities. For example, an SQL procedure can be created to run under the owner's authority by specifying SET OPTION USRPRF=*OWNER . This special register can also be referenced as CURRENT_USER. It has a data type of VARCHAR(128).
+GLYPH SYSTEM_USER is the user profile that initiates the connection to the server. It is not used by RCAC, but is included here for completeness. Many jobs, including the QZDASOINIT prestarted jobs, initially connect to the server with a default user profile and then change to use some other user profile. SYSTEM_USER reports this value, typically QUSER for a QZDASOINIT job. It has a data type of VARCHAR(128).
+In addition to these four special registers, any of the DB2 special registers can be referenced as part of the rule text.
+Table 3-1 summarizes these special registers and their values.
+
Table 3-1 Special registers and their corresponding values
+
+
+
Table 3-1 Special registers and their corresponding values
+Special register | Corresponding value
+USER or SESSION_USER | The effective user of the thread, excluding adopted authority.
+CURRENT_USER | The effective user of the thread, including adopted authority. When no adopted authority is present, this has the same value as USER.
+SYSTEM_USER | The authorization ID that initiated the connection.
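+You can display the current values of these special registers with a simple query, for example:
+SELECT USER, SESSION_USER, CURRENT_USER, SYSTEM_USER
+  FROM SYSIBM.SYSDUMMY1 ;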
+
+Figure 3-5 shows the difference in the special register values when an adopted authority is used:
+GLYPH A user connects to the server using the user profile ALICE.
+GLYPH USER and CURRENT USER initially have the same value of ALICE.
+GLYPH ALICE calls an SQL procedure that is named proc1, which is owned by user profile JOE and was created to adopt JOE's authority when it is called.
+GLYPH While the procedure is running, the special register USER still contains the value of ALICE because it excludes any adopted authority. The special register CURRENT USER contains the value of JOE because it includes any adopted authority.
+GLYPH When proc1 ends, the session reverts to its original state with both USER and CURRENT USER having the value of ALICE.
+
Figure 3-5 Special registers and adopted authority
+
+3.2.2 Built-in global variables
+Built-in global variables are provided with the database manager and are used in SQL statements to retrieve scalar values that are associated with the variables.
+IBM DB2 for i supports nine different built-in global variables that are read only and maintained by the system. These global variables can be used to identify attributes of the database connection and can be used as part of the RCAC logic.
+Table 3-2 lists the nine built-in global variables.
+
Table 3-2 Built-in global variables
+
+
+
Table 3-2 Built-in global variables
+Global variable | Type | Description
+CLIENT_HOST | VARCHAR(255) | Host name of the current client as returned by the system
+CLIENT_IPADDR | VARCHAR(128) | IP address of the current client as returned by the system
+CLIENT_PORT | INTEGER | Port used by the current client to communicate with the server
+PACKAGE_NAME | VARCHAR(128) | Name of the currently running package
+PACKAGE_SCHEMA | VARCHAR(128) | Schema name of the currently running package
+PACKAGE_VERSION | VARCHAR(64) | Version identifier of the currently running package
+ROUTINE_SCHEMA | VARCHAR(128) | Schema name of the currently running routine
+ROUTINE_SPECIFIC_NAME | VARCHAR(128) | Name of the currently running routine
+ROUTINE_TYPE | CHAR(1) | Type of the currently running routine
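+For example, the following query (a simple sketch; schema qualification is omitted and resolved through the SQL path) displays the connection-related variables for the current job. The same names can be referenced directly in row permission or column mask rule text:
+SELECT CLIENT_HOST, CLIENT_IPADDR, CLIENT_PORT
+  FROM SYSIBM.SYSDUMMY1 ;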
+
+3.3 VERIFY_GROUP_FOR_USER function
+The VERIFY_GROUP_FOR_USER function was added in IBM i 7.2. Although it is primarily intended for use with RCAC permissions and masks, it can be used in other SQL statements. The first parameter must be one of these three special registers: SESSION_USER, USER, or CURRENT_USER. The second and subsequent parameters are a list of user or group profiles. Each of these values must be 1 - 10 characters in length. These values are not validated for their existence, which means that you can specify the names of user profiles that do not exist without receiving any kind of error.
+If a special register value is in the list of user profiles or it is a member of a group profile included in the list, the function returns a long integer value of 1. Otherwise, it returns a value of 0. It never returns the null value.
+Here is an example of using the VERIFY_GROUP_FOR_USER function:
+1. There are user profiles for MGR, JANE, JUDY, and TONY.
+2. The user profile JANE specifies a group profile of MGR.
+3. If a user is connected to the server using user profile JANE, all of the following function invocations return a value of 1:
+VERIFY_GROUP_FOR_USER (CURRENT_USER, 'MGR')
+VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JANE', 'MGR')
+VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JANE', 'MGR', 'STEVE')
+The following function invocation returns a value of 0:
+VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JUDY', 'TONY')
+3.4 Establishing and controlling accessibility by using the RCAC rule text
+When defining a row permission or column mask, the "magic" of establishing and controlling accessibility comes from the rule text . The rule text represents the search criteria and logic that is implemented by the database engine.
+In the case of a row permission, the rule text is the "test" of whether the user can access the row. If the test result is true, the row can be accessed. If the test result is false, the row essentially does not exist for the user. From a set-at-a-time perspective, the permission defines which rows can be part of the query result set, and which rows cannot.
+In the case of a column mask, the rule text is both the test of whether the user can see the actual column value and the masking logic that is applied when the user cannot access the actual column value.
+For a simple example of implementing row permissions and column masks, see 3.6, "Human resources example" on page 22.
+In general, almost any set-based, relational logic is valid. For the row permission, the search condition follows the same rules that are used by the search condition in a WHERE clause.
+For the column mask, the logic follows the same rules as the CASE expression. The result data type, length, null attribute, and CCSID of the CASE expression must be compatible with the data type of the column. If the column does not allow the null value, the result of the CASE expression cannot be the NULL value. The application or interface making the data access request is expecting that all of the column attributes and values are consistent with the original definition, regardless of any masking.
+For more information about what is permitted, see the "Database programming" topic of the IBM i 7.2 Knowledge Center, found at:
+http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_72/rzahg/rzahgdbp.htm?lang=en
+One of the first tasks in either the row permission or the column mask logic is to determine who the user is and whether they have access to the data. The special registers, the built-in global variables, and the VERIFY_GROUP_FOR_USER function provide elegant ways to establish the identity and attributes of the user. After the user's identity is established, it is a simple matter of allowing or disallowing access by using true or false testing. The examples that are included in this paper demonstrate some of the more common and obvious techniques.
+More sophisticated methods can employ existential, day of year / time of day, and relational comparisons with set operations. For example, you can use a date master or date dimension table to determine whether the current date is a normal business day. If the current date is a valid business day, then access is allowed. If the current date is not a business day (for example a weekend day or holiday), access is denied. This test can be accomplished by performing a lookup using a subquery, such as the one that is shown in Example 3-1.
+
Example 3-1 Subquery that is used as part of the rule
+
+
+
Example 3-1 Subquery that is used as part of the rule
+CURRENT_DATE IN (SELECT D.DATE_KEY FROM DATE_MASTER D WHERE D.BUSINESS_DAY = 'Y')
+
+Given that joins and subqueries can be used to perform set-based operations against existing data that is housed in other objects, almost any relational test can be constructed. If the data in the objects is manipulated over time, the RCAC test logic (and user query results) can be changed without modifying the actual row permission or column mask. This includes moving a user from one group to another or changing a column value that is used to allow or disallow access. For example, if Saturday is now a valid business day, only the BUSINESS_DAY value in the DATE_MASTER must be updated, not the permission logic. This technique can potentially avoid downtime because of the exclusive lock that is needed on the table when adding or changing RCAC definitions.
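+For example, a change such as the following sketch (which assumes the DATE_MASTER layout from Example 3-1) makes Saturdays valid business days without touching the permission itself:
+-- Sketch: mark Saturdays (DAYOFWEEK_ISO = 6) as business days.
+UPDATE DATE_MASTER
+   SET BUSINESS_DAY = 'Y'
+ WHERE DAYOFWEEK_ISO ( DATE_KEY ) = 6 ;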
+3.5 SELECT, INSERT, and UPDATE behavior with RCAC
+RCAC provides a database-centric approach to determining which rows can be accessed and what column values can be seen by a specific user. Given that the control is handled by DB2 internally, every data manipulation statement is under the influence of RCAC, with no exceptions. When accessing the table, the SELECT statements, searched UPDATE statements, and searched DELETE statements implicitly and transparently contain the row permission and the column mask rule text. This means that the data set can be logically restricted and reduced on a user by user basis.
+Furthermore, DB2 prevents an INSERT statement from inserting a row or an UPDATE statement from modifying a row such that the current user cannot be permitted to access it. You cannot create a situation in which the data you inserted or changed is no longer accessible to you.
+For more information and considerations about data movement in an RCAC environment, see Chapter 6, "Additional considerations" on page 85.
+Note: DB2 does not provide any indication back to the user that the data set requested was restricted or reduced by RCAC. This is by design, as it helps minimize any changes to the applications accessing the data.
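+For example, with the permission from Example 3-7 in effect and enforced for all access, an INSERT such as the following sketch fails for a member of the EMP group whose own USER_ID is not JDOE, because the new row would not be accessible to that user. The values are hypothetical, and the remaining columns are assumed to allow defaults:
+INSERT INTO HR_SCHEMA.EMPLOYEES ( EMPLOYEE_ID, LAST_NAME, USER_ID, MANAGER_OF_EMPLOYEE )
+   VALUES ( 999, 'DOE', 'JDOE', 'TQSPENSER' ) ;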
+3.6 Human resources example
+This section illustrates with a simple example the usage of RCAC on a typical Human Resources application (schema). In this sample Human Resources schema, there is an important table that is called EMPLOYEES that contains all the information that is related to the employees of the company. Among the information that normally is stored in the EMPLOYEES table, there is some sensitive information that must be hidden from certain users:
+GLYPH Tax_Id information
+GLYPH YEAR of the birth date of the employee (hiding the age of the employee)
+In this example, there are four different types of users:
+GLYPH Employees
+GLYPH Managers
+GLYPH Human Resources Manager
+GLYPH Consultant/IT Database Engineer (In this example, this person is an external consultant that is not an employee of the company.)
+The following sections describe, step by step, what needs to be done to implement RCAC in this environment.
+3.6.1 Assigning the QIBM_DB_SECADM function ID to the consultants
+The consultant must have authority to implement RCAC, so you must use one of the function IDs that are provided in DB2 for i (see 2.1.5, "Security Administrator function: QIBM_DB_SECADM" on page 9). Complete the following steps:
+1. Run the Change Functional Usage ( CHGFCNUSG ) CL commands that are shown in Example 3-2. These commands must be run by someone who has *SECOFR authority.
+Example 3-2 Function ID required to implement RCAC
+CHGFCNUSG FCNID(QIBM_DB_SECADM) USER(HBEDOYA) USAGE(*ALLOWED)
+CHGFCNUSG FCNID(QIBM_DB_SECADM) USER(MCAIN) USAGE(*ALLOWED)
+2. To discover which user profiles have authorization to implement RCAC, run the SQL statement that is shown in Example 3-3.
+Example 3-3 Verifying what user profiles have authorization to implement RCAC
+SELECT function_id, user_name, usage, user_type FROM qsys2.function_usage WHERE function_id ='QIBM_DB_SECADM' ORDER BY user_name;
+3. The result of the SQL statement is shown in Figure 3-6. In this example, either MCAIN or HBEDOYA can implement RCAC in the Human Resources database.
+Figure 3-6 Result of the function ID query
+3.6.2 Creating group profiles for the users and their roles
+Assuming that all the employees have a valid user profile, the next step is to create group profiles to group the employees. Complete the following steps:
+1. In this example, there are three group profiles:
+-HR (Human Resource personnel)
+-MGR (Managers)
+-EMP (Employees)
+These group profiles are created as user profiles with no password. Example 3-4 shows the Create User Profile ( CRTUSRPRF ) CL commands that you use to create them.
+Example 3-4 Creating group profiles
+CRTUSRPRF USRPRF(EMP) PASSWORD(*NONE) TEXT('Employees Group')
+CRTUSRPRF USRPRF(MGR) PASSWORD(*NONE) TEXT('Managers Group')
+CRTUSRPRF USRPRF(HR) PASSWORD(*NONE) TEXT('Human Resources Group')
+2. You must now assign users to a group profile. Employees go into the EMP group profile, managers go into the MGR group profile, and Human Resources employees go into the HR group profile. For simplicity, this example selects one employee (DSSMITH), one manager (TQSPENSER), and one HR analyst (VGLUCCHESS).
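+For example, the following Change User Profile ( CHGFCNUSG-style ) CL commands are a sketch of how the three selected users might be assigned to their group profiles by using CHGUSRPRF:
+CHGUSRPRF USRPRF(DSSMITH) GRPPRF(EMP)
+CHGUSRPRF USRPRF(TQSPENSER) GRPPRF(MGR)
+CHGUSRPRF USRPRF(VGLUCCHESS) GRPPRF(HR)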
+Note: Neither of the consultants (MCAIN and HBEDOYA) belongs to any group profile.
+3.6.3 Demonstrating data access without RCAC
+Before implementing RCAC, run some simple SQL statements to demonstrate data access without RCAC. Complete the following steps:
+1. The first SQL statement, which is shown in Example 3-5, basically counts the total number of rows in the EMPLOYEES table.
+Example 3-5 Counting the number of employees
+SELECT COUNT(*) as ROW_COUNT FROM HR_SCHEMA.EMPLOYEES;
+The result of this query is shown in Figure 3-7, which is the total number of employees of the company.
+
Figure 3-7 Number of employees
+
+2. Run a second SQL statement (shown in Example 3-6) that lists the employees. If you have read access to the table, you see all the rows no matter who you are.
+Example 3-6 Displaying the information of the Employees
+SELECT EMPLOYEE_ID, LAST_NAME, JOB_DESCRIPTION, DATE_OF_BIRTH, TAX_ID, USER_ID, MANAGER_OF_EMPLOYEE FROM HR_SCHEMA.EMPLOYEES
+The result of this query is shown in Figure 3-8.
+Figure 3-8 List of employees without RCAC enabled
+3.6.4 Defining and creating row permissions
+Implement RCAC on the EMPLOYEES table by completing the following steps:
+1. Start by defining a row permission. In this example, the rules to enforce include the following ones:
+-Human Resources employees can see all the rows.
+-Managers can see only information for the employees that they manage.
+-Employees can see only their own information.
+-Consultants are not allowed to see any rows in the table.
+To implement this row permission, run the SQL statement that is shown in Example 3-7.
+Example 3-7 Creating a permission for the EMPLOYEE table
+CREATE PERMISSION HR_SCHEMA.PERMISSION1_ON_EMPLOYEES
+   ON HR_SCHEMA.EMPLOYEES AS EMPLOYEES
+   FOR ROWS WHERE
+        ( VERIFY_GROUP_FOR_USER ( SESSION_USER, 'HR' ) = 1 )
+     OR ( VERIFY_GROUP_FOR_USER ( SESSION_USER, 'MGR' ) = 1
+          AND ( EMPLOYEES.MANAGER_OF_EMPLOYEE = SESSION_USER
+                OR EMPLOYEES.USER_ID = SESSION_USER ) )
+     OR ( VERIFY_GROUP_FOR_USER ( SESSION_USER, 'EMP' ) = 1
+          AND EMPLOYEES.USER_ID = SESSION_USER )
+   ENFORCED FOR ALL ACCESS
+   ENABLE ;
+2. Look at the definition of the table and see the permissions, as shown in Figure 3-9. QIBM_DEFAULT_EMPLOYEE_HR_SCHEMA is the default permission, as described in 3.1.2, "Enabling and activating RCAC" on page 16.
+
Figure 3-9 Row permissions that are shown in System i Navigator
+
+3.6.5 Defining and creating column masks
+Define the different masks for the columns that are sensitive by completing the following steps:
+1. Start with the DATE_OF_BIRTH column. In this example, the rules to enforce include the following ones:
+-Human Resources can see the entire date of birth of the employees.
+-Employees can see only their own date of birth.
+-Managers can see the date of birth of their employees masked with YEAR being 9999.
+To implement this column mask, run the SQL statement that is shown in Example 3-8.
+
Example 3-8 Creation of a mask on the DATE_OF_BIRTH column
+
+
+
Example 3-8 Creation of a mask on the DATE_OF_BIRTH column
+CREATE MASK HR_SCHEMA.MASK_DATE_OF_BIRTH_ON_EMPLOYEES
+   ON HR_SCHEMA.EMPLOYEES AS EMPLOYEES
+   FOR COLUMN DATE_OF_BIRTH
+   RETURN CASE
+            WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER, 'HR', 'EMP' ) = 1
+              THEN EMPLOYEES.DATE_OF_BIRTH
+            WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER, 'MGR' ) = 1
+                 AND SESSION_USER = EMPLOYEES.USER_ID
+              THEN EMPLOYEES.DATE_OF_BIRTH
+            WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER, 'MGR' ) = 1
+                 AND SESSION_USER <> EMPLOYEES.USER_ID
+              THEN ( 9999 || '-' || MONTH ( EMPLOYEES.DATE_OF_BIRTH ) || '-' || DAY ( EMPLOYEES.DATE_OF_BIRTH ) )
+            ELSE NULL
+          END
+   ENABLE ;
+2. The other column to mask in this example is the TAX_ID information. In this example, the rules to enforce include the following ones:
+-Human Resources can see the unmasked TAX_ID of the employees.
+-Employees can see only their own unmasked TAX_ID.
+-Managers see a masked version of TAX_ID with the first five characters replaced with the X character (for example, XXX-XX-1234).
+-Any other person sees the entire TAX_ID as masked, for example, XXX-XX-XXXX.
+To implement this column mask, run the SQL statement that is shown in Example 3-9.
+Example 3-9 Creating a mask on the TAX_ID column
+CREATE MASK HR_SCHEMA.MASK_TAX_ID_ON_EMPLOYEES
+   ON HR_SCHEMA.EMPLOYEES AS EMPLOYEES
+   FOR COLUMN TAX_ID
+   RETURN CASE
+            WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER, 'HR' ) = 1
+              THEN EMPLOYEES.TAX_ID
+            WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER, 'MGR' ) = 1
+                 AND SESSION_USER = EMPLOYEES.USER_ID
+              THEN EMPLOYEES.TAX_ID
+            WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER, 'MGR' ) = 1
+                 AND SESSION_USER <> EMPLOYEES.USER_ID
+              THEN ( 'XXX-XX-' CONCAT QSYS2.SUBSTR ( EMPLOYEES.TAX_ID, 8, 4 ) )
+            WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER, 'EMP' ) = 1
+              THEN EMPLOYEES.TAX_ID
+            ELSE 'XXX-XX-XXXX'
+          END
+   ENABLE ;
+3. Figure 3-10 shows the masks that are created in the HR_SCHEMA.
+
Figure 3-10 Column masks shown in System i Navigator
+
+3.6.6 Activating RCAC
+Now that the row permission and the two column masks are created and enabled (the last clause in each script), you must activate RCAC on the table. To do so, complete the following steps:
+1. Run the SQL statements that are shown in Example 3-10.
+Example 3-10 Activating RCAC on the EMPLOYEES table
+/* Activate Row Access Control (permissions) */
+/* Activate Column Access Control (masks) */
+ALTER TABLE HR_SCHEMA.EMPLOYEES
+   ACTIVATE ROW ACCESS CONTROL
+   ACTIVATE COLUMN ACCESS CONTROL ;
+2. Look at the definition of the EMPLOYEES table, as shown in Figure 3-11. To do this, from the main navigation pane of System i Navigator, click Schemas → HR_SCHEMA → Tables, right-click the EMPLOYEES table, and click Definition .
+
Figure 3-11 Selecting the EMPLOYEES table from System i Navigator
+
+3. The EMPLOYEES table definition is displayed, as shown in Figure 3-12. Note that the Row access control and Column access control options are checked.
+
Figure 3-12 RCAC enabled on the EMPLOYEES table
+
+3.6.7 Demonstrating data access with RCAC
+You are now ready to start testing RCAC with the four different users. Complete the following steps:
+1. The first SQL statement that is shown in Example 3-11 illustrates the EMPLOYEE count. You know that there are 42 rows from the query that was run before RCAC was put in place (see 3.6.3, "Demonstrating data access without RCAC" on page 24).
+Example 3-11 EMPLOYEES count
+SELECT COUNT(*) as ROW_COUNT FROM HR_SCHEMA.EMPLOYEES;
+2. The result of the query for a user that belongs to the HR group profile is shown in Figure 3-13. This user can see all the 42 rows (employees).
+Figure 3-13 Count of EMPLOYEES by HR
+
+3. The result of the same query for a user who is logged on as TQSPENSER (Manager) is shown in Figure 3-14. TQSPENSER has five employees in his department and he can also see his own row, which is why the count is 6.
+
Figure 3-14 Count of EMPLOYEES by a manager
+
+4. The result of the same query that is run by an employee (DSSMITH) gives the result that is shown in Figure 3-15. Each employee can see only his or her own data (row).
+Figure 3-15 Count of EMPLOYEES by an employee
+
+5. The result of the same query that is run by the Consultant/DBE gives the result that is shown in Figure 3-16. The consultants/DBE can manage and implement RCAC, but they do not see any rows at all.
+
Figure 3-16 Count of EMPLOYEES by a consultant
+
+Does the result make sense? Yes, it does because RCAC is enabled.
+6. Run queries against the EMPLOYEES table. The query that is used in this example is the same query that was run in 3.6.3, "Demonstrating data access without RCAC" on page 24, and it is tested with the four different user profiles. It is shown in Example 3-12.
+Example 3-12 SELECT statement to test with the different users
+SELECT EMPLOYEE_ID, LAST_NAME, JOB_DESCRIPTION, DATE_OF_BIRTH, TAX_ID, USER_ID, MANAGER_OF_EMPLOYEE FROM HR_SCHEMA.EMPLOYEES
+7. Figure 3-17 shows the results of the query for a Human Resources (VGLUCCHESS) user profile. The user can see all the rows and all the columns.
+Figure 3-17 SQL statement result by Human Resources user profile
+8. Figure 3-18 shows the results of the same query for the Manager (TQSPENSER). Notice the masking of the DATE_OF_BIRTH and TAX_ID columns.
+Figure 3-18 SQL statement result by Manager profile
+9. Figure 3-19 shows the results of the same query for an employee (DSSMITH). The employee can see only his own data, with no masking at all.
+Figure 3-19 SQL statement result by an employee profile
+10.Figure 3-20 shows the results of the same query for the Consultant/DBE, who is not one of the company's employees.
+Figure 3-20 SQL statement result by Consultant/DBE profile
+3.6.8 Demonstrating data access with a view and RCAC
+This section covers data access with a view and RCAC. Complete the following steps:
+1. The EMPLOYEES table has a column that is called On_Leave_Flag (Figure 3-21 on page 33) indicating that the employee is on Leave of Absence. For this purpose, a view is created that lists only the employees that are on leave.
+Figure 3-21 Employees on leave
+2. Example 3-13 shows the definition of the view.
+Example 3-13 View of employees on leave
+CREATE VIEW HR_SCHEMA.EMPLOYEES_ON_LEAVE
+   ( EMPLOYEE_ID, FIRST_NAME, MIDDLE_INITIAL, LAST_NAME, WORK_DEPARTMENT,
+     PHONE_EXTENSION, JOB_DESCRIPTION, DATE_OF_BIRTH, TAX_ID, USER_ID,
+     MANAGER_OF_EMPLOYEE, ON_LEAVE_FLAG )
+   AS SELECT EMPLOYEE_ID, FIRST_NAME, MIDDLE_INITIAL, LAST_NAME, WORK_DEPARTMENT,
+             PHONE_EXTENSION, JOB_DESCRIPTION, DATE_OF_BIRTH, TAX_ID, USER_ID,
+             MANAGER_OF_EMPLOYEE, ON_LEAVE_FLAG
+        FROM HR_SCHEMA.EMPLOYEES
+       WHERE ON_LEAVE_FLAG = 'Y';
+3. Use the view to query the data and see who is on leave. The SQL statement that is used is shown in Example 3-14:
+Example 3-14 SQL statement for employees on leave
+SELECT EMPLOYEE_ID, LAST_NAME, JOB_DESCRIPTION, DATE_OF_BIRTH, TAX_ID, USER_ID, MANAGER_OF_EMPLOYEE FROM HR_SCHEMA.EMPLOYEES_ON_LEAVE;
+4. Start with the Human Resources person (VGLUCCHESS) and look at the result of the previous query. She sees the two employees that are on leave, and no masking is done on the DATE_OF_BIRTH and TAX_ID columns. The results of the query are shown in Figure 3-22.
+Figure 3-22 Employees on leave - Human Resources user
+5. Figure 3-23 shows what the Manager (TQSPENSER) gets when he runs the same query over the view. He sees only the employees that are on leave that are managed by him. In this example, it is one employee. The columns are masked, which confirms that RCAC is applied to the view as well.
+Figure 3-23 Employee on leave - Manager of Field Reps user
+6. Figure 3-24 shows what the employee (DSSMITH) gets when he runs the same query over the view. The employee gets an empty set, or only his own row if he is on leave.
+Figure 3-24 Employees on leave - employee user
+
+Chapter 4. Implementing Row and Column Access Control: Banking example
+This chapter illustrates the Row and Column Access Control (RCAC) concepts using a banking example. Appendix A, "Database definitions for the RCAC banking example" on page 121 provides a script that you can use to create all the database definitions or DDLs to re-create this RCAC example.
+The following topics are covered in this chapter:
+GLYPH Business requirements for the RCAC banking scenario
+GLYPH Description of the users' roles and responsibilities
+GLYPH Implementation of RCAC
+4.1 Business requirements for the RCAC banking scenario
+As part of a new internet banking project, the Bank decides to raise the level of data access control on the following three tables that are involved in the new customer-facing application:
+GLYPH CUSTOMERS
+GLYPH ACCOUNTS
+GLYPH TRANSACTIONS
+RCAC will be used to restrict access to the rows in these three tables by using permissions, and to restrict column values by using masks. The default position is that no user can access the rows in the tables. From there, specific bank employees are allowed access only to the rows for their job responsibilities. In addition, columns containing personal or sensitive data are masked appropriately. Bank customers are allowed access to only their rows and column values.
+In this example, it is assumed that the Bank employees have access to the tables when working on the premises only. Employee access to data is provided by programs and tools using standard DB2 interfaces, such as embedded SQL, ODBC, JDBC, and CLI. The database connection authentication for these interfaces uses the employee's personal and unique IBM i user profile. Operating in their professional role, employees do not have access to bank data through the Internet.
+Bank customers have access to their accounts and transactions by using a new web application. Each customer has unique credentials for logging in to the application. The authentication of the customer is handled by the web server. After the customer is authenticated, the web server establishes a connection to DB2 for data access. This connection uses a common IBM i user profile that is known as WEBUSER. This user profile is secured and is used only by the web application. No Bank employee has access to the WEBUSER profile, and no customer has an IBM i user profile.
+The customer's identity is passed to DB2 by using a global variable. The global variable is secured and can be accessed only by the WEBUSER. The web application sets the CUSTOMER_LOGIN_ID variable to the customer's login value. This value is compared to the customer's login value that is found in the CUSTOMER_LOGIN_ID column of the CUSTOMERS table.
+Applications that do not use the web interface do not have to be changed because the global variable is NULL by default.
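+For example, after authenticating the customer, the web application might run a statement such as the following sketch; the login value KLD72CQR8JG belongs to the sample customer that is introduced in 4.2:
+SET BANK_SCHEMA.CUSTOMER_LOGIN_ID = 'KLD72CQR8JG' ;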
+A diagram of the internet banking architecture is shown in Figure 4-1:
+GLYPH The row permission and column masks for the CUSTOMERS table are based on the group of which the user profile is part. If the user is a customer, their specific login ID also is tested.
+GLYPH The row permission and column mask for the ACCOUNTS table are based on the CUSTOMERS table permission rules. A subquery is used to connect the accounts (child) with the customer (parent).
+GLYPH The row permission for the TRANSACTIONS table is based on the ACCOUNTS table permission rules and the CUSTOMERS table permission rules. A subquery is used to connect the transactions (child) with the account (parent) and the account (child) with the customer (parent).
+
Figure 4-1 Internet banking example
+
+4.2 Description of the users' roles and responsibilities
+During the requirements gathering phase, the following groups of users are identified and codified:
+GLYPH SECURITY: Security officer and security administrators
+GLYPH DBE: Database engineers
+GLYPH ADMIN: Bank business administrators
+GLYPH TELLER: Bank tellers
+GLYPH CUSTOMER: Bank customers using the internet
+GLYPH PUBLIC: Anyone not already in a group
+Based on their respective roles and responsibilities, the users (that is, a group) are controlled by row permissions and column masks. The chart that is shown in Figure 4-2 shows the rules for row and column access in this example.
+
+For the demonstration and testing of RCAC in this example, the following users interact with the database. Furthermore, the column masking rules are developed independently of the row permissions. If a person does not have permission to access the row, the column mask processing does not occur.
+GLYPH Hernando Bedoya is a DB2 for i database engineer with the user profile of HBEDOYA. He is part of the DBE group.
+GLYPH Mike Cain is a DB2 for i database engineer with the user profile of MCAIN. He is part of the DBE group.
+GLYPH Veronica G. Lucchess is a bank account administrator with the user profile of VGLUCCHESS. She is part of the ADMIN group.
+GLYPH Tom Q. Spenser is a bank teller with the user profile of TQSPENSER. He is part of the TELLER group.
+GLYPH The IT security officer has the user profile of SECURITY. She is not part of any group.
+GLYPH The online banking web application uses the user profile WEBUSER. This profile is part of the CUSTOMER group. Any future customer-facing applications can also use this group if needed.
+GLYPH Adam O. Olsen is a bank customer with a web application login ID of KLD72CQR8JG.
+4.3 Implementation of RCAC
+Figure 4-4 shows the data model of the banking scenario that is used in this example.
+
Figure 4-4 Data model of the banking scenario
+
+This section covers the following steps:
+GLYPH Reviewing the tables that are used in this example
+GLYPH Assigning function ID QIBM_DB_SECADM to the Database Engineers group
+GLYPH Creating group profiles for the users and their roles
+GLYPH Creating the CUSTOMER_LOGIN_ID global variable
+GLYPH Defining and creating row permissions
+GLYPH Defining and creating column masks
+GLYPH Restricting the inserting and updating of masked data
+GLYPH Activating row and column access control
+GLYPH Reviewing row permissions
+GLYPH Demonstrating data access with RCAC
+GLYPH Query implementation with RCAC activated
+4.3.1 Reviewing the tables that are used in this example
+This section reviews the tables that are used in this example. As shown in Figure 4-5, there are three main tables that are involved in the data model: CUSTOMERS, ACCOUNTS, and TRANSACTIONS. There are 90 customers.
+Figure 4-5 Tables that are used in the banking example
+Note: Appendix A, "Database definitions for the RCAC banking example" on page 121 provides a script that you can use to create all the database definitions or DDLs to re-create this RCAC example.
+To review the attributes of each table that is used in this banking example, complete the following steps:
+1. Review the columns of each of the tables through System i Navigator. Expand the database → Schemas → BANK_SCHEMA → Tables .
+2. Right-click the CUSTOMERS table and select Definition . Figure 4-6 shows the attributes for the CUSTOMERS table. The Row access control and Column access control options are not selected, which indicates that the table does not have RCAC implemented.
+
Figure 4-6 CUSTOMERS table attributes
+
+3. Click the Columns tab to see the columns of the CUSTOMERS table, as shown in Figure 4-7.
+Figure 4-7 Column definitions of the CUSTOMERS table
+4. Click the Key Constraints , Foreign Key Constraints , and Check Constraints tabs to review the key, foreign, and check constraints on the CUSTOMERS table, as shown in Figure 4-8. There are no Foreign Key Constraints or Check Constraints on the CUSTOMERS table.
+
Figure 4-8 Reviewing the constraints on the CUSTOMERS table
+
+5. Review the definition of the ACCOUNTS table. The definition of the ACCOUNTS table is shown in Figure 4-9. RCAC has not been defined for this table yet.
+
Figure 4-9 ACCOUNTS table attributes
+
+6. Click the Columns tab to see the columns of the ACCOUNTS table, as shown in Figure 4-10.
+Figure 4-10 Column definitions of the ACCOUNTS table
+7. Click the Key Constraints , Foreign Key Constraints , and Check Constraints tabs to review the key, foreign, and check constraints on the ACCOUNTS table, as shown in Figure 4-11. There is one Foreign Key Constraint and no Check Constraints on the ACCOUNTS table.
+Figure 4-11 Reviewing the constraints on the ACCOUNTS table
+8. Review the definition of the TRANSACTIONS table. The definition of the TRANSACTIONS table is shown in Figure 4-12. RCAC is not defined for this table yet.
+
Figure 4-12 TRANSACTIONS table attributes
+
+9. Click the Columns tab to see the columns of the TRANSACTIONS table, as shown in Figure 4-13.
+Figure 4-13 Column definitions of the TRANSACTIONS table
+10.Click the Key Constraints , Foreign Key Constraints , and Check Constraints tabs to review the key, foreign, and check constraints on the TRANSACTIONS table, as shown in Figure 4-14. There is one Foreign Key Constraint and one Check Constraint on the TRANSACTIONS table.
+Figure 4-14 Reviewing the constraints on the TRANSACTIONS table
+Now that you have reviewed the database model for this example, the following sections describe the steps that are required to implement RCAC in this banking scenario.
+4.3.2 Assigning function ID QIBM_DB_SECADM to the Database Engineers group
+The first step is to assign the appropriate function usage ID to the Database Engineers (DBEs) that will be implementing RCAC. For a description of function usage IDs, see 2.1, "Roles" on page 8. In this example, the DBEs are users MCAIN and HBEDOYA.
+Complete the following steps:
+1. Right-click the database connection and select Application Administration , as shown in Figure 4-15.
+
Figure 4-15 Application administration
+
+2. The Application Administration window opens, as shown in Figure 4-16. Click IBM i → Database and select the function usage ID of Database Security Administrator .
+
Figure 4-16 Application administration for IBM i
+
+3. Click Customize for the function usage ID of Database Security Administrator, as shown in Figure 4-17.
+
Figure 4-17 Customizing the Database Security Administrator function usage ID
+
+4. The Customize Access window opens, as shown in Figure 4-18. Click the users that need to implement RCAC. For this example, HBEDOYA and MCAIN are selected. Click Add and then click OK .
+
Figure 4-18 Customize Access window
+
+5. The Application Administration window opens again. The function usage ID of Database Security Administrator now has an X in the Customized Access column, as shown in Figure 4-19.
+
Figure 4-19 Function usage ID Database Security Administrator customized
+
+6. Run an SQL query that shows which user profiles are enabled to define RCAC. The SQL query is shown in Figure 4-20.
+Figure 4-20 Query to display user profiles with function usage ID for RCAC
+4.3.3 Creating group profiles for the users and their roles
+The next step is to create the different group profiles (ADMIN, CUSTOMER, TELLER, and DBE) and assign the different user profiles to the different group profiles. For a description of the different groups and users for this example, see 4.2, "Description of the users' roles and responsibilities" on page 39.
+Complete the following steps:
+1. On the main navigation pane of System i Navigator, right-click Groups and select New Group , as shown in Figure 4-21.
+
Figure 4-21 Creating group profiles
+
+2. The New Group window opens, as shown in Figure 4-22. For each new group, enter the Group name (ADMIN, CUSTOMER, TELLER, and DBE) and add the user profiles that are associated to this group by selecting the user profile and clicking Add .
+Figure 4-22 shows adding user TQSPENSER to the TELLER group profile.
+
Figure 4-22 Creating group profiles and adding users
+
+3. After you create all the group profiles, you should see them listed in System i Navigator under Users and Groups → Groups , as shown in Figure 4-23.
+
Figure 4-23 Newly created group profiles
+
+4.3.4 Creating the CUSTOMER_LOGIN_ID global variable
+In this step, you create a global variable that is used to capture the Customer_Login_ID information, which is required to validate the permissions. For more information about global variables, see 3.2.2, "Built-in global variables" on page 19.
+Complete the following steps:
+1. From System i Navigator, under the schema Bank_Schema, right-click Global Variable and select New Global Variable , as shown in Figure 4-24.
+
Figure 4-24 Creating a global variable
+
+2. The New Global Variable window opens, as shown in Figure 4-25. Enter the global variable name of CUSTOMER_LOGIN_ID, select the data type of VARCHAR, and leave the default value of NULL. This default value ensures that users that do not use the web interface do not have permission to access the data. Click OK .
+
Figure 4-25 Creating a global variable called CUSTOMER_LOGIN_ID
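+The SQL equivalent of this step might look like the following sketch; the VARCHAR length of 30 is an assumption because the figure does not show it:
+CREATE VARIABLE BANK_SCHEMA.CUSTOMER_LOGIN_ID VARCHAR(30) DEFAULT NULL ;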
+
+3. Now that the global variable is created, assign permissions to the variable so that it can be set by the program. Right-click the CUSTOMER_LOGIN_ID global variable and select Permissions , as shown in Figure 4-26.
+
Figure 4-26 Setting permissions on the CUSTOMER_LOGIN_ID global variable
+
+4. The Permissions window opens, as shown in Figure 4-27. Select Change authority for Webuser so that the application can set this global variable.
+
Figure 4-27 Setting change permissions for Webuser on the CUSTOMER_LOGIN_ID global variable
+
+4.3.5 Defining and creating row permissions
+You are now ready to define the row permissions of the tables. Complete the following steps:
+1. From the navigation pane of System i Navigator, click Schemas → BANK_SCHEMA , right-click Row Permissions , and select New Row Permission , as shown in Figure 4-28.
+
Figure 4-28 Selecting new row permissions
+
+2. The New Row Permission window opens, as shown in Figure 4-29. Enter the information regarding the row permissions on the CUSTOMERS table. This row permission defines what is established in the following policy:
+-User profiles that belong to DBE, ADMIN, and TELLER group profiles can see all the rows.
+-User profiles that belong to the CUSTOMERS group profile (that is, the WEBUSER user) can see only the rows that match their customer login ID. The login ID value representing the online banking user is passed from the web application to the database by using the global variable CUSTOMER_LOGIN_ID. The permission rule uses a subquery to check whether the global variable matches the CUSTOMER_LOGIN_ID column value in the CUSTOMERS table.
+-Any other user profile cannot see any rows at all.
+Select the Enabled option. Click OK .
+
Figure 4-29 New row permissions on the CUSTOMERS table
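+Expressed as SQL, the policy for the CUSTOMERS table might look like the following sketch. The permission name is hypothetical, a direct comparison is shown instead of the subquery that is used in the figure, and the CUSTOMER_LOGIN_ID column name is taken from 4.1:
+CREATE PERMISSION BANK_SCHEMA.PERMISSION_ON_CUSTOMERS
+   ON BANK_SCHEMA.CUSTOMERS AS C
+   FOR ROWS WHERE
+        VERIFY_GROUP_FOR_USER ( SESSION_USER, 'DBE', 'ADMIN', 'TELLER' ) = 1
+     OR ( VERIFY_GROUP_FOR_USER ( SESSION_USER, 'CUSTOMER' ) = 1
+          AND C.CUSTOMER_LOGIN_ID = BANK_SCHEMA.CUSTOMER_LOGIN_ID )
+   ENFORCED FOR ALL ACCESS
+   ENABLE ;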
+
+3. Define the row permissions for the ACCOUNTS table. The New Row Permission window opens, as shown in Figure 4-30. Enter the information regarding the row permissions on the ACCOUNTS table. This row permission defines what is established in the following policy:
+-User profiles that belong to DBE, ADMIN, and TELLER group profiles can see all the rows.
+-User profiles that belong to the CUSTOMERS group profile (that is, the WEBUSER user) can see only the rows that match their customer login ID. The login ID value representing the online banking user is passed from the web application to the database by using the global variable CUSTOMER_LOGIN_ID. The permission rule uses a subquery to check whether the global variable matches the CUSTOMER_LOGIN_ID column value in the CUSTOMERS table.
+-Any other user profile cannot see any rows at all.
+Select the Enabled option. Click OK .
+
Figure 4-30 New row permissions on the ACCOUNTS table
+
+4. Define the row permissions on the TRANSACTIONS table. The New Row Permission window opens, as shown in Figure 4-31. Enter the information regarding the row permissions on the TRANSACTIONS table. This row permission defines what is established in the following policy:
+-User profiles that belong to DBE, ADMIN, and TELLER group profiles can see all of the rows.
+-User profiles that belong to the CUSTOMERS group profile (that is, the WEBUSER user) can see only the rows that match their customer login ID. The login ID value representing the online banking user is passed from the web application to the database by using the global variable CUSTOMER_LOGIN_ID. The permission rule uses a subquery to check whether the global variable matches the CUSTOMER_LOGIN_ID column value in the CUSTOMERS table.
+Note: You must join back to ACCOUNTS and then to CUSTOMERS by using a subquery to check whether the global variable matches CUSTOMER_LOGIN_ID. Also, if the row permission or column mask rule text references another table with RCAC defined, the RCAC for the referenced table is ignored.
+-Any other user profile cannot see any rows at all.
+Select the Enabled option. Click OK .
+
Figure 4-31 New row permissions on the TRANSACTIONS table
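+Expressed as SQL, the TRANSACTIONS policy might look like the following sketch; the join back through ACCOUNTS to CUSTOMERS follows the note above, and the permission name and the ACCOUNT_ID and CUSTOMER_ID column names are assumptions for this sketch:
+CREATE PERMISSION BANK_SCHEMA.PERMISSION_ON_TRANSACTIONS
+   ON BANK_SCHEMA.TRANSACTIONS AS T
+   FOR ROWS WHERE
+        VERIFY_GROUP_FOR_USER ( SESSION_USER, 'DBE', 'ADMIN', 'TELLER' ) = 1
+     OR ( VERIFY_GROUP_FOR_USER ( SESSION_USER, 'CUSTOMER' ) = 1
+          AND T.ACCOUNT_ID IN
+              ( SELECT A.ACCOUNT_ID
+                  FROM BANK_SCHEMA.ACCOUNTS A
+                  JOIN BANK_SCHEMA.CUSTOMERS C
+                    ON A.CUSTOMER_ID = C.CUSTOMER_ID
+                 WHERE C.CUSTOMER_LOGIN_ID = BANK_SCHEMA.CUSTOMER_LOGIN_ID ) )
+   ENFORCED FOR ALL ACCESS
+   ENABLE ;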
+
+5. To verify that the row permissions are enabled, from System i Navigator, click Row Permissions , as shown in Figure 4-32. The three row permissions are created and enabled.
+
Figure 4-32 List of row permissions on BANK_SCHEMA
+
+4.3.6 Defining and creating column masks
+This section defines the masks on the columns. Complete the following steps:
+1. From the main navigation pane of System i Navigator, click Schemas → BANK_SCHEMA , right-click Column Masks , and select New Column Mask , as shown in Figure 4-33.
+
Figure 4-33 Creating a column mask
+
+2. In the New Column Mask window, which is shown in Figure 4-34, enter the following information:
+-Select the CUSTOMERS table on which to create the column mask.
+-Select the Column to mask; in this example, it is CUSTOMER_EMAIL.
+-Define the masking logic depending on the rules that you want to enforce. In this example, either the ADMIN or CUSTOMER group profiles can see the entire email address; otherwise, it is masked to ****@****.
+Select the Enabled option. Click OK .
+
Figure 4-34 Defining a column mask on the CUSTOMERS table
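+Expressed as SQL, the CUSTOMER_EMAIL mask that is defined in this window might look like the following sketch; the mask name is hypothetical:
+CREATE MASK BANK_SCHEMA.MASK_EMAIL_ON_CUSTOMERS
+   ON BANK_SCHEMA.CUSTOMERS AS C
+   FOR COLUMN CUSTOMER_EMAIL
+   RETURN CASE
+            WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER, 'ADMIN', 'CUSTOMER' ) = 1
+              THEN C.CUSTOMER_EMAIL
+            ELSE '****@****'
+          END
+   ENABLE ;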
+
+3. Repeat steps 1 on page 58 and 2 to create the following column masks:
+-MASK_DRIVERS_LICENSE_ON_CUSTOMERS
+-MASK_LOGIN_ID_ON_CUSTOMERS
+-MASK_SECURITY_QUESTION_ANSWER_ON_CUSTOMERS
+-MASK_ACCOUNT_NUMBER_ON_ACCOUNTS
+-MASK_SECURITY_QUESTION_ON_CUSTOMERS
+-MASK_TAX_ID_ON_CUSTOMERS
+4. To verify that the column masks are enabled, from System i Navigator, click Column Masks , as shown in Figure 4-35. The seven column masks are created and enabled.
+
Figure 4-35 List of column masks on BANK_SCHEMA
+
+4.3.7 Restricting the inserting and updating of masked data
+This step defines the check constraints that support the column masks to make sure that on INSERTS or UPDATES, data is not written with a masked value. For more information about the propagation of masked data, see 6.8, "Avoiding propagation of masked data" on page 108.
+Complete the following steps:
+1. Create a check constraint on the column CUSTOMER_EMAIL in the CUSTOMERS table. From the navigation pane of System i Navigator, right-click the CUSTOMERS table and select Definition , as shown in Figure 4-36.
+
Figure 4-36 Definition of the CUSTOMERS table
+
+2. From the CUSTOMERS definition window, click the Check Constraints tab and click Add , as shown in Figure 4-37.
+
Figure 4-37 Adding a check constraint
+
+3. The New Check Constraint window opens, as shown in Figure 4-38. Complete the following steps:
+a. Select the CUSTOMER_EMAIL column.
+b. Enter the check constraint condition. In this example, specify CUSTOMER_EMAIL to be different from ****@****, which is the mask value.
+c. Select the On update violation, preserve column value option and click OK .
+
Figure 4-38 Specifying a new check constraint on the CUSTOMERS table
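+Expressed as SQL, the check constraint that is defined in this window might look like the following sketch; the constraint name is hypothetical, and the violation-handling clause corresponds to the option that is selected in step c:
+ALTER TABLE BANK_SCHEMA.CUSTOMERS
+   ADD CONSTRAINT CHECK_EMAIL_NOT_MASKED
+   CHECK ( CUSTOMER_EMAIL <> '****@****' )
+   ON UPDATE VIOLATION PRESERVE CUSTOMER_EMAIL ;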
+
+4. Figure 4-39 shows that there is now a check constraint on the CUSTOMERS table that prevents any masked data from being updated to the CUSTOMER_EMAIL column.
+
Figure 4-39 Check constraint on the CUSTOMERS table
+
+5. Create all the other check constraints that are associated with each of the masks on the CUSTOMERS table. After this is done, these constraints should look like the ones that are shown in Figure 4-40.
+
Figure 4-40 List of check constraints on the CUSTOMERS table
+
+4.3.8 Activating row and column access control
+You are now ready to activate RCAC on all three tables in this example. Complete the following steps:
+1. Start by enabling RCAC on the CUSTOMERS table. From System i Navigator, right-click the CUSTOMERS table and select Definition . As shown in Figure 4-41, make sure that you select Row access control and Column access control . Click OK .
+
Figure 4-41 Enabling RCAC on the CUSTOMERS table
+
+2. Enable RCAC on the ACCOUNTS table. Right-click the ACCOUNTS table and select Definition . As shown in Figure 4-42, make sure that you select Row access control and Column access control . Click OK .
+
Figure 4-42 Enabling RCAC on ACCOUNTS
+
+3. Enable RCAC on the TRANSACTIONS table. Right-click the TRANSACTIONS table and select Definition . As shown in Figure 4-43, make sure that you select Row access control . Click OK .
+
Figure 4-43 Enabling RCAC on TRANSACTIONS
+
+4.3.9 Reviewing row permissions
+This section displays all the row permissions after enabling RCAC. Complete the following steps:
+1. From System i Navigator, click Row Permissions , as shown in Figure 4-44. Three additional row permissions (QIBM_DEFAULT*) are added, one for each table on which row access control was activated.
+
Figure 4-44 Row permissions after enabling RCAC
+
+2. Look at one of the row permission definitions by right-clicking it and selecting Definition , as shown in Figure 4-45.
+
Figure 4-45 Selecting row permission definition
+
+3. A window opens, as shown in Figure 4-46. Take note of the search condition (0=1) of the QIBM_DEFAULT row permission, which is never true. This permission is ORed with all of the others; if a user does not meet any of the criteria in the other row permissions, only this condition remains, and because it is false, access is denied.
+
Figure 4-46 Search condition of the QIBM_DEFAULT row permission
+
+4.3.10 Demonstrating data access with RCAC
+You are now ready to test the RCAC definitions. Run the following SQL statements with each type of user (DBE, SECURITY, TELLER, ADMIN, and WEBUSER):
+GLYPH A SELECT statement that returns the SESSION_USER.
+GLYPH A SELECT statement that counts the customers from the CUSTOMERS table. There are 90 customers in the CUSTOMERS table.
+GLYPH A simple SELECT statement that returns the following output from the CUSTOMERS table ordered by customer_name:
+-customer_id
+-customer_name
+-customer_email
+-customer_tax_id
+-customer_drivers_license_number
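+Written out, the three test statements might look like the following sketch; the column names are taken from the list above, and the exact statements that are used in the figures may differ slightly:
+SELECT SESSION_USER FROM SYSIBM.SYSDUMMY1 ;
+SELECT COUNT(*) AS ROW_COUNT FROM BANK_SCHEMA.CUSTOMERS ;
+SELECT CUSTOMER_ID, CUSTOMER_NAME, CUSTOMER_EMAIL, CUSTOMER_TAX_ID,
+       CUSTOMER_DRIVERS_LICENSE_NUMBER
+  FROM BANK_SCHEMA.CUSTOMERS
+ ORDER BY CUSTOMER_NAME ;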
+Data access for a DBE user with RCAC
+To test a DBE (MCAIN) user, complete the following steps:
+1. Confirm that the user is the user of the session by running the first SQL statement, as shown in Figure 4-47. In this example, MCAIN is the DBE user.
+Figure 4-47 DBE session user
+2. The number of rows that the DBE user MCAIN can see is shown in Figure 4-48.
+Figure 4-48 Number of rows that DBE user can see in the CUSTOMERS table
+3. The result of the third SQL statement is shown in Figure 4-49. Note the masked columns. User MCAIN can see all the rows in the CUSTOMERS table, but there are some columns where the result is masked.
+Figure 4-49 SQL statement that is run by the DBE user with masked columns
+Data access for SECURITY user with RCAC
+To test a SECURITY user, complete the following steps:
+1. Confirm that the user is the user of the session by running the first SQL statement, as shown in Figure 4-50. In this example, SECURITY is the security officer.
+
Figure 4-50 SECURITY session user
+
+2. The number of rows in the CUSTOMERS table that the security officer can see is shown in Figure 4-51. The security officer cannot see any data at all.
+Figure 4-51 Number of rows that the security officer can see in the CUSTOMERS table
+3. The result of the third SQL statement is shown in Figure 4-52. Note the empty set that is returned to the security officer.
+
Figure 4-52 SQL statement that is run by the SECURITY user - no results
+
+Data access for TELLER user with RCAC
+To test a Teller (TQSPENSER) user, complete the following steps:
+1. Confirm that the TELLER user is the user of the session by running the first SQL statement, as shown in Figure 4-53. In this example, TQSPENSER is a TELLER user.
+Figure 4-53 TELLER session user
+2. The number of rows in the CUSTOMERS table that the TELLER user can see is shown in Figure 4-54. The TELLER user can see all the rows.
+
Figure 4-54 Number of rows that the TELLER user can see in the CUSTOMERS table
+
+3. The result of the third SQL statement is shown in Figure 4-55. Note the masked columns. The TELLER user, TQSPENSER, can see all the rows, but there are some columns where the result is masked.
+Figure 4-55 SQL statement that is run by the TELLER user with masked columns
+Data access for ADMIN user with RCAC
+To test an ADMIN (VGLUCCHESS) user, complete the following steps:
+1. Confirm that the ADMIN user is the user of the session by running the first SQL statement, as shown in Figure 4-56. In this example, VGLUCCHESS is an ADMIN user.
+
Figure 4-56 ADMIN session user
+
+2. The number of rows that the ADMIN user can see is shown in Figure 4-57. The ADMIN user can see all the rows.
+Figure 4-57 Number of rows that the ADMIN can see in the CUSTOMERS table
+3. The result of the third SQL statement is shown in Figure 4-58. There are no masked columns.
+Figure 4-58 SQL statement that is run by the ADMIN user - no masked columns
+Data access for WEBUSER user with RCAC
+To test a CUSTOMER (WEBUSER) user that accesses the database by using the web application, complete the following steps:
+1. Confirm that the user is the user of the session by running the first SQL statement, as shown in Figure 4-59. In this example, WEBUSER is a CUSTOMER user.
+
Figure 4-59 WEBUSER session user
+
+2. A global variable (CUSTOMER_LOGIN_ID) is set by the web application and then is used to check the row permissions. Figure 4-60 shows setting the global variable by using the customer login ID.
+
Figure 4-60 Setting the global variable CUSTOMER_LOGIN_ID
+
+3. Verify that the global variable was set with the correct value by clicking the Global Variable tab, as shown in Figure 4-61.
+
Figure 4-61 Viewing the global variable value
+
+4. The number of rows that the WEBUSER can see is shown in Figure 4-62. This user can see only the one row that belongs to his web-based user ID.
+
Figure 4-62 Number of rows that the WEBUSER can see in the CUSTOMERS table
+
+5. The result of the third SQL statement is shown in Figure 4-63. There are no masked columns, and the user can see only one row, which is the user's own row.
+Figure 4-63 SQL statement that is run by WEBUSER - no masked columns
+Other examples of data access with RCAC
+To run an SQL statement that lists all the accounts and current balance by customer, complete the following steps:
+1. Run the SQL statement that is shown in Figure 4-64 using the WEBUSER user profile. The SQL statement has no WHERE clause, but the WEBUSER can see only his accounts. (A sketch of this statement appears after this list.)
+Figure 4-64 List of accounts and current balance by customer using the WEBUSER user profile
+2. Figure 4-65 shows running a more complex SQL statement that calculates transaction total by account for year and quarter. Run this statement using the WEBUSER profile. The SQL statement has no WHERE clause, but the WEBUSER user can see only his transactions.
+Figure 4-65 Calculate transaction total by account for year and quarter using the WEBUSER profile
+3. Run the same SQL statement that lists the accounts and current balance by customer, but use a TELLER user profile. The result of this SQL statement is shown in Figure 4-66. The TELLER user can see all the rows in the CUSTOMERS table.
+Figure 4-66 List of accounts and current balance by customer using a TELLER user profile
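+As a sketch, the accounts-and-balance statement that is used in Figures 4-64 and 4-66 might look like the following; the ACCOUNT_NUMBER, ACCOUNT_BALANCE, and CUSTOMER_ID column names are assumptions, and note that there is no WHERE clause:
+SELECT C.CUSTOMER_NAME, A.ACCOUNT_NUMBER, A.ACCOUNT_BALANCE
+  FROM BANK_SCHEMA.CUSTOMERS C
+  JOIN BANK_SCHEMA.ACCOUNTS A
+    ON A.CUSTOMER_ID = C.CUSTOMER_ID
+ ORDER BY C.CUSTOMER_NAME ;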
+4.3.11 Query implementation with RCAC activated
+This section looks at some other interesting information that is related to RCAC by comparing the access plans of the same SQL statement without RCAC and with RCAC. This example uses Visual Explain and runs an SQL statement that lists the accounts and current balance by customer.
+Complete the following steps:
+1. Figure 4-67 shows the SQL statement in Visual Explain run with no RCAC. The implementation of the SQL statement is a two-way join, which is exactly what the SQL statement is doing.
+
Figure 4-67 Visual Explain with no RCAC enabled
+
+2. Figure 4-68 shows the Visual Explain of the same SQL statement, but with RCAC enabled. It is clear that the implementation of the SQL statement is more complex because the row permission rule becomes part of the WHERE clause.
+
Figure 4-68 Visual Explain with RCAC enabled
+
+3. Compare the advised indexes that are provided by the Optimizer without RCAC and with RCAC enabled. Figure 4-69 shows the index advice for the SQL statement without RCAC enabled. The index being advised is for the ORDER BY clause.
+
Figure 4-69 Index advice with no RCAC
+
+4. Now, look at the advised indexes with RCAC enabled. As shown in Figure 4-70, there is an additional index being advised, which is basically for the row permission rule. For more information, see 6.4.2, "Index advisor" on page 99.
+
Figure 4-70 Index advice with RCAC enabled
+
+
+Chapter 5. RCAC and non-SQL interfaces
+A benefit of Row and Column Access Control (RCAC) is that its security controls are enforced across all the interfaces that access DB2 for i because the security rules are defined and enforced at the database level. The examples that are shown in this paper focus on SQL-based access, but row permissions and column masks also are enforced for non-SQL interfaces, such as native record-level access in RPG and COBOL programs and CL commands, such as Display Physical File Member ( DSPPFM ) and Copy File ( CPYF ).
+This consistent enforcement across all interfaces is a good thing, but there are some nuances and restrictions as a result of applying an SQL-based technology such as RCAC to non-SQL interfaces. These considerations are described in this chapter.
+The following topics are covered in this chapter:
+GLYPH Unsupported interfaces
+GLYPH Native query result differences
+GLYPH Accidental updates with masked values
+GLYPH System CL commands considerations
+5.1 Unsupported interfaces
+It is not possible to create a row permission or column mask on a distributed table or a program-described file.
+After a row permission or column mask is added to a table, there are some data access requests that no longer work. An attempt to open or query a table with activated RCAC controls involving any of the following scenarios is rejected with the CPD43A4 error message:
+GLYPH A logical file with multiple formats if the open attempt requests more than one format.
+GLYPH A table or query that specifies an ICU 2.6.1 sort sequence.
+GLYPH A table with read triggers.
+This unsupported interface error occurs when a table with RCAC controls is accessed, not when the RCAC control is created and activated.
+For example, assume that there is a physical file, PF1, which is referenced by a single format logical file (LFS) and a multi-format logical file (LFM). A row permission is successfully created and activated for PF1. Any application that accesses PF1 directly or LFS continues to work without any issues. However, any application that opens LFM with multiple formats receives an error on the open attempt after the row permission is activated for PF1.
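+A minimal sketch of this scenario follows. It assumes a hypothetical library APPLIB, an illustrative group profile HR, and an arbitrary permission rule; the point is only that the failure surfaces when LFM is opened, not when the permission is created:
+/* Hypothetical sketch: add a row permission to the physical file PF1 */
+CREATE PERMISSION APPLIB.PF1_ROW_ACCESS ON APPLIB.PF1
+FOR ROWS WHERE QSYS2.VERIFY_GROUP_FOR_USER ( SESSION_USER , 'HR' ) = 1
+ENFORCED FOR ALL ACCESS ENABLE ;
+ALTER TABLE APPLIB.PF1 ACTIVATE ROW ACCESS CONTROL ;
+-- Opening PF1 directly or through the single format logical file LFS still works.
+-- Opening the multi-format logical file LFM with more than one format now fails with CPD43A4.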
+Important: This potential runtime error places a heavy emphasis on a comprehensive testing plan to ensure that all programs are tested. If testing uncovers an unsupported interface, then you must investigate whether the application can be rewritten to use a data access interface that is supported by RCAC.
+5.2 Native query result differences
+The SQL Query Engine (SQE) is the only engine that is enhanced by IBM to enforce RCAC controls on query requests. In order for native query requests to work with RCAC, these native query requests are now processed by SQE instead of the Classic Query Engine (CQE). Native query requests can consist of the following items:
+GLYPH Query/400
+GLYPH QQQQRY API
+GLYPH Open Query File ( OPNQRYF ) command
+GLYPH Run Query ( RUNQRY ) command
+GLYPH Native open (RPG, COBOL, OPNDBF, and so on) of an SQL view
+Legacy queries that have been running without any issues for many years and over many IBM i releases are now processed by a different query engine. As a result, the runtime behavior and results that are returned can be different for native query requests with RCAC enabled. The OPNQRYF command and Query/400 run with SQE by default.
+The following list documents some of the query output differences that can occur when native query requests are processed by SQE:
+GLYPH Different ordering in the result set
+GLYPH Different values for null columns or columns with errors
+GLYPH Suppression of some mapping error messages
+GLYPH Loss of RRN positioning capabilities
+GLYPH Duplicate key processing behavior differences
+GLYPH Missing key feedback
+For a list of the differences and additional details, see the IBM i Memo to Users Version 7.2 , found at:
+http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_72/rzahg/rzahgmtu.htm
+In addition, the performance of a native query with SQE can be different. It is possible that a new index or keyed logical file might need to be created to improve the performance.
+Important: Based on the potential impacts of query result set and performance differences, you should perform extensive functional testing and performance benchmarking of applications and reports that use native query interfaces.
+5.3 Accidental updates with masked values
+The masked values that are returned by a column mask can potentially cause the original data value to be accidentally overwritten, especially with applications using native record-level access.
+For example, consider a table containing three columns of first name, last name, and tax ID that is read by an RPG program. The user running the program is not authorized to see the tax ID value, so a masked value (*****3333) is written into the program's record buffer, as shown in Figure 5-1.
+In this example, the application reads the data for an update to correct the misspelling of the last name. The last name value is changed to Smith in the buffer. Now, a WRITE request is issued by the program, which uses the contents of the record buffer to update the row in the underlying DB2 table. Unfortunately, the record buffer still contains a masked value for the tax ID, so the tax ID value in the table is accidentally set to the masked value.
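+Expressed in SQL, the accidental overwrite is roughly equivalent to the following sketch. The table name EMP_FILE, the column names, and the selection predicate are hypothetical; only the masked value *****3333 comes from the scenario:
+-- Hypothetical SQL equivalent of the RPG read/update: the record buffer
+-- still holds the masked tax ID, so the write stores it in the table.
+UPDATE EMP_FILE SET LAST_NAME = 'Smith', TAX_ID = '*****3333' -- masked value overwrites the real tax ID
+WHERE LAST_NAME = 'Smyth' ; -- hypothetical key for the row being corrected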
+
+Figure 5-1 Accidental update with masked values scenario
+
+Obviously, careful planning and testing should be exercised to avoid accidental updates with masked values.
+DB2 for i also enhanced its check constraint support in the IBM i 7.2 release with a new ON UPDATE clause that allows the existing value to be preserved when a masked value is detected by a check constraint. Details about how to employ this new check constraint support can be found in 6.8.1, "Check constraint solution" on page 108.
+5.4 System CL commands considerations
+As stated earlier, RCAC controls are enforced on all data access interfaces. This enforcement is not limited to programmatic interfaces; it also includes system CL commands that read and insert data, such as the Create Duplicate Object ( CRTDUPOBJ ) and Start DFU ( STRDFU ) CL commands. This section documents the behavior of the Create Duplicate Object ( CRTDUPOBJ ), Copy File ( CPYF ), and Copy Library ( CPYLIB ) CL commands with RCAC.
+5.4.1 Create Duplicate Object (CRTDUPOBJ) command
+The CRTDUPOBJ command is enhanced with a new Access Control ( ACCCTL ) parameter in the IBM i 7.2 release to copy RCAC controls to the new object being created. Row permissions and column masks are copied to the new object by default because the default value for the ACCCTL parameter is *ALL .
+If the invoker of the CRTDUPOBJ command asks for data to be copied with a value of *YES for the DATA parameter, the value of the ACCCTL parameter must be *ALL . If not, the command invocation receives an error.
+When data is copied to the duplicated object with the DATA parameter, all rows and unmasked column values are copied into the new object, even if the command invoker is not authorized to view all rows or certain column values. This behavior occurs because the RCAC controls also are copied to the new object. The copied RCAC controls enforce that only authorized users are allowed to view row and column values in the newly duplicated object.
+5.4.2 Copy File (CPYF) command
+The CPYF command copies only data, so there is no new parameter to copy RCAC controls to the target table. Therefore, if CPYF is used to create a target table, there are no RCAC controls placed on the target table.
+When RCAC controls are in place on the source table, the CPYF command is limited to reading rows and column values that are based on the invoker of the CPYF command. If a user is authorized to see all rows and column values, then all rows and unmasked column values are copied to the target table (assuming no RCAC controls are on the target table). If a user without full access runs the CPYF command, the CPYF command can copy only a subset of the rows into the target table. In addition, if that user can view only masked column values, then masked values are copied into the target table. This also applies to the Copy to Import File ( CPYTOIMPF ) command.
+If the target table has RCAC controls defined and activated, then the CPYF command is allowed only to add or replace rows in the target table based on the RCAC controls. If CPYF tries to add a row to the target table that the command invoker is not allowed to view according to the target RCAC controls, then an error is received.
+5.4.3 Copy Library (CPYLIB) command
+The CPYLIB command is enhanced with the same Access Control ( ACCCTL ) parameter as the CRTDUPOBJ command in the IBM i 7.2 release (see 5.4.1, "Create Duplicate Object (CRTDUPOBJ) command" on page 82). Row permissions and column masks are copied to the new object in the new library by default because the default value for the ACCCTL parameter is *ALL .
+
+Chapter 6.
+Additional considerations
+This chapter covers additional considerations that must be taken into account when implementing Row and Column Access Control (RCAC), including the following functions:
+GLYPH Timing of column masking
+GLYPH Data movement
+GLYPH Joins
+GLYPH Views
+GLYPH Materialized query tables
+GLYPH Index advisor
+GLYPH Monitoring, analysis, and debugging
+GLYPH Performance and scalability
+The following topics are covered in this chapter:
+GLYPH Timing of column masking
+GLYPH RCAC effects on data movement
+GLYPH RCAC effects on joins
+GLYPH Monitoring, analyzing, and debugging with RCAC
+GLYPH Views, materialized query tables, and query rewrite with RCAC
+GLYPH RCAC effects on performance and scalability
+GLYPH Exclusive lock to implement RCAC (availability issues)
+GLYPH Avoiding propagation of masked data
+GLYPH Triggers and functions (SECURED)
+GLYPH RCAC is only one part of the solution
+6.1 Timing of column masking
+An important design and implementation consideration is the fact that RCAC column masking occurs after all of the query processing is complete, which means that the query results are not at all based on the masked values. Any local selection, joining, grouping, or ordering operations are based on the unmasked column values. Only the final result set is the target of the masking.
+An example of this situation is shown in Figure 6-1. Note that aggregate functions (a form of grouping) also are computed from the unmasked column values.
+Figure 6-1 Timing of column masking: SELECT CREDIT_CARD_NUMBER, SUM(AMOUNT) AS TOTAL_TRANSACTIONS FROM ... GROUP BY CREDIT_CARD_NUMBER ORDER BY CREDIT_CARD_NUMBER, shown without RCAC masking and with RCAC masking
+Conversely, field procedure masking causes the column values to be changed (that is, masked) and stored in the row. When the table is queried and the masked columns are referenced, the masked data is used for any local selection, joining, grouping, or ordering operations. This situation can have a profound effect on the query's final result set and not just on the column values that are returned. Field procedure masking occurs when the column values are read from disk before any query processing. RCAC masking occurs when the column values are returned to the application after query processing. This difference in behavior is shown in Figure 6-2.
+Note: Column masks can influence an SQL INSERT or UPDATE . For example, you cannot insert or update a table with column access control activated with masked data generated from an expression within the same statement that is based on a column with a column mask.
+
+Figure 6-2 Masking differences between Fieldproc and RCAC
+
+6.2 RCAC effects on data movement
+As described earlier and shown in Figure 6-3, RCAC is applied pervasively regardless of the data access programming interface, SQL statement, or IBM i command. The effects of RCAC on data movement scenarios can be profound and possibly problematic. It is important to understand these effects and make the appropriate adjustments to avoid incorrect results or data loss.
+
+Figure 6-3 RCAC and data movement
+
+The "user" that is running the data movement application or process, whether it be a high availability (HA) scenario, an extract, transform, load (ETL) scenario, or just copying data from one file or table to another one, must have permission to all the source rows without masking, and not be restricted from putting rows into the target. Allowing the data movement application or process to bypass the RCAC rules must be based on a clear and concise understanding of the organization's object security and data access policy. Proper design, implementation, and testing are critical success factors when applying RCAC.
+Important: RCAC is applied to the table or physical file access. It is not applied to the journal receiver access. Any and all database transactions are represented in the journal regardless of RCAC row permissions and column masks. This makes it essential that IBM i security is used to ensure that only authorized personnel have access to the journaled data.
+This section covers in detail the following three examples:
+GLYPH Effects when RCAC is defined on the source table
+GLYPH Effects when RCAC is defined on the target table
+GLYPH Effects when RCAC is defined on both source and target tables
+6.2.1 Effects when RCAC is defined on the source table
+Example 6-1 shows a simple example that illustrates the effect of RCAC as defined on the source table.
+Example 6-1 INSERT INTO TARGET statement
+INSERT INTO TARGET (SELECT * FROM SOURCE);
+For example, given a "source" table with a row permission defined as NAME <> 'CAIN' and a column mask that is defined to project the value 999.99 for AMOUNT, the SELECT statement produces a result set that has the RCAC rules applied. This reduced and modified result set is inserted into the "target" table even though the query is defined as returning all rows and all columns. Instead of seven rows that are selected from the source, only three rows are returned and placed into the target, as shown in Figure 6-4.
+
+Figure 6-4 RCAC effects on data movement from SOURCE
+
+6.2.2 Effects when RCAC is defined on the target table
+Example 6-2 shows a simple example that illustrates the effect of RCAC as defined on the target table.
+Example 6-2 INSERT INTO TARGET statement
+INSERT INTO TARGET (SELECT * FROM SOURCE);
+Given a "target" table with a row permission defined as NAME <> 'CAIN' and a column mask that is defined to project the value 999.99 for AMOUNT, the SELECT statement produces a result set that represents all the rows and columns. The seven row result set is inserted into the "target", and the RCAC row permission causes an error to be returned, as shown in Figure 6-5. The source rows where NAME = 'CAIN' do not satisfy the target table's permission, and therefore cannot be inserted. In other words, you are inserting data that you cannot read.
+
+Figure 6-5 RCAC effects on data movement on TARGET
+
+6.2.3 Effects when RCAC is defined on both source and target tables
+Example 6-3 shows a simple example that illustrates the effect of RCAC as defined on both the source and the target tables.
+Example 6-3 INSERT INTO TARGET statement
+INSERT INTO TARGET (SELECT * FROM SOURCE);
+Given a "source" table and a "target" table with a row permission defined as NAME <> 'CAIN' and a column mask that is defined to project the value 999.99 for AMOUNT, the SELECT statement produces a result set that has the RCAC rules applied. This reduced and modified result set is inserted into the "target" table even though the query is defined as returning all rows and all columns. Instead of seven rows that are selected from the source, only three rows are returned.
+Although the source rows where NAME <> 'CAIN' do satisfy the target table's permission, the AMOUNT column value of 999.99 represents masked data and therefore cannot be inserted. An error is returned indicating the failure, as shown in Figure 6-6. In this scenario, DB2 is protecting against an overt attempt to insert masked data.
+
+Figure 6-6 RCAC effects on data movement on SOURCE and TARGET
+
+6.3 RCAC effects on joins
+As mentioned previously, a fundamental concept of row permission is that it defines a logical subset of rows that a user or group of users is permitted to access and use. This subset becomes the new basis of any query against the table that has RCAC enabled.
+Note: Thinking of the row permission as defining a virtual set of rows that can be operated on is the secret to understanding the effect of RCAC on any join operation.
+As shown in Figure 6-7, there are two different sets, set A and set B. However, set B has a row permission that subsets the rows that a user can see.
+
+Figure 6-7 Set A and set B with row permissions
+
+6.3.1 Inner joins
+Inner join defines the intersection of two data sets. For a row to be returned from the inner join query, it must appear in both sets, as shown in Figure 6-8.
+
+Figure 6-8 Inner join without RCAC permission
+
+Given that row permission serves to eliminate logically rows from one or more sets, the result set from an inner join (and a subquery) can be different when RCAC is applied. RCAC can reduce the number of rows that are permitted to be accessed by the join, as shown in Figure 6-9.
+Effect of column masks on inner joins: Because column masks are applied after the query final results are determined, the masked value has no effect on the join processing and corresponding query result set.
+
+Figure 6-9 Inner join with RCAC permission
+
+6.3.2 Outer joins
+Outer joins preserve one or both sides of two data sets. A row can be returned from the outer join query if it appears in the primary set (LEFT, RIGHT, or both in the case of FULL), as shown in Figure 6-10. Column values from the secondary set are returned if the row has a match in the primary set. Otherwise, NULL is returned for the column value by default.
+
+Figure 6-10 Outer join without RCAC permission
+
+Given that row permission serves to eliminate logically rows from one or more sets, more column values that are returned from the secondary table in outer join can be NULL when RCAC is applied, as shown in Figure 6-11.
+Effect of column masks on outer joins: Because column masks are applied after the query final results are determined, the masked value has no effect on the join processing and corresponding query result set.
+
+Figure 6-11 Outer join with RCAC permission
+
+6.3.3 Exception joins
+Exception joins preserve one side of two data sets. A row can be returned from the exception join query if it appears in the primary set (LEFT or RIGHT) and the row does not appear in the secondary set, as shown in Figure 6-12. Column values from the secondary set are returned as NULL by default.
+
+Figure 6-12 Exception join without RCAC permission
+
+Given that row permission serves to eliminate logically rows from one or more sets, more rows can appear to be exceptions when RCAC is applied, as shown in Figure 6-13. Also, because column masks are applied after the query final results are determined, the masked value has no effect on the join processing and corresponding query result set.
+
+Figure 6-13 Exception join with RCAC permission
+
+6.4 Monitoring, analyzing, and debugging with RCAC
+It is assumed (and it is a critical success factor) that the database engineer or application developer has a thorough understanding of the DB2 for i Query Optimizer, Database Engine, and all the associated tools and techniques.
+The monitoring, analyzing, and debugging process basically stays the same when RCAC row permissions or column masks are in place, with a few important differences:
+GLYPH The underlying data access plan can be different and more complex based on the rule text.
+GLYPH The database results can be reduced or modified based on the rule text and user profile.
+GLYPH The run time of the request can be affected either positively or negatively based on the rule text.
+GLYPH For high-level language record level access, query plans must be considered, and not just program code.
+During analyzing and debugging, it is important to account for all of the RCAC definitions for each table or file to understand the logic and corresponding work that is associated with processing the row permissions and column masks. It is also important to realize that, depending on the user profile in effect at run time, the database actions and query results can be different.
+RCAC is designed and implemented to be transparent to the user. It is possible for user "Mike" and user "Hernando" to run the exact same query, against the exact same data on the exact same system, and get different result sets. There is no error, no warning, and no indication that RCAC reduced or modified the respective answers that are returned. Furthermore, it is also likely that user "Mike" and user "Hernando" have different query run times even though it appears that everything is the same for both users. The actual query plan contains the RCAC logic, and this additional code path can alter the amount of work that is needed to produce results, based on the user running the query.
+When monitoring, analyzing, and debugging a database process when RCAC is enabled, it is critical to keep as many of the "variables" the same as possible. Use a good scientific process. For example, when re-creating a problem situation, running under the same user profile, with the same data, and under the same conditions is almost mandatory. Otherwise, the database behavior and query results can be different.
+Successfully performing monitoring, analysis, and debugging when RCAC is enabled likely involves changes in the security and data access policies of the organization, and requires new responsibilities, authority, and oversight within the data-centric application development community. As such, establishing and staffing the position of "database engineer" becomes even more important.
+6.4.1 Query monitoring and analysis tools
+When monitoring and collecting metrics on database requests, DB2 for i provides additional information that indicates row permissions or column masks are being applied. This information is integrated and part of the standard tools, such as Visual Explain, SQL Plan Cache Snapshot, and SQL Performance Monitor.
+
+Figure 6-14 shows how Visual Explain externalizes RCAC.
+
+Figure 6-14 Visual Explain indicating that RCAC is applied
+
+
+Figure 6-15 shows the main dashboard of an SQL Performance Monitor. Click Summary .
+
+Figure 6-15 SQL Performance Monitor
+
+
+Figure 6-16 shows the summary of an SQL Performance Monitor with an indication that RCAC is applied.
+
+Figure 6-16 SQL Performance Monitor indicating that RCAC is applied
+
+
+Figure 6-17 shows the statements of an SQL Performance Monitor and how RCAC is externalized.
+
+Figure 6-17 SQL Performance Monitor showing statements and RCAC
+
+When implementing RCAC as part of a comprehensive and pervasive data access control initiative, consider that the database monitoring and analysis tools can collect literal values that are passed as part of SQL statements. These literal values can be viewed as part of the information collected. If any of the literals are based on or are used with masked columns, it is important to review the database engineer's policy for viewing these data elements. For example, suppose that column CUSTOMER_TAX_ID is deemed masked for the database engineer and the CUSTOMER_TAX_ID column is used in a predicate as follows:
+WHERE CUSTOMER_TAX_ID = '123-45-7890'
+The literal value of '123-45-7890' is visible to the analyst, effectively exposing sensitive information. If this is not acceptable, you must implement the SYSPROC.SET_COLUMN_ATTRIBUTE procedure.
+The SET_COLUMN_ATTRIBUTE procedure sets the SECURE attribute for a column so that variable values that are used for the column cannot be seen in the SQL Performance Monitor, SQL Plan Cache Snapshot, or Visual Explain.
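+For example, the CUSTOMER_TAX_ID column of the banking example's CUSTOMERS table could be secured as shown in the following sketch. The parameter order and the 'SECURE YES' attribute value are assumptions that should be verified against the SET_COLUMN_ATTRIBUTE documentation for your release:
+CALL SYSPROC.SET_COLUMN_ATTRIBUTE ( 'BANK_SCHEMA', 'CUSTOMERS', 'CUSTOMER_TAX_ID', 'SECURE YES' ) ;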
+6.4.2 Index advisor
+Because the RCAC rule text can be almost any valid SQL logic, including local selection predicates, join conditions, and subqueries, the standard query tuning techniques still apply. Without a doubt, a proper and adequate indexing strategy is a good starting point.
+The index advisor is not specifically enhanced for RCAC, but because the rule text is a fully integrated part of the query plan, any opportunities for indexing are advised based on the current Query Optimizer functionality. If an index is advised because of the RCAC rule text logic, there is no RCAC reason code provided. Analyzing the query plan and the RCAC rule text provides the understanding as to why the index is being advised.
+For example, the query that is shown in Figure 6-18 produces index advice for the user's predicate and the RCAC predicate.
+
+Figure 6-18 Index advice and RCAC
+
+In Figure 6-19, index advisor is showing an index for the ACCOUNTS and CUSTOMERS tables based on the RCAC rule text.
+
+Figure 6-19 Index advisor based on the RCAC rule
+
+For more information about creating and using indexes, see IBM DB2 for i indexing methods and strategies , found at:
+http://www.ibm.com/partnerworld/wps/servlet/ContentHandler/stg_ast_sys_wp_db2_i_indexing_methods_strategies
+6.4.3 Metadata using catalogs
+To discover and identify RCAC row permissions and column masks programmatically, query the QSYS2.SYSCONTROLS catalog view or the QSYS2.SYSCONTROLSDEP catalog view directly (a sample query is shown at the end of this section). Otherwise, the System i Navigator Database graphical interface can be used interactively.
+Figure 6-20 shows the QSYS2.SYSCONTROLS catalog view.
+Figure 6-20 RCAC and catalogs
+The SYSCONTROLS catalog view contains the following columns:
+GLYPH COLUMN_NAME
+GLYPH CONTROL_TYPE
+GLYPH CREATE_TIME
+GLYPH ENABLE
+GLYPH ENFORCED
+GLYPH ASP_NUMBER
+GLYPH IMPLICIT
+GLYPH LABEL
+GLYPH LAST_ALTERED
+GLYPH LONG_COMMENT
+GLYPH RCAC_NAME
+GLYPH RCAC_OWNER
+GLYPH RCAC_SCHEMA
+GLYPH RULETEXT
+GLYPH SYSTEM_COLUMN_NAME
+GLYPH SYSTEM_TABLE_NAME
+GLYPH SYSTEM_TABLE_SCHEMA
+GLYPH TABLE_NAME
+GLYPH TABLE_SCHEMA
+GLYPH TBCORRELATION
+The SYSCONTROLSDEP catalog view contains the following columns:
+GLYPH COLUMN_NAME
+GLYPH CONTROL_TYPE
+GLYPH IASP_NUMBER
+GLYPH OBJECT_NAME
+GLYPH OBJECT_SCHEMA
+GLYPH OBJECT_TYPE
+GLYPH PARM_SIGNATURE
+GLYPH RCAC_NAME
+GLYPH RCAC_SCHEMA
+GLYPH SYSTEM_TABLE_NAME
+GLYPH SYSTEM_TABLE_SCHEMA
+For more information, see the IBM i 7.2 DB2 for i SQL Reference Guide , found at:
+http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_72/db2/rbafzintro.htm?lang=en
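+As an example, the following sketch uses a subset of the SYSCONTROLS columns that are listed above to show the RCAC definitions for the CUSTOMERS table of the banking example (the literal schema and table names assume the long names from that example):
+SELECT RCAC_SCHEMA, RCAC_NAME, CONTROL_TYPE, ENABLE, RULETEXT
+FROM QSYS2.SYSCONTROLS
+WHERE TABLE_SCHEMA = 'BANK_SCHEMA' AND TABLE_NAME = 'CUSTOMERS' ;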
+6.5 Views, materialized query tables, and query rewrite with RCAC
+This section covers the implications to views, materialized query tables (MQTs), and query rewrite when RCAC is activated on a table.
+6.5.1 Views
+Any access to an SQL view that is over one or more tables that have RCAC also has those row permissions and column masking rules applied. If an SQL view has predicates, those are logically ANDed with any search condition that is specified in the permissions that are defined on the underlying tables. The view does not have to project the columns that are referenced by the permissions. Figure 6-21 shows an example of a view definition and user query.
+
+Figure 6-21 View definition and user query
+
+What the query optimizer plans for and what the database engine runs is shown in Figure 6-22.
+
+Figure 6-22 Query rewrite with RCAC
+
+6.5.2 Materialized query tables
+When the query to populate a materialized query table (MQT) is run by the system on either the create table or a refresh table, and one or more source tables have RCAC defined, the row permissions and column masks are ignored. This means that the MQT has all of the data.
+Because the MQT is a copy of the base table data, when a permission is created on the base table, all the related MQTs are altered to have a default row permission. This default permission prevents any of the rows from being directly queried.
+When a query implicitly uses an MQT, the underlying row permissions and column masks are built into the query that uses the MQT. In order for the MQT to be used for optimization, the MQT must include any columns that are used by the row permissions and column masks.
+The following example illustrates this scenario:
+1. Create schema and tables:
+CREATE SCHEMA Schema1;
+CREATE TABLE Schema1.employee(userID varchar(128), LocationID integer, Regionid integer);
+CREATE TABLE Schema1.Sales (INVOICE INTEGER NOT NULL, SALEAMT DECIMAL(5,2), TAXAMT DECIMAL(5,2), LOCATIONID INTEGER, REGIONID INTEGER);
+2. Create a row permission that allows the employees to see only rows from the region they work in:
+/* Create permission that only allows the employees to see rows from the region they work in */ CREATE PERMISSION Schema1.Sales_PERM1 ON schema1.sales FOR ROWS WHERE CURRENT_USER in (SELECT userId FROM schema1.employee E WHERE e.regionid = regionid) ENFORCED FOR ALL ACCESS ENABLE;
+3. Create an MQT to summarize sales by location:
+-- Create MQT to summarize sales by location -- This has all of the data. The schema1.sales_perm1 predicate was not applied CREATE TABLE Schema1.Location_Sales_MQT AS (SELECT LocationID, SUM(Saleamt) as Total_Location_Sales FROM SCHEMA1.SALES GROUP BY LOCATIONID) DATA INITIALLY DEFERRED REFRESH DEFERRED
+MAINTAINED BY USER;
+4. Populate the MQT (permission is not applied):
+/* Populate the MQT - Permission not applied here */ REFRESH TABLE Schema1.Location_Sales_MQT
+The following query matches Location_Sales_MQT, but the MQT cannot be used because it does not include the column regionid, which is needed by the schema1.sales_PERM1 permission:
+SELECT Locationid, sum(SALEAMT) FROM schema1.sales GROUP BY locationid;
+5. Create an MQT to summarize by region and location:
+-- MQT to summarize by region and location Create table schema1.Region_Location_Sales_MQT AS (SELECT REGIONID, LocationID, SUM(Saleamt) as Total_Location_Sales FROM SCHEMA1.SALES GROUP BY REGIONID, LOCATIONID) DATA INITIALLY DEFERRED REFRESH DEFERRED MAINTAINED BY USER;
+6. Populate the Region_location_Sales_MQT (permission not applied):
+/* Populate the Region_location_Sales_MQT - Permission not applied here */ Refresh table schema1.Region_Location_Sales_MQT
+The following query can use the Region_location_SALES_MQT because it has REGIONID, which is required for the schema1.sales_PERM1 permission:
+SELECT Locationid, sum(SALEAMT) FROM schema1.sales GROUP BY locationid;
+This example has the following additional implications:
+GLYPH Users must be prevented from explicitly querying the MQT or a view that is created over it. Those two cases bypass the row permission and column mask rules from the underlying tables.
+GLYPH If the user writes code to update incrementally an MQT, that code must be run by a user that has permission to view all of the rows and all columns in their unmasked state. Otherwise, the MQT contents are not complete and queries that implicitly use the MQT might get wrong results.
+GLYPH To prevent this, a check constraint can be created to cause an error if masked data is inserted into the MQT.
+6.5.3 Query rewrite
+Query rewrite is a technique that the optimizer can use to change the original request to improve performance.
+For example, a query that references Table1 might be rewritten to access an MQT over Table1, or it might also be optimized to access only the fields in an index that is defined over Table1 and avoid touching Table1. With RCAC, these rewrites can still occur, but the MQT or index also must include all columns that are needed by the row permissions or column masks that are defined on Table1.
+As part of adding RCAC, the impact to these potentially significant performance optimizations must be considered. Usage of MQTs or index-only access might be reduced or eliminated by enabling RCAC.
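+Continuing the Schema1.Sales example from 6.5.2, the following sketch shows an index that could still permit index-only access for the location summary query after RCAC is enabled. The index name and column order are illustrative; REGIONID is included only because the schema1.sales_PERM1 rule text references it:
+CREATE INDEX Schema1.Sales_Location_IX
+ON Schema1.Sales ( LOCATIONID, SALEAMT, REGIONID ) ;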
+6.6 RCAC effects on performance and scalability
+As with any discussion that is related to performance and scalability, nothing is certain or guaranteed. There are always many variables that are involved. First, a good foundation of knowledge and skill is required to appreciate fully what is occurring when a database request is handled within an RCAC enabled environment. Implementing the row permission or column masks involves the query optimizer and database engine. The process that identifies the rows that you have permission to access is considered a "query", and as such a query plan must be formulated. In the case of SQL requests, the RCAC portion of the query is combined with the user's query, much like a query referencing a view.
+For native record level access, this RCAC "query" is also built and used to test the permission. When a file is opened, the RCAC rule text logic is included, optimized, and run as part of the native read, write, update, or delete operation. The amount of work (and time) required to identify the record based on the user's permission is directly related to the complexity and depth of the logic that is needed to identify the records that can be returned.
+A simple example to illustrate this concept is a random read using a keyed logical file (that is, an index). In its purest form, a random read uses two data access methods: index probe (find the key and RRN) and table probe (find the record using RRN). If the RCAC rule text specifies five nested subqueries to determine whether the user has access to the record, this logic must be added to the path. The subquery processing now becomes part of the original "random read" request. Instead of two simple I/Os to retrieve the record, there can be a minimum of 12 I/Os to retrieve the same record (the original index probe and table probe, plus an index probe and table probe for each of the five subqueries). All of these I/Os can be performed only to find that the result is "not found" because the user is not entitled to any of the records.
+For programs that access records sequentially, in or out of key order, the added RCAC logic can have a profound effect on the performance and scalability. Reading the "next record" in order is no longer a simple matter of positioning to the next available key, as shown in Figure 6-23.
+
+Figure 6-23 Native record access with no RCAC
+
+Before the record, as identified by the key, is considered available, the RCAC logic must be run. If the record is rejected by RCAC, the next record in sequence that is permissible must be identified. This spinning through the records can take a long time and use many resources, as shown in Figure 6-24.
+
+Figure 6-24 Native record level access with RCAC
+
+After the row permissions and column masks are designed and implemented, adequate performance and scalability testing is recommended.
+6.7 Exclusive lock to implement RCAC (availability issues)
+When defining permissions or enabling RCAC, an exclusive lock on the base table is obtained. The impact to other applications depends on the order of create permission and the alter table to activate RCAC.
+Consider the following scenarios:
+GLYPH Scenario 1: Adding permissions and RCAC is not enabled on the table:
+-Job 1 reading data from the table (open for input) holds a *SHRRD on the member and a *SHRRD on the data.
+-Job 2 adding, updating, or deleting rows from table (open for output) holds a *SHRRD on the member and a *SHRUPD on the data.
+-Job 4 allocates the object and gets a *SHRRD on the file and a *EXCLRD on the data.
+-Job 3 attempts to add a permission to the table. The permission is added, and the pseudo-closed cursors for Job 1 and Job 2 are closed. Job 4 still holds the *SHRRD on the file and *EXCLRD on the data.
+The net result from Scenario 1 is that you can add permissions without having to end the applications that are reading the base table.
+GLYPH Scenario 2: Altering a table to activate RCAC requires that all applications using the table be ended. The alter table requires exclusive use of the table.
+GLYPH Scenario 3: Altering the table to activate RCAC before the permissions are added. The alter table requires exclusive use of the table, as in scenario 2. All applications must be ended to perform this alter. After the alter is complete, any applications trying to read data do not get any results, and attempts to insert new rows return the following message:
+[SQ20471] INSERT or UPDATE does not satisfy row permissions.
+Creating a permission in this case requires that you end all of the applications, unlike scenario 1, where permissions can be added while the applications remain active.
+6.8 Avoiding propagation of masked data
+Operations such as insert or update into a table with active column access control can fail if the input data is masked data. This can happen when data to be inserted or updated contains the masked value as a result of a SELECT from a table with active column access control.
+For example, assume that TABLE1 and TABLE2 have active column access control and that, for the user performing the insert, selecting data from TABLE2 returns masked data. The following INSERT returns an error:
+INSERT INTO TABLE1 SELECT * FROM TABLE2
+The masked data that is returned from the SELECT * FROM TABLE2 might not be valid input data for TABLE1 because of its data types or column check constraints.
+There are two ways to prevent this situation from happening: Define a check constraint or create a before trigger.
+6.8.1 Check constraint solution
+One way to prevent this problem is to define a check constraint.
+As part of RCAC, new SQL syntax is provided to allow an action to be performed when a violation of the check constraint's check condition occurs instead of giving that error. However, if the check condition is still not met after the action, a hard error is returned. A check constraint with the new on-violation-clause is allowed on both the CREATE TABLE and ALTER TABLE statements.
+In Example 6-4, the mask is defined to return a value of 'XXX-XX-nnnn' for any query that is not done by a user profile in the DBMGR group. The constraint checks that the column SSN does not have the masked value.
+Example 6-4 Check constraint to avoid masked data
+CREATE SCHEMA MY_LIB SET SCHEMA MY_LIB CREATE TABLE MY_LIB.EMP_INFO (COL1_name CHAR(10) WITH DEFAULT 'DEFAULT', COL2_ssn CHAR(11) WITH DEFAULT 'DEFAULT') CREATE MASK MASK_ssn ON MY_LIB.EMP_INFO FOR COLUMN COL2_ssn RETURN CASE WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'DBMGR' ) = 1 THEN COL2_ssn
+ELSE 'XXX-XX-'||SUBSTR(COL2_ssn,8,4) END ENABLE | /* Check constraint for the update and insert.*/ ALTER TABLE MY_LIB.EMP_INFO ADD CONSTRAINT MASK_ssn_preserve CHECK(SUBSTR(COL2_ssn,1,7)<>'XXX-XX-') -- Allow any value other than the mask ON UPDATE VIOLATION PRESERVE COL2_ssn -- Don't update the mask portion of the existing value ON INSERT VIOLATION SET COL2_ssn = DEFAULT -- for insert set this to the default value.
+6.8.2 Before trigger solution
+The actions that are described in Example 6-4 on page 108 for ON UPDATE VIOLATION and ON INSERT VIOLATION also can be handled by a before trigger, as shown in Example 6-5.
+Example 6-5 Before trigger to avoid masked data
+CREATE TRIGGER PREVENT_MASK_SSN BEFORE INSERT OR UPDATE ON MY_LIB.EMP_INFO REFERENCING NEW ROW AS N OLD ROW AS O FOR EACH ROW MODE DB2ROW SECURED WHEN(SUBSTR(N.COL2_ssn,1,7) = 'XXX-XX-') BEGIN IF INSERTING THEN SET N.COL2_ssn = DEFAULT; ELSEIF UPDATING THEN SET N.COL2_ssn = O.COL2_ssn; END IF; END
+6.9 Triggers and functions (SECURED)
+There are some considerations to take into account when triggers and functions exist on tables that have RCAC enabled. The purpose of SECURE for triggers and functions is so that a user who is allowed to create a trigger or function is not necessarily able to make it SECURE themselves. This prevents the trigger/function developer from adding code that skims off data that they are not allowed to see.
+6.9.1 Triggers
+Triggers have access to the data in rows outside of the row permission or column masking. An after trigger has access to the new row image after the permission has allowed the update or insert to occur. Therefore, the triggers can potentially change the insert or update image value so that it violates the permission.
+Any triggers that are defined on a table must be created with an attribute that designates that it is SECURED when RCAC definitions are created or altered for that table, as shown in Example 6-6. The same applies to a view that has an instead of trigger. That trigger must be secure at the point RCAC is enabled for any of the underlying tables the view is over.
+Example 6-6 Trigger SECURED
+/* Trigger created with the SECURED attribute */ CREATE TRIGGER PREVENT_MASK_SSN BEFORE INSERT OR UPDATE ON MY_LIB.EMP_INFO REFERENCING NEW ROW AS N OLD ROW AS O FOR EACH ROW MODE DB2ROW SECURED WHEN(SUBSTR(N.COL2_ssn,1,7) = 'XXX-XX-') BEGIN IF INSERTING THEN SET N.COL2_ssn = DEFAULT; ELSEIF UPDATING THEN SET N.COL2_ssn = O.COL2_ssn; END IF; END
+6.9.2 Functions
+Within a CREATE PERMISSION or CREATE MASK , a function can be called. Because that UDF has access to the data before the RCAC rules are applied, the SECURE attribute is required on that function, as shown in Example 6-7.
+Example 6-7 Specifying SECURED on a function
+CREATE PERMISSION SCHEMA.PERM1 ON SCHEMA.TABLE1 FOR ROWS WHERE MY_UDF(CURRENT_USER,COLUMN1) = 1 ENFORCED FOR ALL ACCESS ENABLE; CREATE FUNCTION MY_UDF (INP1 CHAR(32), INP2 INTEGER) Returns INTEGER LANGUAGE SQL CONTAINS SQL SECURED
+The SECURED attribute of MY_UDF signifies that the function is considered secure for RCAC. If a function is called from an SQL statement, and references a column in a table that has RCAC, it must be declared as secure. In that case, if the secure function calls other functions, they are not validated to confirm whether they are secure.
+Consider the following examples:
+GLYPH Table1 has RCAC defined and enabled. SELECT MY_UDF2(Column2) from schema.table1. MY_UDF2 must be created with the SECURED attribute. If MY_UDF2 invokes MY_UDF3, there is no checking to ensure that it is also created with SECURED. NOT SECURED is the default on the create function unless SECURED is explicitly selected.
+This same rule applies for any function that might be invoked with a masked column specified as an argument.
+GLYPH Table2 column SSN has a column mask that is defined on it. SELECT MY_UDF4(SSN) from table2. Because SSN has a column mask that is defined, MY_UDF4 must be created with the SECURED attribute.
+6.10 RCAC is only one part of the solution
+When designing and implementing RCAC row permissions, special attention should be given to the effectiveness and limitations of controlling data access. Data can be housed in objects other than tables or physical files. The role and responsibility of the database user, for example, the database engineer, must be reconciled with their respective authority and access privileges.
+Figure 6-25 illustrates that object level security is the first check and that RCAC permissions provide control only on tables and physical files.
+
+Figure 6-25 Object-level security and RCAC permissions
+
+To get access to the table and the rows, the user must pass the object level authority test and the RCAC permission test.
+The IBM i journal captures the transactional data and places an image of the row in the journal receiver. If the user has authority to the journal receiver, the row image can be viewed.
+Although the SQL Plan Cache data, the SQL Plan Cache Snapshot data, and the SQL Performance Monitor data do not reveal the results of queries, they can show the literal values that are passed along with the SQL statements.
+The ability to monitor, analyze, debug, and tune data-centric applications effectively and efficiently requires some understanding of the underlying data, or at least the attributes of the data. The organization must be willing to reconcile the conflicting requirements of "restricting access to data", and "needing access to data".
+
+Chapter 7.
+Row and Column Access Control management
+After Row and Column Access Control (RCAC) definitions are defined and activated in a database, your management processes must be adjusted to accommodate these new security controls. This chapter highlights some of the changes that should be considered.
+The following topics are covered in this chapter:
+GLYPH Managing row permissions and column masks
+GLYPH Managing tables with row permissions and column masks
+GLYPH Monitoring and auditing function usage
+7.1 Managing row permissions and column masks
+This section focuses on the management of the RCAC row permissions and column masks.
+7.1.1 Source management
+The SQL statements that are used to define row permissions and column masks should be managed with a change management process. Ideally, you already are using a change management process for your database definitions, and that same process can be extended to cover your RCAC definitions.
+If you are using SQL DDL to define your DB2 tables, then you have the option of adding the RCAC definitions to the same source file as the table definition. The benefit of this approach is that it keeps all DDL that is related to a table in a single source file. The downside is that if you must re-create only the RCAC definitions and leave the table unchanged, then you must identify and extract only the RCAC definitions from the source file. There are situations where the row permissions and column masks must be changed or re-created without changing the definition of the associated table.
+7.1.2 Modifying definitions
+After RCAC is activated for a table, the row permission and column mask definitions can be re-created to change the data access behavior for that table. Usage of the OR REPLACE clause on the CREATE MASK and CREATE PERMISSION SQL statements simplifies the re-creation process by folding in the deletion of the existing RCAC definition.
+This capability makes it easy to change your RCAC definitions as you test the controls with your applications and identify tweaks that must be made to your RCAC implementation. However, re-creation of RCAC definitions does require an exclusive lock to be acquired on the table during the process.
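+For example, the e-mail mask from the banking example (Example A-1) can be re-created in a single step by folding OR REPLACE into its original definition, as shown in the following sketch:
+CREATE OR REPLACE MASK BANK_SCHEMA.MASK_EMAIL_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C FOR COLUMN CUSTOMER_EMAIL
+RETURN CASE WHEN QSYS2.VERIFY_GROUP_FOR_USER ( SESSION_USER , 'ADMIN' ) = 1 THEN C.CUSTOMER_EMAIL
+WHEN QSYS2.VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1 THEN C.CUSTOMER_EMAIL
+ELSE '****@****' END ENABLE ;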
+7.1.3 Turning on and off
+As described in 3.1.2, "Enabling and activating RCAC" on page 16, the SQL ALTER statement can turn on and off row permissions and column masks. The ALTER MASK and ALTER PERMISSION statements allow an individual row permission or column mask to be turned off with the DISABLE option and back on with the ENABLE option. The ALTER TABLE statement can deactivate enforcement of all the row permissions and column masks for a single table.
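+The following sketch illustrates these statements by using the banking example's objects; the DEACTIVATE clauses are assumed to be the counterparts of the ACTIVATE clauses shown in Example A-1:
+ALTER MASK BANK_SCHEMA.MASK_EMAIL_ON_CUSTOMERS DISABLE ; -- turn off one column mask
+ALTER MASK BANK_SCHEMA.MASK_EMAIL_ON_CUSTOMERS ENABLE ; -- turn it back on
+ALTER PERMISSION BANK_SCHEMA.PERMISSION1_ON_CUSTOMERS DISABLE ;
+ALTER TABLE BANK_SCHEMA.CUSTOMERS DEACTIVATE ROW ACCESS CONTROL DEACTIVATE COLUMN ACCESS CONTROL ;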
+Important: Although these capabilities make it easy to temporarily turn off RCAC security so that you can make environment or application changes, these processes require an exclusive lock to be obtained on a table. Therefore, this activity must be planned carefully to avoid disruptions and outages.
+7.1.4 Regenerating
+DB2 also can regenerate an existing row permission or column mask. This regenerate option can be useful with more complex RCAC definitions that reference other DB2 objects.
+For example, consider a row permission on an ACCOUNTS table (PERMISSION1_ON_ACCOUNTS). The ACCOUNTS table row permission references and compares columns in the CUSTOMERS table. When the definition of the CUSTOMERS table changes, DB2 does not check to determine whether the change to the CUSTOMERS table breaks the ACCOUNTS table row permission. If this table definition change does break the row permission, an error does not surface until an application tries to read rows from the ACCOUNTS table.
+Instead of waiting for an application to detect this error, the REGENERATE option can be used on the ACCOUNTS row permission. The REGENERATE option returns an error if the change in the CUSTOMERS table definition causes the row permission to be invalid. In this way, the row permission can be proactively corrected before an application discovers the error.
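+A sketch of proactively regenerating the row permission that is described above follows; the ALTER PERMISSION ... REGENERATE form is assumed from the SQL reference, and the schema qualification follows the banking example:
+ALTER PERMISSION BANK_SCHEMA.PERMISSION1_ON_ACCOUNTS REGENERATE ;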
+7.2 Managing tables with row permissions and column masks
+This section examines the object management considerations after RCAC is added to a DB2 table.
+7.2.1 Save and restore
+Row permissions and column masks are stored in the DB2 table object itself, so they are automatically saved and restored when the DB2 table object is saved and restored. Therefore, no adjustments need to be made to your database backup process to accommodate RCAC.
+Save and restore processing works fine with RCAC if the RCAC definition does not reference DB2 objects other than the table over which it is defined. When the RCAC definition has dependencies on other DB2 objects, the restore process is much more challenging.
+For example, assume that the BANKSCHEMA library (which is the system name or short name for the schema long name of BANK_SCHEMA) is saved and restored into a library named BANK_TEST. Recall from the example in 7.1.4, "Regenerating" on page 114 that the row permission on the ACCOUNTS table references the CUSTOMERS table (… SELECT C.CUSTOMER_ID FROM CUSTOMERS C …). After the restore operation, the ACCOUNTS row permission still references the CUSTOMERS table in BANK_SCHEMA because DB2 explicitly qualifies all object references when the row permission or column mask is created. The restore processing does not change the explicit qualification from BANK_SCHEMA to BANK_TEST. As a result, the restored ACCOUNTS row permission now depends on DB2 objects residing in a different schema, even though it was not created that way originally. For more details, see Figure 7-1.
+
+Figure 7-1 Restoring tables to different schemas
+
+The only way to fix this issue is to re-create the row permission or column mask after the restore operation. Re-creation of the row permission or column mask is required only for definitions that reference other DB2 objects, but it is simpler to re-create all of the RCAC definitions instead of a subset. For example, generate the SQL using System i Navigator, clear the "Schema qualify names for objects" and select the "OR REPLACE clause", and then run the generated script.
+7.2.2 Table migration
+There are several IBM i CL commands, such as Move Object ( MOVOBJ ), Create Duplicate Object ( CRTDUPOBJ ), and Copy Library ( CPYLIB ), which are used to migrate a table from one library to another one. Often, this migration is done to create different versions of the table that can be used for development or testing purposes.
+The migration of a table with RCAC has the same challenges as restore processing. If the RCAC definition references other DB2 objects, then IBM i CL commands do not change the schema names that are explicitly qualified by the DB2 internal RCAC processing.
+Again, re-creating the row permission or column mask is the only way to fix the issue of references to DB2 objects in other schemas.
+7.3 Monitoring and auditing function usage
+Establishing proper roles for users, separating duties by using function usage IDs, and defining RCAC policies allows you to implement an effective and pervasive data access control scheme. However, how do you monitor and audit everyone who is involved in the implementation of that scheme? The answer is to use IBM i journaling. A special journal that is called QAUDJRN, also known as the audit journal, can provide a record and audit trail of many security-relevant events that occur on the system, including RCAC-related events.
+The tasks and operations of security administrators and database engineers who are collaborating can (and should) be effectively monitored and audited to ensure that the organization's data access control and governance policies are in place and enabled. For example, the Database Engineers can be involved in designing and developing functions and triggers that must be secured using the SECURE attribute. Otherwise, without properly securing functions and triggers, the RCAC controls can be bypassed.
+A new journal entry type of "AX" for journal entry code "T" (audit trail) is now used for RCAC. More information about the journaling of RCAC operations can be found in the following documents (a sample query for retrieving these entries follows the list):
+GLYPH IBM i Version 7.2 Journal Management Guide , found at:
+http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_72/rzaki/rzakiprintthis.htm?lang=en
+GLYPH IBM i Version 7.2 Security Reference Guide , found at:
+http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_72/rzarl/rzarlkickoff.htm?lang=en
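+As a sketch, the "AX" entries can be retrieved with SQL by using the QSYS2.DISPLAY_JOURNAL table function. The JOURNAL_CODES and JOURNAL_ENTRY_TYPES parameter names are assumptions that should be checked against the function's documentation for your release:
+SELECT * FROM TABLE ( QSYS2.DISPLAY_JOURNAL ( 'QSYS', 'QAUDJRN',
+JOURNAL_CODES => 'T', JOURNAL_ENTRY_TYPES => 'AX' ) ) AS AUDIT_ENTRIES ;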
+
+Chapter 8.
+Designing and planning for success
+Although successfully implementing Row and Column Access Control (RCAC) is based on knowledge and skills, designing and planning are fundamental aspects. This chapter describes the need for a deep understanding of the technology, and good design, proper planning, and adequate testing.
+The following topics are covered in this chapter:
+GLYPH Implementing RCAC with good design and proper planning
+GLYPH DB2 for i Center of Excellence
+8.1 Implementing RCAC with good design and proper planning
+By using RCAC, the row and column data that is returned to the requester can be controlled and governed by a set of data-centric policies that are defined with SQL and implemented within DB2 for i.
+RCAC provides fine-grained access control and is complementary to IBM i object-level security. With the new RCAC feature of DB2 for i, the database engineer, in partnership with the data owner and security officer, can ensure that users have access to the data based on their level of authorization and responsibility.
+This situation also can include separation of duties, such as allowing the application developers to design and implement the solutions, but restricting them from accessing the production data based on policy. Just because someone writes and owns the program, it does not mean that they have access to all the sensitive data that their program can potentially read.
+This paper has described the following pervasive power and advantages of RCAC:
+GLYPH Access can be controlled through simple or sophisticated logic.
+GLYPH Virtually no application changes are required.
+GLYPH The implementation of the access policy is part of the DB2 data access layer.
+GLYPH Table data is protected regardless of the interface that is used.
+GLYPH No user is inherently exempted from the access control policies.
+GLYPH Groups of users can share policies and permissions.
+A deep understanding of the technology, and proper planning, good design, adequate testing, and monitored deployment are critical for success. This includes the usage of quality assurance testing, and realistic performance and scalability exercises that serve to demonstrate that all of your requirements are being met. As part of the verification process, the usage of in-depth proofs of concepts and proofs of technology are recommended, if not essential. When RCAC is activated, the results of queries can change. Anticipating this change and realizing the effects of RCAC before going live are of the utmost importance.
+With the ever-growing value of data, and the vast and varied database technology that is available today, it is crucial to have a person or persons on staff who specialize in data-centric design, development, and deployment. This role and responsibility falls on the database engineer. With the availability of DB2 RCAC, the importance of full-time database engineering has grown.
+8.2 DB2 for i Center of Excellence
+To further assist you with understanding and implementing RCAC, the DB2 for i Center of Excellence team offers an RCAC education and consulting workshop. In addition to knowledge transfer, a working session allows for a review of your data access control requirements, review of the current environment, solution ideation, and high-level solution design.
+If you are interested in engaging with the DB2 for i Center of Excellence, contact Mike Cain at mcain@us.ibm.com .
+
+Appendix A.
+
+Database definitions for the RCAC banking example
+This appendix provides the database definitions or DDLs to re-create the Row and Column Access Control (RCAC) scenario that is described in Chapter 4, "Implementing Row and Column Access Control: Banking example" on page 37. The script that is shown in Example A-1 is the DDL script that is used to implement this example.
+Example A-1 DDL script to implement the RCAC banking example
+/* Database Definitions for RCAC Bank Scenario */ /* Schema */ CREATE SCHEMA BANK_SCHEMA FOR SCHEMA BANKSCHEMA ; /* Global Variable */ CREATE VARIABLE BANK_SCHEMA.CUSTOMER_LOGIN_ID VARCHAR( 30) ; LABEL ON VARIABLE BANK_SCHEMA.CUSTOMER_LOGIN_ID IS 'Customer''s log in value passed by web application' ; /* Tables */ CREATE TABLE BANK_SCHEMA.CUSTOMERS ( CUSTOMER_ID FOR COLUMN CUSTO00001 INTEGER GENERATED ALWAYS AS IDENTITY ( START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE NO CYCLE NO ORDER CACHE 20 ), CUSTOMER_NAME FOR COLUMN CUSTO00002 VARCHAR(30) CCSID 37 NOT NULL , CUSTOMER_ADDRESS FOR COLUMN CUSTO00003 VARCHAR(30) CCSID 37 NOT NULL , CUSTOMER_CITY FOR COLUMN CUSTO00004 VARCHAR(30) CCSID 37 NOT NULL , CUSTOMER_STATE FOR COLUMN CUSTO00005 CHAR(2) CCSID 37 NOT NULL , CUSTOMER_PHONE FOR COLUMN CUSTO00006 CHAR(10) CCSID 37 NOT NULL , CUSTOMER_EMAIL FOR COLUMN CUSTO00007 VARCHAR(30) CCSID 37 NOT NULL , CUSTOMER_TAX_ID FOR COLUMN CUSTO00008 CHAR(11) CCSID 37 NOT NULL , CUSTOMER_DRIVERS_LICENSE_NUMBER FOR COLUMN CUSTO00012 CHAR(13) CCSID 37 DEFAULT NULL , CUSTOMER_LOGIN_ID FOR COLUMN CUSTO00009 VARCHAR(30) CCSID 37 DEFAULT NULL , CUSTOMER_SECURITY_QUESTION FOR COLUMN CUSTO00010 VARCHAR(100) CCSID 37 DEFAULT NULL ,
+CUSTOMER_SECURITY_QUESTION_ANSWER FOR COLUMN CUSTO00011 VARCHAR(100) CCSID 37 DEFAULT NULL , INSERT_TIMESTAMP FOR COLUMN INSER00001 TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP IMPLICITLY HIDDEN , UPDATE_TIMESTAMP FOR COLUMN UPDAT00001 TIMESTAMP GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP NOT NULL IMPLICITLY HIDDEN , CONSTRAINT BANK_SCHEMA.CUSTOMER_ID_PK PRIMARY KEY( CUSTOMER_ID ) ) ; ALTER TABLE BANK_SCHEMA.CUSTOMERS ADD CONSTRAINT BANK_SCHEMA.CUSTOMER_LOGIN_ID_UK UNIQUE( CUSTOMER_LOGIN_ID ) ; ALTER TABLE BANK_SCHEMA.CUSTOMERS ADD CONSTRAINT BANK_SCHEMA.CUSTOMER_DRIVERS_LICENSE_CHECK CHECK( CUSTOMER_DRIVERS_LICENSE_NUMBER <> '*************' ) ON UPDATE VIOLATION PRESERVE CUSTOMER_DRIVERS_LICENSE_NUMBER ; ALTER TABLE BANK_SCHEMA.CUSTOMERS ADD CONSTRAINT BANK_SCHEMA.CUSTOMER_EMAIL_CHECK CHECK( CUSTOMER_EMAIL <> '****@****' ) ON UPDATE VIOLATION PRESERVE CUSTOMER_EMAIL ; ALTER TABLE BANK_SCHEMA.CUSTOMERS ADD CONSTRAINT BANK_SCHEMA.CUSTOMER_LOGIN_ID_CHECK CHECK( CUSTOMER_LOGIN_ID <> '*****' ) ON INSERT VIOLATION SET CUSTOMER_LOGIN_ID = DEFAULT ON UPDATE VIOLATION PRESERVE CUSTOMER_LOGIN_ID ; ALTER TABLE BANK_SCHEMA.CUSTOMERS ADD CONSTRAINT BANK_SCHEMA.CUSTOMER_SECURITY_QUESTION_CHECK CHECK( CUSTOMER_SECURITY_QUESTION_ANSWER <> '*****' ) ON INSERT VIOLATION SET CUSTOMER_SECURITY_QUESTION_ANSWER = DEFAULT ON UPDATE VIOLATION PRESERVE CUSTOMER_SECURITY_QUESTION_ANSWER ; ALTER TABLE BANK_SCHEMA.CUSTOMERS ADD CONSTRAINT BANK_SCHEMA.CUSTOMER_SECURITY_QUESTION_ANSWER CHECK( CUSTOMER_SECURITY_QUESTION <> '*****' ) ON INSERT VIOLATION SET CUSTOMER_SECURITY_QUESTION = DEFAULT ON UPDATE VIOLATION PRESERVE CUSTOMER_SECURITY_QUESTION ; ALTER TABLE BANK_SCHEMA.CUSTOMERS ADD CONSTRAINT BANK_SCHEMA.CUSTOMER_TAX_ID_CHECK CHECK( CUSTOMER_TAX_ID <> 'XXX-XX-XXXX' AND SUBSTR ( CUSTOMER_TAX_ID , 1 , 7 ) <> 'XXX-XX-' ) ON UPDATE VIOLATION PRESERVE CUSTOMER_TAX_ID ; CREATE TABLE BANK_SCHEMA.ACCOUNTS ( ACCOUNT_ID INTEGER GENERATED ALWAYS AS IDENTITY ( START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE NO CYCLE NO ORDER CACHE 20 ), CUSTOMER_ID FOR COLUMN CUSTID INTEGER NOT NULL , ACCOUNT_NUMBER FOR COLUMN ACCOUNTNO VARCHAR(50) CCSID 37 NOT NULL , ACCOUNT_NAME FOR COLUMN ACCOUNTNAM CHAR(12) CCSID 37 NOT NULL , ACCOUNT_DATE_OPENED FOR COLUMN OPENDATE DATE DEFAULT CURRENT_DATE , ACCOUNT_DATE_CLOSED FOR COLUMN CLOSEDATE DATE DEFAULT NULL , ACCOUNT_CURRENT_BALANCE FOR COLUMN ACCTBAL DECIMAL(11, 2) NOT NULL DEFAULT 0 , INSERT_TIMESTAMP FOR COLUMN INSDATE TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP IMPLICITLY HIDDEN , UPDATE_TIMESTAMP FOR COLUMN UPDDATE TIMESTAMP GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP NOT NULL IMPLICITLY HIDDEN , CONSTRAINT BANK_SCHEMA.ACCOUNT_ID_PK PRIMARY KEY( ACCOUNT_ID ) );
+ALTER TABLE BANK_SCHEMA.ACCOUNTS ADD CONSTRAINT BANK_SCHEMA.ACCOUNT_CUSTOMER_ID_FK
+  FOREIGN KEY( CUSTOMER_ID ) REFERENCES BANK_SCHEMA.CUSTOMERS ( CUSTO00001 )
+  ON DELETE RESTRICT ON UPDATE RESTRICT ;
+ALTER TABLE BANK_SCHEMA.ACCOUNTS ADD CONSTRAINT BANK_SCHEMA.ACCOUNT_NUMBER_CHECK
+  CHECK( ACCOUNT_NUMBER <> '*****' )
+  ON UPDATE VIOLATION PRESERVE ACCOUNT_NUMBER ;
+
+CREATE TABLE BANK_SCHEMA.TRANSACTIONS FOR SYSTEM NAME TRANS (
+  TRANSACTION_ID FOR COLUMN TRANS00001 INTEGER GENERATED ALWAYS AS IDENTITY
+    ( START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE NO CYCLE NO ORDER CACHE 20 ),
+  ACCOUNT_ID INTEGER NOT NULL ,
+  TRANSACTION_TYPE FOR COLUMN TRANS00002 CHAR(1) CCSID 37 NOT NULL ,
+  TRANSACTION_DATE FOR COLUMN TRANS00003 DATE NOT NULL DEFAULT CURRENT_DATE ,
+  TRANSACTION_TIME FOR COLUMN TRANS00004 TIME NOT NULL DEFAULT CURRENT_TIME ,
+  TRANSACTION_AMOUNT FOR COLUMN TRANS00005 DECIMAL(11, 2) NOT NULL ,
+  INSERT_TIMESTAMP FOR COLUMN INSER00001 TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP IMPLICITLY HIDDEN ,
+  UPDATE_TIMESTAMP FOR COLUMN UPDAT00001 TIMESTAMP GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP NOT NULL IMPLICITLY HIDDEN ,
+  CONSTRAINT BANK_SCHEMA.TRANSACTION_ID_PK PRIMARY KEY( TRANSACTION_ID ) ) ;
+
+ALTER TABLE BANK_SCHEMA.TRANSACTIONS ADD CONSTRAINT BANK_SCHEMA.TRANSACTIONS_ACCOUNT_ID_FK
+  FOREIGN KEY( ACCOUNT_ID ) REFERENCES BANK_SCHEMA.ACCOUNTS ( ACCOUNT_ID )
+  ON DELETE RESTRICT ON UPDATE RESTRICT ;
+
+/* Permissions and Masks */
+CREATE PERMISSION BANK_SCHEMA.PERMISSION1_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C
+  FOR ROWS WHERE ( QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'DBE', 'ADMIN', 'TELLER') = 1 )
+    OR ( QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'CUSTOMER') = 1
+         AND ( C.CUSTOMER_LOGIN_ID = BANK_SCHEMA.CUSTOMER_LOGIN_ID ) )
+  ENFORCED FOR ALL ACCESS ENABLE ;
+CREATE MASK BANK_SCHEMA.MASK_EMAIL_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C
+  FOR COLUMN CUSTOMER_EMAIL RETURN
+    CASE WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'ADMIN') = 1 THEN C.CUSTOMER_EMAIL
+         WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'CUSTOMER') = 1 THEN C.CUSTOMER_EMAIL
+         ELSE '****@****'
+    END ENABLE ;
+CREATE MASK BANK_SCHEMA.MASK_TAX_ID_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C
+  FOR COLUMN CUSTOMER_TAX_ID RETURN
+    CASE WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'ADMIN') = 1 THEN C.CUSTOMER_TAX_ID
+         WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'TELLER') = 1
+           THEN ( 'XXX-XX-' CONCAT QSYS2.SUBSTR( C.CUSTOMER_TAX_ID , 8 , 4 ) )
+         WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'CUSTOMER') = 1 THEN C.CUSTOMER_TAX_ID
+         ELSE 'XXX-XX-XXXX'
+    END ENABLE ;
+CREATE MASK BANK_SCHEMA.MASK_DRIVERS_LICENSE_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C
+  FOR COLUMN CUSTOMER_DRIVERS_LICENSE_NUMBER RETURN
+    CASE WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'ADMIN') = 1 THEN C.CUSTOMER_DRIVERS_LICENSE_NUMBER
+         WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'TELLER') = 1 THEN C.CUSTOMER_DRIVERS_LICENSE_NUMBER
+         WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'CUSTOMER') = 1 THEN C.CUSTOMER_DRIVERS_LICENSE_NUMBER
+         ELSE '*************'
+    END ENABLE ;
+CREATE MASK BANK_SCHEMA.MASK_LOGIN_ID_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C
+  FOR COLUMN CUSTOMER_LOGIN_ID RETURN
+    CASE WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'ADMIN') = 1 THEN C.CUSTOMER_LOGIN_ID
+         WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'CUSTOMER') = 1 THEN C.CUSTOMER_LOGIN_ID
+         ELSE '*****'
+    END ENABLE ;
+CREATE MASK BANK_SCHEMA.MASK_SECURITY_QUESTION_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C
+  FOR COLUMN CUSTOMER_SECURITY_QUESTION RETURN
+    CASE WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'ADMIN') = 1 THEN C.CUSTOMER_SECURITY_QUESTION
+         WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'CUSTOMER') = 1 THEN C.CUSTOMER_SECURITY_QUESTION
+         ELSE '*****'
+    END ENABLE ;
+CREATE MASK BANK_SCHEMA.MASK_SECURITY_QUESTION_ANSWER_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C
+  FOR COLUMN CUSTOMER_SECURITY_QUESTION_ANSWER RETURN
+    CASE WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'ADMIN') = 1 THEN C.CUSTOMER_SECURITY_QUESTION_ANSWER
+         WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'CUSTOMER') = 1 THEN C.CUSTOMER_SECURITY_QUESTION_ANSWER
+         ELSE '*****'
+    END ENABLE ;
+ALTER TABLE BANK_SCHEMA.CUSTOMERS ACTIVATE ROW ACCESS CONTROL ACTIVATE COLUMN ACCESS CONTROL ;
+CREATE PERMISSION BANK_SCHEMA.PERMISSION1_ON_ACCOUNTS ON BANK_SCHEMA.ACCOUNTS AS A
+  FOR ROWS WHERE ( QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'DBE', 'ADMIN', 'TELLER') = 1 )
+    OR ( QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'CUSTOMER') = 1
+         AND ( A.CUSTOMER_ID IN ( SELECT C.CUSTOMER_ID FROM BANK_SCHEMA.CUSTOMERS C
+                                  WHERE C.CUSTOMER_LOGIN_ID = BANK_SCHEMA.CUSTOMER_LOGIN_ID ) ) )
+  ENFORCED FOR ALL ACCESS ENABLE ;
+CREATE MASK BANK_SCHEMA.MASK_ACCOUNT_NUMBER_ON_ACCOUNTS ON BANK_SCHEMA.ACCOUNTS AS A
+  FOR COLUMN ACCOUNT_NUMBER RETURN
+    CASE WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'ADMIN') = 1 THEN A.ACCOUNT_NUMBER
+         WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'TELLER') = 1 THEN A.ACCOUNT_NUMBER
+         WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'CUSTOMER') = 1 THEN A.ACCOUNT_NUMBER
+         ELSE '*****'
+    END ENABLE ;
+ALTER TABLE BANK_SCHEMA.ACCOUNTS ACTIVATE ROW ACCESS CONTROL ACTIVATE COLUMN ACCESS CONTROL ;
+CREATE PERMISSION BANK_SCHEMA.PERMISSION1_ON_TRANSACTIONS ON BANK_SCHEMA.TRANSACTIONS AS T
+  FOR ROWS WHERE ( QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'DBE', 'ADMIN', 'TELLER') = 1 )
+    OR ( QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, 'CUSTOMER') = 1
+         AND ( T.ACCOUNT_ID IN ( SELECT A.ACCOUNT_ID FROM BANK_SCHEMA.ACCOUNTS A
+                                 WHERE A.CUSTOMER_ID IN ( SELECT C.CUSTOMER_ID FROM BANK_SCHEMA.CUSTOMERS C
+                                                          WHERE C.CUSTOMER_LOGIN_ID = BANK_SCHEMA.CUSTOMER_LOGIN_ID ) ) ) )
+  ENFORCED FOR ALL ACCESS ENABLE ;
+ALTER TABLE BANK_SCHEMA.TRANSACTIONS ACTIVATE ROW ACCESS CONTROL ;
+/* END */
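+As a minimal usage sketch of the definitions above (the login value 'JSMITH' and the group membership of the connecting user are assumptions for illustration, not part of the listing), the web application would first set the global variable and then query the protected tables:
+
+-- Hypothetical example: pass the authenticated customer's login to the database.
+SET BANK_SCHEMA.CUSTOMER_LOGIN_ID = 'JSMITH' ;
+
+-- With row access control active, a user enrolled only in the CUSTOMER group sees
+-- just the row whose CUSTOMER_LOGIN_ID matches the global variable; a user in none
+-- of the listed groups gets no rows, because no permission applies.
+SELECT CUSTOMER_ID , CUSTOMER_NAME , CUSTOMER_TAX_ID , CUSTOMER_EMAIL
+  FROM BANK_SCHEMA.CUSTOMERS ;
+
+-- For a TELLER-group user the same query returns every row, but column access
+-- control returns CUSTOMER_TAX_ID as 'XXX-XX-' plus its last four characters and
+-- CUSTOMER_EMAIL as '****@****', per the masks defined above.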
+Related publications
+The publications that are listed in this section are considered suitable for a more detailed description of the topics that are covered in this paper.
+Other publications
+These publications are relevant as further information sources:
+- IBM DB2 for i indexing methods and strategies white paper:
+http://www.ibm.com/partnerworld/wps/servlet/ContentHandler/stg_ast_sys_wp_db2_i_indexing_methods_strategies
+- IBM i Memo to Users Version 7.2:
+http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_72/rzahg/rzahgmtu.htm
+- IBM i Version 7.2 DB2 for i SQL Reference Guide:
+http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_72/db2/rbafzintro.htm?lang=en
+- IBM i Version 7.2 Journal Management Guide:
+http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_72/rzaki/rzakiprintthis.htm?lang=en
+- IBM i Version 7.2 Security Reference Guide:
+http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_72/rzarl/rzarlkickoff.htm?lang=en
+Online resources
+These websites are relevant as further information sources:
+- Database programming topic of the IBM i 7.2 IBM Knowledge Center:
+http://www-01.ibm.com/support/knowledgecenter/ssw_ibm_i_72/rzahg/rzahgdbp.htm?lang=en
+- Identity Theft Resource Center
+http://www.idtheftcenter.org
+- Ponemon Institute
+http://www.ponemon.org/
+Help from IBM
+IBM Support and downloads: ibm.com/support
+IBM Global Services: ibm.com/services
+Back cover
+Row and Column Access Control Support in IBM DB2 for i
+Implement roles and separation of duties
+Leverage row permissions on the database
+Protect columns by defining column masks
+This IBM Redpaper publication provides information about the Row and Column Access Control (RCAC) feature of IBM DB2 for i in IBM i 7.2. It offers a broad description of the function and advantages of controlling access to data in a comprehensive and transparent way. This publication helps you understand the capabilities of RCAC and provides examples of defining, creating, and implementing the row permissions and column masks in a relational database environment.
+This paper is intended for database engineers, data-centric application developers, and security officers who want to design and implement RCAC as a part of their data control and governance policy. A solid background in IBM i object level security, DB2 for i relational database concepts, and SQL is assumed.
+REDP-5110-00
+
+
+INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
+BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE
+IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.
+For more information: ibm.com/redbooks
+
\ No newline at end of file
diff --git a/tests/data/redp5110.json b/tests/data/redp5110.json
index 646de0ce..d7b9230b 100644
--- a/tests/data/redp5110.json
+++ b/tests/data/redp5110.json
@@ -1 +1 @@
-{"_name": "", "type": "pdf-document", "description": {"title": null, "abstract": null, "authors": null, "affiliations": null, "subjects": null, "keywords": null, "publication_date": null, "languages": null, "license": null, "publishers": null, "url_refs": null, "references": null, "publication": null, "reference_count": null, "citation_count": null, "citation_date": null, "advanced": null, "analytics": null, "logs": [], "collection": null, "acquisition": null}, "file-info": {"filename": "redp5110.pdf", "filename-prov": null, "document-hash": "3f8b6f0cb6d21ff16bdd7254c47ba72984b7ed1b70114e833c30f19be5366ad6", "#-pages": 146, "collection-name": null, "description": null, "page-hashes": [{"hash": "042dcdd712c3671577114114227f75ce1b5fe22a78e589c60b27d3c414ca914e", "model": "default", "page": 1}, {"hash": "19c7033f317f569819298dcaf98d4fd119632b01b323f3e244b6c14cd46b27b0", "model": "default", "page": 2}, {"hash": "1650a40ffe39a2240d05bdf5a7297a9e7de9c2564373213b732eb2009de23fd5", "model": "default", "page": 3}, {"hash": "fd0e00135169f317b2e2ab993cc64383dca2511f4a9e954563050a69dbefc35f", "model": "default", "page": 4}, {"hash": "dd607eefa7f279633dce503515463003c0167d6e1480e41daf39d95a03b02156", "model": "default", "page": 5}, {"hash": "69724844504d443f2f7dabc9d6cc912e26f1aba1fc51ddb2f248aa6f8da70505", "model": "default", "page": 6}, {"hash": "3ca620d960ef23d3419b3de71eb985eaa9bd54b7c1463116d4d11f64ab6515a8", "model": "default", "page": 7}, {"hash": "f360d9c1a29f5d9cc38f7a149b5e82ae9c177dedf534141f5d96d41792ccca01", "model": "default", "page": 8}, {"hash": "aaee7dcc87c982f44b3311ea587d9fee5d510de9567f84832e8b2effbf5e4c49", "model": "default", "page": 9}, {"hash": "f54ad5009578acd50e29ddf9e764f3894aef129245709bdda6695aca35080ef1", "model": "default", "page": 10}, {"hash": "35f70e10a2408e0395dfa9e894c5173186ac4481f414e41666e0be54f194accd", "model": "default", "page": 11}, {"hash": "64e97a3d553d9443178aae195f16f327cf503bb9c6930fe13af66b9fed277578", "model": "default", "page": 12}, {"hash": "995809366f67a29d338e5d08064a21a5bcda880bb0fe9d31085a3361059cf9ca", "model": "default", "page": 13}, {"hash": "b33a9cb89864b8461e994bc178c0f348722a75445a176a0ff059a1f1c6013c38", "model": "default", "page": 14}, {"hash": "37b17e27e1e6d405ed9c79a1282703930b1e8e1bff6b849a19ce614e5f874577", "model": "default", "page": 15}, {"hash": "ed6d8cc30effd85fb3a8b189732a80dd1d56dbc7fa4f079cd6d16f6084f4545a", "model": "default", "page": 16}, {"hash": "a355435891596f80e1ea7f3feef6b93a4f82caf62044e09a86e9ce2e02236715", "model": "default", "page": 17}, {"hash": "1d071bfa86d2d97bc7251f5f837deb4b3b72f422b79f76a83457210d40125b2a", "model": "default", "page": 18}, {"hash": "a74e58c9cd8ff01b37e4fe7df505cf495b9c1892db449b93e9076bb71fbd2ef2", "model": "default", "page": 19}, {"hash": "e83cbcc9e475190599ffc079b9266548d97fe0de76a0cb33c9fd50ef25237242", "model": "default", "page": 20}, {"hash": "c52304c295fd7f20396f82ab2bad8f0a085f067afc5692772fb9391ea880bcde", "model": "default", "page": 21}, {"hash": "86497e2615bb82251139e933e8e64153814e4ba46a499195083de8da6f5b89f9", "model": "default", "page": 22}, {"hash": "925398aa64327096c129a383e4bbec2eb083163878227c2d4e3166b44207fc03", "model": "default", "page": 23}, {"hash": "9d4e3d06a5f05410069b2b9486ec876c0e749fc8287c5d2c89940f4c44af96b5", "model": "default", "page": 24}, {"hash": "3956d5e714edf8547117687948339cc61c0727eaea2e2ad3b81e87963c1b73f0", "model": "default", "page": 25}, {"hash": "0bb0e09bd6e39cfc3da30376daecd1ad025ac38727078fd57ed04ab76e6dc8f3", "model": "default", "page": 26}, {"hash": 
"45005581d511136999fbc537f9465bb0b068b312ece0b9dcffe8f47a2af795fd", "model": "default", "page": 27}, {"hash": "4250019942cd107c8068cdf7c0c40c32f1735b6cd39e83eebd6b88f15f7af945", "model": "default", "page": 28}, {"hash": "d932d7afb19cda22b09acd96262695d080061df5f6f61323bbf3151b44707b0f", "model": "default", "page": 29}, {"hash": "bf6eb386ea506279669df237b54e8d789fa70b12d2830a42649632e5b057343f", "model": "default", "page": 30}, {"hash": "5dea54e30c89afe307a397ed24e083324991a1ddb17b94119f149183c1592cd7", "model": "default", "page": 31}, {"hash": "40fac6dd979f00f24fdcd1f07afad352b233f6926b8dfc8315e47c5304df1009", "model": "default", "page": 32}, {"hash": "40378b24c9b151d146ccd959a701dddfc8d9bac79a2075706c34d22dc185afd1", "model": "default", "page": 33}, {"hash": "935989acb8f1108365160d6428516b2b5cca95e12c75fb33818a33ad20730014", "model": "default", "page": 34}, {"hash": "570c8b11193a5b9e26d2b5a680c137cc6acbbb3c4c8dbfd02e96410f67444fab", "model": "default", "page": 35}, {"hash": "9f21fc6a00cee78376ee9fc31eb93ae5f0cde918f78b361f1ff0d2a1db7dfc01", "model": "default", "page": 36}, {"hash": "0e68f946bcdf7f573d88eed366216b5ba0ed470fcab1a783bcfb894802bf284e", "model": "default", "page": 37}, {"hash": "6ca7e5139b0a1993e0dd093698a9df1c1201091e509ec715d25c871c05a0863e", "model": "default", "page": 38}, {"hash": "c441267b99ad21ec04958ba35dcd465ce775b2c51c03ba67a4cfbb76f9955907", "model": "default", "page": 39}, {"hash": "aa16dbe8fa7fcd0634cf4930aa82a13c4f2d8621e759cec9c3097c15975551d2", "model": "default", "page": 40}, {"hash": "a1994f1ff203311afdc2424fedfad6f0429ccefb39ef62f7107ff75934404093", "model": "default", "page": 41}, {"hash": "92f8bad908b6a17adb727f822d8f77b673f79db90763faa32a648d89de97a0ae", "model": "default", "page": 42}, {"hash": "7cde568961d0f4ab1186b75a8d4f024a56b5065814f2050e7deda89fcb940064", "model": "default", "page": 43}, {"hash": "2d6e9fa06bae3a81449a646b629af6332dfc5780e5787e89a1eb491e60a8b95f", "model": "default", "page": 44}, {"hash": "c3c1468d8e9bbca1ac57cb97b7d6e191e3138cd98c919473a3deab89982d46fa", "model": "default", "page": 45}, {"hash": "3efc7b8e4918efef458011a9d564a062ba25e10f1b1998db385c746404995af2", "model": "default", "page": 46}, {"hash": "c96cd910329a52e1c256c61bafef7551e838ffe55cfc8de60ab8d1770a614d2a", "model": "default", "page": 47}, {"hash": "ed43a8e94b831c81406d263c7e72cb18279ff682bf82ca21d26bc8eaf58939b7", "model": "default", "page": 48}, {"hash": "beaba63670852ef3937e53edfd9c65e8381ccad289cf377ea1819ed4499649a5", "model": "default", "page": 49}, {"hash": "029387a73b937661bd354c45643d77243aae30a9e1dd692c26cadab54b33f630", "model": "default", "page": 50}, {"hash": "96cee9e611cde6da9b28630ae44aa4dddfb372bec1ad1400a4e5e0c641c18e9b", "model": "default", "page": 51}, {"hash": "d5f7a2c44833429eec81845b03adc589ed3fa9dbacfb90cbe3ac733cfb86306c", "model": "default", "page": 52}, {"hash": "0e398142d223dfaf46ad1d76702b89aa208b23fdc9f5fb7aaba1472a9db53b7b", "model": "default", "page": 53}, {"hash": "59664e9cadd6da670dd867311b1c5d9789cd944186e8ff42375b9719ddc43cf9", "model": "default", "page": 54}, {"hash": "5e4e6eaeafaf43a18590db6079f775401f7689d694cda14516fb000f7d85885c", "model": "default", "page": 55}, {"hash": "68496b0fe32a5149c0d6e70fef47ac02544a1db8176b6fa31c2c4bc59b35f933", "model": "default", "page": 56}, {"hash": "ac1bffe2a57f4b9f610dac9745f85bf8029c04e6279bae1fd942b030ca7e3635", "model": "default", "page": 57}, {"hash": "42616e9b91f856e761cf994d852d7c913e50b2fc00ce04e71cd28d51a4c88bf1", "model": "default", "page": 58}, {"hash": 
"4e9917d93adf25e36c0eeb37beb7881df8d8de40b23fdcde3f8c35e8867b4f7b", "model": "default", "page": 59}, {"hash": "7a484f738feda7e2327ce3bae87e5989b008d1309008f5fc237a681be7b4780c", "model": "default", "page": 60}, {"hash": "2957be6c48ca15c71ae2d63191e3ec999a65771e444c197828a2efe54aad7dee", "model": "default", "page": 61}, {"hash": "81d885ff0652b16f490f2bdf49bf5b2f85bdea4ea7dc85f98de238b437812522", "model": "default", "page": 62}, {"hash": "c0a9752603b861a7c13d678d1c89174f140ae5ef1fc4af32a872ae99bd09b494", "model": "default", "page": 63}, {"hash": "9fa129577bad65520977b6742108edd287a8413c1f002a0fcde9e8d4649e5ca3", "model": "default", "page": 64}, {"hash": "720722b50e586615b5a55451ec49b89048aecbb7450b7bf952ab7b8cab856b63", "model": "default", "page": 65}, {"hash": "91c76d552d29f2d09c34608319dd7729bd1309ccfadd56f22a00d25e8bbce771", "model": "default", "page": 66}, {"hash": "d9a6a973665fd160fb9cf52d6444cd4be6bf5a977666b625f58858ba507b0ee2", "model": "default", "page": 67}, {"hash": "dcc11d3809231dfdbe15f28126c3c6c7016f0d239c48829860133e645f0b4e9e", "model": "default", "page": 68}, {"hash": "18f5746455a39ff66f0d83bf5dcc45151e5313ccf038da38b25195a135445d23", "model": "default", "page": 69}, {"hash": "6f150521a19ebcc1dc711a861d26a1447ee33c01d770b6e985ed23ac4c3bce0b", "model": "default", "page": 70}, {"hash": "2675ed680861667ca9a8eb01fffa6b1ffc5c682d1217a7ee211ee1a14f066301", "model": "default", "page": 71}, {"hash": "cc1b3ad555bc13b0266cc1dd1646f6703b96043a17865254191fb28200897100", "model": "default", "page": 72}, {"hash": "d69dc0543126dbc6d00e1e8ce512bbf99efcda00f45cae9ab93877fc9e833308", "model": "default", "page": 73}, {"hash": "3afbdd3081b903b7941e16a1b3e0feebb23b70fa6a850e3b1119172763263fdb", "model": "default", "page": 74}, {"hash": "9ab6f9e4fd7c147650dbf4b3226a4805d3e3a86af0be0496be4cbd7eb2fe38dc", "model": "default", "page": 75}, {"hash": "3cd1d3fe8ed3a77aeaf1b68c9faa81fdc1209f44b20dd695826bfb009497af91", "model": "default", "page": 76}, {"hash": "3e0d46cb61ec6ec6ba1aa5f21e61d8988b7c531c3928c1cfa2ea5a35c5f7556f", "model": "default", "page": 77}, {"hash": "1d2d26c6366591fa7103e6920121f20b7d47e252f8e5598bc9b0d10d88b0a876", "model": "default", "page": 78}, {"hash": "6b74896cf6d9d79d6eea588138972973314a1e883e4a92eb39533e096e5fea4c", "model": "default", "page": 79}, {"hash": "2b53410a79b04ddd9d95ca46742e1916b631d56c91e67426449a2f48303233c9", "model": "default", "page": 80}, {"hash": "1cad2f44f63e2c43c0950ba8863f3a3d0f2f4afa1ae6f9ca2ceb992a34061d98", "model": "default", "page": 81}, {"hash": "1fd53dcb8bd415d94cbebe26f4938b10551f29603658e5d92b9932d2179878ba", "model": "default", "page": 82}, {"hash": "4ef9b11fb0f67f1227d7241f38a68b1e7d12cccb90802424b6fc139e84e73241", "model": "default", "page": 83}, {"hash": "1c2ea11640d6d0298f383f42acc541cee1d082453dc6c201fbd0dfe2c3583a6d", "model": "default", "page": 84}, {"hash": "fe89905acb289f8126f56f0fa57b0032cf459757a285a28e18a4fa79d0f37ff5", "model": "default", "page": 85}, {"hash": "897bc2fcbbd0147b2ad32d7130836346100dd1f483bb904be454bddee79032d3", "model": "default", "page": 86}, {"hash": "c8e638b82bad37d6d6528852ca8f58d16aa6de3ae113f9f59cc061591bbe36d4", "model": "default", "page": 87}, {"hash": "c8b4dcf9ac58518dfd7a0030612750ef310992ecfa1352cc501a3183eddc63ac", "model": "default", "page": 88}, {"hash": "311e4dab810a715c0dd964b03c57ef59105b844638789454a5a31285bb20b6c5", "model": "default", "page": 89}, {"hash": "bcc127d2a49aaeb213cddec0bef6623f19a01d5ea42b6f7495b4f803405c42f6", "model": "default", "page": 90}, {"hash": 
"bb0ab5360776e0488e57ac48e39d6e0df6200c2570723dcb807ad3f679c09534", "model": "default", "page": 91}, {"hash": "6fd7cdacf0d19eda989b99c3b1e02ef6d6643dbc6cfa6f10037bd0ebb7cd10b5", "model": "default", "page": 92}, {"hash": "d33f0c4ae60d66663fa25b1f7675c11437badaa8a8fa7e51daeebc6141df12ed", "model": "default", "page": 93}, {"hash": "315310a543a8ecc45c434d0e0b8aa54c6566d53d61acb74820a6649e583f9cb2", "model": "default", "page": 94}, {"hash": "38d412966dfe997ab9448d2df046448e5ebbedd2531b8527bd744c8bb5440508", "model": "default", "page": 95}, {"hash": "08d37d1668223a1a7194cf811cd594cfe30e422dd1695df02a8b73a7b735084b", "model": "default", "page": 96}, {"hash": "31d9ea5f81342dbfdc72492243a2e7f0aa9817d84d61eab0181aeaa71d75d7f5", "model": "default", "page": 97}, {"hash": "1cb53ff64bc87e1939f8b45a89a00a6267a02e718ec0c634cf7e20936ffdd4f2", "model": "default", "page": 98}, {"hash": "1245402b982e1a9d1065ac0c0cad30336aa14ecdc2cb3ef4a5c36bc55e9bbd10", "model": "default", "page": 99}, {"hash": "c38f21714819257f54186f075bf6b9446113e03dd6d40e5fd1319fd5cd3c359c", "model": "default", "page": 100}, {"hash": "9bb82caef77080aa11554e67ab1f214e5cf5e8fe2415663d128ba541cf314d5b", "model": "default", "page": 101}, {"hash": "714f390df026d13c65dea02894cf3d91496fd2ae3a94073d90f7714df79d47ee", "model": "default", "page": 102}, {"hash": "f19f8a6e418fdf2a42d8ede7c788f9f8cf33b907e3bb606e9c829320dff3bb5f", "model": "default", "page": 103}, {"hash": "2b15ecb09a734a16ed9804314a6cc9f03a12af63a904fac62a97ea21b1d2ecef", "model": "default", "page": 104}, {"hash": "8b15d46f01007cf63e5bad57b8cd889275c11e6b58bebe48ffec8842d67e7277", "model": "default", "page": 105}, {"hash": "f20a188209524e8fd1692faa3d3450cd075bb45f2962693371867cf166456dc1", "model": "default", "page": 106}, {"hash": "2d4dbf9c96c18bffaeb3b1bd321acea187066e968dd034c585a81a547f4c93c1", "model": "default", "page": 107}, {"hash": "0ef40f53d56676acaf1aef17676d06262391f04c8277eb1ba32ab7ca5d97e875", "model": "default", "page": 108}, {"hash": "25dbff770b7e10a2a2e2668b2f2977d99ed53ed37d3390e1f89d9245abf83e72", "model": "default", "page": 109}, {"hash": "2572c0b17f240729b504355e11e0d2009a92925a1faaa7b66aea649dc59d7905", "model": "default", "page": 110}, {"hash": "a3e79679ca89ec169e9967808ff8b3f9c2c2db25c113cb68c3f3a993eef15408", "model": "default", "page": 111}, {"hash": "5a47310eb886fad70101ea30ef05dee49cbda1d8a7e2446c3c61b66b3f634039", "model": "default", "page": 112}, {"hash": "992b747ebf8d366fcc11d36599c33ed004584f000855942db59e5a30dd625c7c", "model": "default", "page": 113}, {"hash": "f0bb099090288d2d8c2dad45a22598a924b5c8c3b739206496022a8985d56e25", "model": "default", "page": 114}, {"hash": "5d4e2ca3c369a87ae1732a86f0553fe650005db4637a792963f02fee28a3f1dd", "model": "default", "page": 115}, {"hash": "d23e9d367ce0fa476a6c89009c6fc6c8dd8e15dac6c21b1457a87c8ea89fc6ab", "model": "default", "page": 116}, {"hash": "2ed8bcad41539c0196738efdced854e4c0c11736a062c2bb382517307308315f", "model": "default", "page": 117}, {"hash": "a6d6fd7589a6dddaea1ae0ee683f34ba67d229ad1489d43cd55ab4bfa0a09e48", "model": "default", "page": 118}, {"hash": "4373bdfba2b9cb9f431054a081bcdbb9fde02a2a7c555237105645fc7c4300c6", "model": "default", "page": 119}, {"hash": "b9ba9a2d9c6e8fae2ae668710eb75f4e32a1debfca93371c7d2b12c849bd22da", "model": "default", "page": 120}, {"hash": "f0ac55799e80466c2f68c00232e96f16c893b304c5af92380071564bfd79cc2f", "model": "default", "page": 121}, {"hash": "a619cca5375467d6cbf87c25836da41e5a09dcab342c685b34539dd82fe86989", "model": "default", "page": 122}, {"hash": 
"d5eb13189c1badbc8317352c3077a84871640f1c42ba8d544f2b66e9788940b4", "model": "default", "page": 123}, {"hash": "5328248231376143041b9f94792b736e39d597c55126949b59362f6464ea0a04", "model": "default", "page": 124}, {"hash": "5201845b41de7b7c02c15934aa48093d9c3b7dd783a32f1f6887d16ab27736fd", "model": "default", "page": 125}, {"hash": "53ef8bd7beea5d3619cc02586077a54911c327d5b912872da834d7e26cbddda7", "model": "default", "page": 126}, {"hash": "eb5a30dbe63c79925f80db77000a9ae325904111ec3a76d12f0eabe9ea8184b5", "model": "default", "page": 127}, {"hash": "ea0c7446fc6d2d362e73d4581e7b8ad4608d1a569eaf7728b2565e9a62bfacc2", "model": "default", "page": 128}, {"hash": "ce7040d1ddf6c4ad312a07c56ce385cc338cb6dad98a350a3145fa651df24e10", "model": "default", "page": 129}, {"hash": "a59661e9111d2f306b39d51a1d1c2b60fafa5a0053a15e5c4df080974b4b9c8e", "model": "default", "page": 130}, {"hash": "e0eebbd57c73414b07cd40507f8b0dc3e30b7621a4da103a1b11b98178d614da", "model": "default", "page": 131}, {"hash": "663d5c537942f854d04a288e7cddc273cb931a1671b07345cf6fbd87593e6960", "model": "default", "page": 132}, {"hash": "ee15d566c88e74395f5c9cf500a25235527c226a22ac85bd940113a29690fcd3", "model": "default", "page": 133}, {"hash": "16dcf411e2a595080c73aa2c3aac658c7ea34947642e9f5d74b30637a8232ba0", "model": "default", "page": 134}, {"hash": "d06b834379d4d7edede6ad45cab9324d8ed03f6553a6ace9eef8ee2911517eae", "model": "default", "page": 135}, {"hash": "f39abd05ea9ae74cdd31f3fe7fc2cafb94364c90ff8f85b38fd763e0b4f00492", "model": "default", "page": 136}, {"hash": "c8cc8d0266caeb8d3547582e443238d020cc2b89b9b0a27881fa53a2d53eb373", "model": "default", "page": 137}, {"hash": "5df7c7769a47c31ede50376223cd8c64a630f146185eabfd69e6def4904d11e9", "model": "default", "page": 138}, {"hash": "752a8ff175ffefd5467eb28072d1ae016e4f2d121a42de192874c1314d8782af", "model": "default", "page": 139}, {"hash": "80196ef5402921f88f9a620eecc70cd40660a88bc53f0d7b41932ef750af8cf8", "model": "default", "page": 140}, {"hash": "e0675b1f0bfe007f57df25c89b6606a7fb711a9a2aea0b6ab3ed7f0c344938d9", "model": "default", "page": 141}, {"hash": "34c60aca3232bf01b5bcc0d4f745ecba5742a056e7cd56e78e733d27165319f5", "model": "default", "page": 142}, {"hash": "8add7158d438c17581bf11a58d377832b87438adddd357fc1df9627a01bb050c", "model": "default", "page": 143}, {"hash": "c6bfbf013724102c875b7177a50d9eeebd48325dc2c1ff163e018a5d86b4b638", "model": "default", "page": 144}, {"hash": "6272edb80b7baf8c345cdc69fd8b613712da5cca430baeee8b2bf74383b20940", "model": "default", "page": 145}, {"hash": "637ac3e09c925390e82504f989601641999e308491f5cd0cd8db2a22021a5412", "model": "default", "page": 146}]}, "main-text": [{"text": "Front cover", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [287.82000732421875, 741.251953125, 418.83355712890625, 763.4519653320312], "page": 1, "span": [0, 11], "__ref_s3_data": null}]}, {"name": "Picture", "type": "figure", "$ref": "#/figures/0"}, {"text": "Row and Column Access Control Support in IBM DB2 for i", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [35.70000076293945, 625.8219604492188, 584.6428833007812, 709.2680053710938], "page": 1, "span": [0, 54], "__ref_s3_data": null}]}, {"name": "Picture", "type": "figure", "$ref": "#/figures/1"}, {"text": "ibm.com /redbooks", "type": "page-footer", "name": "Page-footer", "font": null, "prov": [{"bbox": [36.900001525878906, 26.895000457763672, 164.45849609375, 42.13602828979492], "page": 1, "span": [0, 17], "__ref_s3_data": null}]}, {"name": 
"Picture", "type": "figure", "$ref": "#/figures/2"}, {"name": "Picture", "type": "figure", "$ref": "#/figures/3"}, {"text": "International Technical Support Organization", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [191.8931884765625, 706.8230590820312, 468.1595153808594, 720.9096069335938], "page": 3, "span": [0, 44], "__ref_s3_data": null}]}, {"text": "Row and Column Access Control Support in IBM DB2 for i", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [191.5712432861328, 659.2655639648438, 551.7711181640625, 688.3182373046875], "page": 3, "span": [0, 54], "__ref_s3_data": null}]}, {"text": "November 2014", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [191.92127990722656, 629.265869140625, 290.98956298828125, 642.7371215820312], "page": 3, "span": [0, 13], "__ref_s3_data": null}]}, {"text": "REDP-5110-00", "type": "page-footer", "name": "Page-footer", "font": null, "prov": [{"bbox": [479.2291259765625, 27.93828010559082, 547.263671875, 38.04776382446289], "page": 3, "span": [0, 12], "__ref_s3_data": null}]}, {"text": "Note: Before using this information and the product it supports, read the information in \"Notices\" on page vii.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [70.37338256835938, 680.7003173828125, 511.2250671386719, 703.3181762695312], "page": 4, "span": [0, 111], "__ref_s3_data": null}]}, {"text": "First Edition (November 2014)", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [64.45094299316406, 96.07437896728516, 206.09754943847656, 106.79737091064453], "page": 4, "span": [0, 29], "__ref_s3_data": null}]}, {"text": "This edition applies to Version 7, Release 2 of IBM i (product number 5770-SS1).", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.08177947998047, 73.64718627929688, 422.2424621582031, 83.91992950439453], "page": 4, "span": [0, 80], "__ref_s3_data": null}]}, {"text": "' Copyright International Business Machines Corporation 2014. All rights reserved.", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [63.635929107666016, 44.85982894897461, 426.39117431640625, 54.95832443237305], "page": 4, "span": [0, 82], "__ref_s3_data": null}]}, {"text": "Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.18267822265625, 23.176387786865234, 547.2008666992188, 43.96644592285156], "page": 4, "span": [0, 136], "__ref_s3_data": null}]}, {"text": "Contents", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [64.80000305175781, 695.9519653320312, 168.73440551757812, 718.7908325195312], "page": 5, "span": [0, 8], "__ref_s3_data": null}]}, {"name": "Table", "type": "table", "$ref": "#/tables/0"}, {"text": "' Copyright IBM Corp. 2014. 
All rights reserved.", "type": "page-footer", "name": "Page-footer", "font": null, "prov": [{"bbox": [63.926761627197266, 27.811120986938477, 257.24334716796875, 37.25619888305664], "page": 5, "span": [0, 48], "__ref_s3_data": null}]}, {"text": "iii", "type": "page-footer", "name": "Page-footer", "font": null, "prov": [{"bbox": [538.4729614257812, 27.93828010559082, 547.25927734375, 38.0196647644043], "page": 5, "span": [0, 3], "__ref_s3_data": null}]}, {"text": "iv", "type": "page-footer", "name": "Page-footer", "font": null, "prov": [{"bbox": [64.56709289550781, 27.93828010559082, 75.64199829101562, 37.95931625366211], "page": 6, "span": [0, 2], "__ref_s3_data": null}]}, {"text": "Row and Column Access Control Support in IBM DB2 for i", "type": "page-footer", "name": "Page-footer", "font": null, "prov": [{"bbox": [90.20014190673828, 27.85855484008789, 331.77874755859375, 37.22001647949219], "page": 6, "span": [0, 54], "__ref_s3_data": null}]}, {"name": "Table", "type": "table", "$ref": "#/tables/1"}, {"name": "Table", "type": "table", "$ref": "#/tables/2"}, {"text": "Contents", "type": "page-footer", "name": "Page-footer", "font": null, "prov": [{"bbox": [488.2200012207031, 28.136999130249023, 529.1115112304688, 37.02998352050781], "page": 7, "span": [0, 8], "__ref_s3_data": null}]}, {"text": "v", "type": "page-footer", "name": "Page-footer", "font": null, "prov": [{"bbox": [541.4024658203125, 27.93828010559082, 547.3956298828125, 37.15127944946289], "page": 7, "span": [0, 1], "__ref_s3_data": null}]}, {"text": "vi", "type": "page-footer", "name": "Page-footer", "font": null, "prov": [{"bbox": [64.29622650146484, 27.93828010559082, 75.64199829101562, 37.651676177978516], "page": 8, "span": [0, 2], "__ref_s3_data": null}]}, {"text": "Row and Column Access Control Support in IBM DB2 for i", "type": "page-footer", "name": "Page-footer", "font": null, "prov": [{"bbox": [90.30646514892578, 27.79586410522461, 331.6808776855469, 37.322059631347656], "page": 8, "span": [0, 54], "__ref_s3_data": null}]}, {"text": "Notices", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [64.80000305175781, 695.9519653320312, 151.5048065185547, 718.7636108398438], "page": 9, "span": [0, 7], "__ref_s3_data": null}]}, {"text": "This information was developed for products and services offered in the U.S.A.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.18147277832031, 649.8180541992188, 413.7007141113281, 660.0758666992188], "page": 9, "span": [0, 78], "__ref_s3_data": null}]}, {"text": "IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.14546966552734, 579.6738891601562, 547.235595703125, 640.0175170898438], "page": 9, "span": [0, 625], "__ref_s3_data": null}]}, {"text": "IBM may have patents or pending patent applications covering subject matter described in this document. 
The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.0940933227539, 540.159912109375, 547.2992553710938, 570.1964721679688], "page": 9, "span": [0, 232], "__ref_s3_data": null}]}, {"text": "IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.593505859375, 529.7247314453125, 489.1996154785156, 540.0978393554688], "page": 9, "span": [0, 92], "__ref_s3_data": null}]}, {"text": "The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION \"AS IS\" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.16057586669922, 459.4730224609375, 547.1917114257812, 520.091796875], "page": 9, "span": [0, 541], "__ref_s3_data": null}]}, {"text": "This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [63.943748474121094, 410.14208984375, 547.2783813476562, 449.93365478515625], "page": 9, "span": [0, 345], "__ref_s3_data": null}]}, {"text": "Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [63.966217041015625, 369.6625671386719, 539.7974243164062, 400.06964111328125], "page": 9, "span": [0, 286], "__ref_s3_data": null}]}, {"text": "IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.32443237304688, 339.65264892578125, 547.1986694335938, 360.1954650878906], "page": 9, "span": [0, 135], "__ref_s3_data": null}]}, {"text": "Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. 
Users of this document should verify the applicable data for their specific environment.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.14064025878906, 269.77093505859375, 544.1587524414062, 329.7679443359375], "page": 9, "span": [0, 526], "__ref_s3_data": null}]}, {"text": "Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.13702392578125, 219.69473266601562, 547.231689453125, 259.8896789550781], "page": 9, "span": [0, 408], "__ref_s3_data": null}]}, {"text": "This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.02989196777344, 169.76266479492188, 545.7865600585938, 209.7733154296875], "page": 9, "span": [0, 359], "__ref_s3_data": null}]}, {"text": "COPYRIGHT LICENSE:", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [64.42018127441406, 150.16415405273438, 172.49951171875, 160.39039611816406], "page": 9, "span": [0, 18], "__ref_s3_data": null}]}, {"text": "This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.03350067138672, 79.5408706665039, 547.2437744140625, 140.08206176757812], "page": 9, "span": [0, 619], "__ref_s3_data": null}]}, {"text": "' Copyright IBM Corp. 2014. All rights reserved.", "type": "page-footer", "name": "Page-footer", "font": null, "prov": [{"bbox": [63.92543411254883, 27.7843074798584, 257.24334716796875, 37.34343719482422], "page": 9, "span": [0, 48], "__ref_s3_data": null}]}, {"text": "vii", "type": "page-footer", "name": "Page-footer", "font": null, "prov": [{"bbox": [535.465576171875, 27.93828010559082, 547.250244140625, 37.77464294433594], "page": 9, "span": [0, 3], "__ref_s3_data": null}]}, {"text": "Trademarks", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [64.19252014160156, 706.0162963867188, 154.14569091796875, 721.5706787109375], "page": 10, "span": [0, 10], "__ref_s3_data": null}]}, {"text": "IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. 
These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (fi or \u2122), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.04251861572266, 629.2591552734375, 547.2604370117188, 689.3146362304688], "page": 10, "span": [0, 591], "__ref_s3_data": null}]}, {"text": "The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.07420349121094, 599.2596435546875, 546.6150512695312, 619.2008666992188], "page": 10, "span": [0, 133], "__ref_s3_data": null}]}, {"name": "Table", "type": "table", "$ref": "#/tables/3"}, {"text": "The following terms are trademarks of other companies:", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.15382385253906, 537.2783203125, 311.9006652832031, 547.204833984375], "page": 10, "span": [0, 54], "__ref_s3_data": null}]}, {"text": "Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [63.90792465209961, 507.27880859375, 509.53704833984375, 527.1090698242188], "page": 10, "span": [0, 117], "__ref_s3_data": null}]}, {"text": "Other company, product, or service names may be trademarks or service marks of others.", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.3842544555664, 486.98126220703125, 464.51568603515625, 497.27496337890625], "page": 10, "span": [0, 86], "__ref_s3_data": null}]}, {"text": "viii", "type": "page-footer", "name": "Page-footer", "font": null, "prov": [{"bbox": [63.940345764160156, 26.91827964782715, 81.16200256347656, 36.210243225097656], "page": 10, "span": [0, 4], "__ref_s3_data": null}]}, {"text": "Row and Column Access Control Support in IBM DB2 for i", "type": "page-footer", "name": "Page-footer", "font": null, "prov": [{"bbox": [95.68927764892578, 26.413494110107422, 337.0337829589844, 36.1352424621582], "page": 10, "span": [0, 54], "__ref_s3_data": null}]}, {"text": "DB2 for i Center of Excellence", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [64.80000305175781, 706.416015625, 235.86239624023438, 717.5160522460938], "page": 11, "span": [0, 30], "__ref_s3_data": null}]}, {"text": "Solution Brief IBM Systems Lab Services and Training", "type": "paragraph", "name": "Text", "font": null, "prov": [{"bbox": [93.55310821533203, 636.66357421875, 234.06729125976562, 654.3007202148438], "page": 11, "span": [0, 52], "__ref_s3_data": null}]}, {"text": "Highlights", "type": "subtitle-level-1", "name": "Section-header", "font": null, "prov": [{"bbox": [144.47474670410156, 454.5254211425781, 188.74681091308594, 464.9404296875], "page": 11, "span": [0, 10], "__ref_s3_data": null}]}, {"text": "GLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPH GLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH", "type": "paragraph", "name": "List-item", "font": null, "prov": 
[{"bbox": [144.74562072753906, 433.3105773925781, 242.87388610839844, 447.85009765625], "page": 11, "span": [0, 532], "__ref_s3_data": null}]}, {"text": "GLYPHGLYPH GLYPHGLYPHGLYPH GLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPH GLYPHGLYPHGLYPH GLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH GLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPH GLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [144.467529296875, 402.7626953125, 259.22869873046875, 425.5424499511719], "page": 11, "span": [0, 876], "__ref_s3_data": null}]}, {"text": "GLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPH GLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH", "type": "paragraph", "name": "List-item", "font": null, "prov": [{"bbox": [144.52346801757812, 379.9961242675781, 249.8356170654297, 394.7245788574219], "page": 11, "span": [0, 672], "__ref_s3_data": null}]}, {"text": "GLYPHGLYPH GLYPH GLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH GLYPHGLYPHGLYPH GLYPH GLYPH GLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPHGLYPH