AI Challenge - SQuAD 1.1 with BERT-Base Guidelines

  • PLEASE SEE UPDATES IN SECTION 1.6

Introduction

Language understanding is an ongoing challenge and one of the most relevant and influential research areas across industries.

Bidirectional Encoder Representations from Transformers (BERT) is a new method of pre-training language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks. When BERT was originally published, it achieved state-of-the-art performance on eleven natural language understanding tasks.

BERT is a method of pre-training language representations, meaning that we train a general-purpose "language understanding" model on a large text corpus (like Wikipedia), and then use that model for downstream NLP tasks that we care about (like question answering). BERT outperforms previous methods because it is the first unsupervised, deeply bidirectional system for pre-training NLP.

 

1  SQuAD 1.1 with TensorFlow BERT-BASE

1.1 About the application and benchmarks

This guide is to be used as a starting point. It does not provide detailed guidance on optimizations and additional tuning. Please follow the guidelines in the Challenge Limitations section (1.3) of this document.

1.1.1  About BERT-BASE

BERT, or Bidirectional Encoder Representations from Transformers, is a new method of pre-training language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks.

BERT is a method of pre-training language representations, meaning that we train a general-purpose "language understanding" model on a large text corpus (like Wikipedia), and then use that model for downstream NLP tasks that we care about (like question answering). BERT outperforms previous methods because it is the first unsupervised, deeply bidirectional system for pre-training NLP.

BERT’s model architecture is a multi-layer bidirectional Transformer encoder based on the original implementation described in Vaswani et al. (2017) and released in the tensor2tensor library. The architecture of BERT is almost identical to the original Transformer. A good reference guide for its implementation is “The Annotated Transformer.”

The developers denote the number of layers (i.e., Transformer blocks) as L, the hidden size as H, and the number of self-attention heads as A. They primarily report results on two model sizes:

·       BERT-BASE (L=12, H=768, A=12, Total Parameters=110M) and

·       BERT-LARGE (L=24, H=1024, A=16, Total Parameters=340M)


For the purposes of this challenge, we will be using BERT-BASE.

1.1.2 About SQuAD 1.1

The Stanford Question Answering Dataset (SQuAD) is a popular question answering benchmark dataset. BERT (at the time of the release) obtains state-of-the-art results on SQuAD with almost no task-specific network architecture modifications or data augmentation. However, it does require semi-complex data pre-processing and post-processing to deal with (a) the variable-length nature of SQuAD context paragraphs, and (b) the character-level answer annotations which are used for SQuAD training. This processing is implemented and documented in run_squad.py.

1.2  Running SQuAD 1.1 fine tuning and inference

1.2.1  Using Docker and NVIDIA Docker Image

docker pull nvcr.io/nvidia/tensorflow:20.02-tf1-py3

docker images
REPOSITORY                  TAG             IMAGE ID       CREATED       SIZE
nvcr.io/nvidia/tensorflow   20.02-tf1-py3   0c7b70421b78   7 weeks ago   9.49GB

Example of how to run the container:

Usage:  docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

docker run -it --net=host -v bigdata:/bigdata 0c7b70421b78

1.2.2   Download the benchmark codes

Note: if you are using the docker container above, you already have the code and examples in /workspace/nvidia-examples/bert/ and can skip this step.

The NVIDIA BERT code is a publicly available implementation of BERT. It supports multi-GPU training with Horovod: the NVIDIA BERT fine-tuning code uses Horovod to implement efficient multi-GPU training with NCCL.

[~]# git clone https://github.com/NVIDIA/DeepLearningExamples.git

You may use other implementations and optimize and tune them, but you must use the BERT-Base, Uncased pre-trained model for the purposes of this challenge.

Some other examples include:

1.2.3  Download BERT-BASE model file

The BERT-BASE, Uncased model file contains 12 layers, a hidden size of 768, 12 attention heads, and 110M parameters. Its download link can be found at https://github.com/google-research/bert

We will create the directories and download the model file to:

/workspace/nvidia-examples/bert/data/download/google_pretrained_weights
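A minimal sketch of these steps, assuming the archive is still hosted at the link published in the google-research/bert README (verify the URL before use):

mkdir -p /workspace/nvidia-examples/bert/data/download/google_pretrained_weights
cd /workspace/nvidia-examples/bert/data/download/google_pretrained_weights
# Download and unpack the BERT-Base, Uncased weights (vocab.txt, bert_config.json, bert_model.ckpt.*)
wget https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip
unzip uncased_L-12_H-768_A-12.zip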

1.2.4   Download the SQuAD 1.1 dataset

To run on SQuAD, you will first need to download the dataset. The SQuAD website does not seem to link to the v1.1 datasets any longer, but the necessary files can be found here:

We will download these to: /workspace/nvidia-examples/bert/data/download/squad/v1.1
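As a hedged sketch, the v1.1 files are commonly mirrored on the SQuAD explorer site (this mirror is an assumption, not an official competition link); the evaluate-v1.1.py script must be obtained separately:

mkdir -p /workspace/nvidia-examples/bert/data/download/squad/v1.1
cd /workspace/nvidia-examples/bert/data/download/squad/v1.1
# Training and dev sets for SQuAD 1.1
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json
# Also place the official evaluate-v1.1.py script in this directory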

1.2.5   Start fine tuning

BERT representations can be fine-tuned with just one additional output layer to build a state-of-the-art question answering system. From within the container, you can use the following script to run fine-tuning for SQuAD.

Note: consider logging results with “2>&1 | tee $LOGFILE” for submission to the judges.

For SQuAD 1.1 FP16 training with XLA using a DGX-1 with (8) V100 32GB GPUs, run:

For SQuAD 1.1 FP16 training with XLA using (4) T4 16GB GPUs, run:
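The exact multi-GPU launch commands are not reproduced here. As a rough sketch only, using the flag names from the google-research/bert run_squad.py (the NVIDIA implementation in /workspace/nvidia-examples/bert adds Horovod, FP16/AMP and XLA options; check its README for the exact flags and the per-GPU batch size suited to your hardware):

export BERT_BASE_DIR=/workspace/nvidia-examples/bert/data/download/google_pretrained_weights/uncased_L-12_H-768_A-12
export SQUAD_DIR=/workspace/nvidia-examples/bert/data/download/squad/v1.1

python run_squad.py \
  --vocab_file=$BERT_BASE_DIR/vocab.txt \
  --bert_config_file=$BERT_BASE_DIR/bert_config.json \
  --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
  --do_train=True \
  --train_file=$SQUAD_DIR/train-v1.1.json \
  --do_predict=True \
  --predict_file=$SQUAD_DIR/dev-v1.1.json \
  --train_batch_size=12 \
  --learning_rate=3e-5 \
  --num_train_epochs=2.0 \
  --max_seq_length=384 \
  --doc_stride=128 \
  --output_dir=/results/squad_base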

1.2.6  Verify results

Note: part of your final score includes these results:

{"exact_match": 78.0321665089877, "f1": 86.34229152935384}

1.2.7  (Optional) Alternative method with Lambda Labs

Look for similar output.

To check score:

Note: part of your final score includes these results:

{"exact_match": 78.1929990539262, "f1": 86.51319484763773}

1.2.8  Example predict Q&A on real data

Example predict Q&A on real data is available here: github.com/google-research/bert

Note: This is the method that the judges will use to score unseen data.

1.2.9  Create a sample input file

Create a simple input file and save it as test_input.json in JSON format (note the "id" field, which you will reference later).

The vi editor should handle the JSON formatting automatically, or you can switch to paste mode (:set paste -> [paste text] -> :set nopaste):
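A minimal sketch of such a file, written here with a shell here-document (the context, question text, and "id" value are purely illustrative):

cat > test_input.json <<'EOF'
{
  "version": "1.1",
  "data": [
    {
      "title": "Sample",
      "paragraphs": [
        {
          "context": "BERT was created and published in 2018 by Jacob Devlin and his colleagues from Google.",
          "qas": [
            { "question": "Who published BERT?", "id": "sample-001" }
          ]
        }
      ]
    }
  ]
}
EOF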



1.2.10  Run run_squad.py

Run run_squad.py with --do_predict=True, using the fine-tuned model checkpoint:
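A hedged sketch of the prediction run, again using the google-research/bert flag names (the checkpoint step number and output directory are hypothetical placeholders; point --init_checkpoint at your own fine-tuned checkpoint):

python run_squad.py \
  --vocab_file=$BERT_BASE_DIR/vocab.txt \
  --bert_config_file=$BERT_BASE_DIR/bert_config.json \
  --init_checkpoint=/results/squad_base/model.ckpt-<step> \
  --do_train=False \
  --do_predict=True \
  --predict_file=test_input.json \
  --max_seq_length=384 \
  --doc_stride=128 \
  --output_dir=/results/predict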


Note: If you are using the alternative method from Lambda Labs, you will need to use that checkpoint:

You should see output similar to the example below.

1.2.11  Check correctness in the file: predictions.json

1.2.12  Check accuracy in the file: nbest_predictions.json

Note: Part of your final score is based on inference on unseen data, which will be provided by the judges on the day of the challenge.

Scores will be derived from the nbest_predictions.json output for each question on the context.
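As a quick, informal way to inspect the probabilities (assuming the file layout produced by the google-research/bert run_squad.py, where each question id maps to a list of candidate answers sorted by probability):

python - <<'EOF'
import json

# Load the n-best candidates written by run_squad.py
with open("nbest_predictions.json") as f:
    nbest = json.load(f)

# Print the top answer and its probability for each question id
for qid, candidates in nbest.items():
    best = candidates[0]
    print(qid, repr(best["text"]), best["probability"])
EOF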

1.3  Challenge Limitations

  • Must stick to pre-defined model (BERT-Base, Uncased)

  • Teams can locally cache (on SSD) starting model weights and dataset

  • The HuggingFace implementation (TensorFlow/PyTorch) is the official standard. Use of other implementations, or modifications to the official one, is subject to approval.

  • Teams are allowed to explore different optimizers (SGD/Adam etc.) or learning rate schedules, or any other techniques that do not modify model architecture.

  • Entire model must be fine-tuned (cannot freeze layers)

  • You must provide all scripts and methodology used to achieve results

1.4  Teams must produce

  • Training scripts with their full training routine and command lines and output

  • Evaluation-only script for verification of results

  • Final model ckpt and inference files

  • Team’s training scripts and methodology, command line and logs of runs

  • run_squad.py predictions.json and nbest_predictions.json

1.5  Final Scoring

The judges will score with the standard evaluate-v1.1.py from SQuAD 1.1,

 i.e. {"exact_match": 81.01229895931883, "f1": 88.61239393038589}

Final scores will come from unseen data containing multiple questions; predictions will be generated from a file using the standard run_squad.py.

 

1.6  UPDATES (June 8, 2020)

In past discussions we had questions about training BERT from scratch; this is beyond the scope of this competition and is not allowed. You will need to use the BERT-BASE model file as outlined in section 1.2.3 of these guidelines.

We will allow changing/modifying the output layer and adding additional layers.
We will allow ensemble techniques.

We must disallow integration of dev-set data into the training dataset; the SQuAD 1.1 datasets must remain unchanged and un-augmented.

We must disallow integrating additional external data into the training dataset for this competition, because there is not enough time to verify that dev-set data is not inadvertently part of such an acquired dataset.

We allow any hyper-parameters, e.g. learning rate, optimizer, dropout, etc.
We will also allow setting the random seed; this will reduce the variance between training runs.
The F1 score will be used as the score for team ranking.

Teams should submit their best 5 runs. Please upload your runs in separate folders containing ckpt files, logs, etc.; the top 3 of the 5 F1 scores will be averaged for your final score.

We will use F1 as the quality metric for scoring and ranking. We will not round the score computed by evaluate-v1.1.py.

The judges will score with the standard evaluate-v1.1.py from SQuAD 1.1 as outlined in section 1.5 of the SQuAD 1.1 with TensorFlow BERT-BASE Guidelines.

We will use the probability score on unseen inference data (in the same format as test_input.json), to be provided no later than June 10th, as a secondary ranking in the event of a tie in the average F1 score of your top training runs.