# covid19_ISMIR

(Experimental) Demo Google Cloud application flow for COVID-19 data extraction from PDFs published by the Italian Society of Medical and Interventional Radiology (ISMIR).
This repository contains all the code required to extract relevant information from PDF documents published by ISMIR, store the raw data in a relational database, and store the extracted entities in a NoSQL database.

In particular, you will use the Google Cloud Vision API and Translation API before storing the information in BigQuery. Separately, you will use domain-specific NER models (from scispaCy) to extract medical entities and store them in a NoSQL database (namely Datastore) on Google Cloud Platform.
Looking for more context behind this dataset? Check out this article.
Google Cloud Architecture of the pipeline:
Quick sneak peek at the Entity dataset on Datastore:
Requirements:
Enable the required APIs:

```shell
gcloud services enable vision.googleapis.com
gcloud services enable translate.googleapis.com
gcloud services enable datastore.googleapis.com
gcloud services enable bigquery.googleapis.com
```
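Equivalently, `gcloud services enable` accepts several services in one invocation, so the four APIs above can be enabled in a single call:

```shell
# Enable all four required APIs in one command
gcloud services enable \
  vision.googleapis.com \
  translate.googleapis.com \
  datastore.googleapis.com \
  bigquery.googleapis.com
```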
Install package requirements:
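The exact install command is not shown in this README; assuming the `requirements.txt` at the repo root, the usual pip invocation is:

```shell
# Install the Python dependencies listed by the repo
pip3 install -r requirements.txt
```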
Note:
You will also need to download a NER model for the second part of this pipeline. See scispaCy's full selection of available models [here](https://allenai.github.io/scispacy/). If you follow this installation guide, the steps will automatically download and install a model for you.
## Extracting data
- **Step 0:** Navigate to the cloned repo on your local machine
`cd ~/covid19_ISMIR`
- **Step 1:** Modify the values of each variable in the `env_variables.sh` file, then run:
> Assumption: You have already created/downloaded the JSON key for your Google Cloud Service Account. Useful [link](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#iam-service-account-keys-create-python)

`./env_variables.sh`
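A typical `env_variables.sh` exports the project, bucket, and credential settings the scripts read. The variable names below are illustrative assumptions, not necessarily the repo's actual ones; check the file itself for the real list:

```shell
# Hypothetical example of env_variables.sh (actual variable names may differ)
export PROJECT_ID="my-gcp-project"
export BUCKET_NAME="my-covid19-bucket"
# Path to the service-account JSON key created in the step above
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/service-account.json"
```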
- **Step 2:** Download the required files to your bucket and load the required model on your local machine (this step will take ~10 min)
> Optional: If you have already downloaded the scispaCy model, modify the file `./content/download_content.sh` so that step is not repeated.

```shell
sh ~/content/download_content.sh
pip install -U ./scispacy_models/en_core_sci_lg-0.2.4.tar.gz
```
- **Step 3:** Extract the text from the PDF documents:

```shell
python3 ./scripts/extraction.py
```
- **Step 4:** Following the extraction of text, translate it from Italian to English and curate it:

```shell
python3 ./scripts/preprocessing.py
```
- **Step 5:** Following the pre-processing, store the data in a more searchable format: a data warehouse (BigQuery) for the text, and a NoSQL database (Datastore) for the UMLS medical entities:

```shell
python3 ./scripts/storing.py
```
- **Step 6:** Last but not least, query your databases using this script:

```shell
python3 ./scripts/retrieving.py
```
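The four script stages above can be chained in a small driver when re-running the whole flow end to end. This is a sketch only: the `run_pipeline` helper is hypothetical and not part of the repo, which runs each script by hand.

```python
import subprocess

# The pipeline stages, in the order this README runs them.
STEPS = [
    "./scripts/extraction.py",     # OCR the PDFs with the Vision API
    "./scripts/preprocessing.py",  # translate Italian -> English and curate
    "./scripts/storing.py",        # load text into BigQuery, entities into Datastore
    "./scripts/retrieving.py",     # query the populated databases
]

def run_pipeline(python="python3"):
    """Run each stage in order, stopping at the first failure."""
    for script in STEPS:
        subprocess.run([python, script], check=True)
```

Call `run_pipeline()` from the repo root after sourcing `env_variables.sh`, so each script sees the same environment.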
## To get started

- Option 1
- Option 2