
# find-out

Find Out is the sister project to the open-source project Opt Out. It exists to support the study of different machine learning models and how well they classify text containing sexual harassment. The project organization is shown below.

## Project Organization

```
├── LICENSE
├── Makefile           <- (NOT READY) Makefile with commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third-party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   ├── benchmark      <- The gold-standard dataset.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- (NOT READY YET) A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- Makes the project pip-installable (`pip install -e .`) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   ├── collect
│   │   │   └── get_model_datasetname.py
│   │   └── preprocess
│   │       └── preprocess_model_datasetname.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── features_model_datasetname.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model_datasetname.py
│   │   └── train_model_datasetname.py
│   │
│   └── visualization  <- Scripts to create exploratory and results-oriented visualizations
│       └── visualize_model_datasetname.py
│
└── .pylintrc          <- pylint configuration file
```

Project based on the cookiecutter data science project template. #cookiecutterdatascience

Multiple studies will live in one repository (to begin with), so standardized naming of files and folders is key to making this style of collaboration work.

The naming convention is `action_model_datasetname.py`.

For example, a script that trains a neural net on a dataset called `datazet` would be named `train_nn_datazet.py`.
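Applied across the layout above, a neural-net study on the hypothetical `datazet` dataset would yield file names like these (illustrative only):

```
src/data/collect/get_nn_datazet.py
src/data/preprocess/preprocess_nn_datazet.py
src/features/features_nn_datazet.py
src/models/train_nn_datazet.py
src/models/predict_nn_datazet.py
src/visualization/visualize_nn_datazet.py
```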

If you're stuck, follow the example already in the repository. To run the tests for that example:

```
cd find-out
python -m pytest tests/test_nn_dataturks.py
```

NB: this is not a permanent solution, but it enables effective initial collaboration. If you have any thoughts or ideas on how to improve it, email [email protected]

## Project Datasets

The text must sit under the column header `text` and the labels under the column header `label`. Misogynistic or harassing content is always labelled `1`; non-harassing content is labelled `0`.

- `hatespeech` - obtained from Zeerak Waseem.
- `aws_annotated` - our annotations + `hatespeech`.
- `stanford_hatespeech` - Stanford (`aws_annotated` + Snorkel labels) + `hatespeech`.
- `gold` - `stanford_hatespeech` + AMI.
- `metoo` - tweet IDs from https://github.com/datacamp/datacamp-metoo-analysis.
- `rapeglish` - scraped from the random rape threat generator by Emma Jane.
- `dataturks` - obtained from DataTurks crowdsourced labelling.
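As a quick sanity check, a processed dataset can be validated against this schema. A minimal sketch, assuming pandas is installed; the file path is hypothetical:

```python
import pandas as pd

# Hypothetical path -- substitute any dataset under data/processed/.
df = pd.read_csv("data/processed/example_dataset.csv")

# Each dataset must expose a `text` column and a binary `label` column,
# where 1 = misogynistic/harassing and 0 = not.
assert {"text", "label"}.issubset(df.columns)
assert set(df["label"].unique()) <= {0, 1}
```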

## Installation

### Conda

Create a new Conda environment

```
conda create -n find-out python=3.7
```

and activate it with

```
conda activate find-out
```

Move to the project root directory (e.g. `cd find-out/`) and run the following command:

```
pip install -r requirements.txt
```

### spaCy Model

```
python -m spacy download en_core_web_md
```

### Pre-commit Hooks

```
pre-commit install
```

## Tests

Run the tests from the project root directory:

```
python -m pytest
```
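Test modules mirror the `model_datasetname` part of the naming convention (e.g. `tests/test_nn_dataturks.py` above). A minimal, hypothetical sketch of what such a module might look like; the file name and contents are illustrative only:

```python
# tests/test_nn_datazet.py (hypothetical) -- real tests import the project's
# own preprocessing and model code from src/ instead of hard-coding values.
def test_labels_are_binary():
    labels = [1, 0, 1]                # stand-in for preprocessed labels
    assert set(labels) <= {0, 1}      # 1 = harassing, 0 = not harassing
```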