This repository was archived by the owner on May 14, 2020. It is now read-only.

Classification time proportional to what? #20

@letronje

I have observed that as we train graphify more and more, the size of the Neo4j database on disk keeps growing, and beyond a certain point each classification request takes several minutes, which makes graphify almost unusable.

Is there a way to train graphify for higher accuracy while keeping classification time within usable limits (say 30 seconds, or under a minute)?

To understand the slowdown, could you tell me which of the following parameters affect the classification time for a given text, and how? (A small timing sketch for isolating these factors follows the list.)

  • The number of labels/classes already known to graphify from previous training requests
  • The total volume of text given to graphify for training
  • The amount of text submitted to graphify in a single classification request
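
One way to isolate which of these dominates is to time the classification endpoint while varying a single factor at a time. The following is a minimal Python sketch under stated assumptions: the `http://localhost:7474/service/graphify/classify` URL, port, and `{"text": ...}` payload shape are guesses about a typical graphify deployment and may need adjusting to match yours.

```python
# Minimal latency probe for a graphify classification endpoint.
# Assumptions (adjust as needed): Neo4j on localhost:7474, the extension
# mounted at /service/graphify, and a JSON body of the form {"text": ...}.
import json
import time
import urllib.request

BASE = "http://localhost:7474/service/graphify"  # hypothetical mount point

def classify(text):
    req = urllib.request.Request(
        BASE + "/classify",  # hypothetical endpoint path
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Vary input size while holding the trained database fixed; re-run the same
# loop after adding more labels or more training text to compare the curves.
sample = "the quick brown fox jumps over the lazy dog "
for n in (1, 10, 100):
    start = time.monotonic()
    classify(sample * n)
    elapsed = time.monotonic() - start
    print(f"input ~{n * len(sample)} chars: {elapsed:.2f}s")
```

If latency stays flat as the input grows but climbs as labels or training volume grow, that would suggest the size of the trained database, not the request size, is the bottleneck.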
