4446bcd: In this work, we combine a composition model with bi-directional Transformers and make them jointly pre-trainable.
0905f81: Official code for the paper "A Multi-Grained Self-Interpretable Symbolic-Neural Model For Single/Multi-Labeled Text Classification".
e25f422: The version described in the paper, with the corresponding model pretrained on wiki-103.
6edf2ac: Code for the paper "R2D2: Recursive Transformer based on Differentiable Tree for Interpretable Hierarchical Language Modeling".
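As a rough intuition for the "differentiable tree" idea behind R2D2, here is a minimal PyTorch sketch of a CKY-style chart encoder in which every span representation is a softmax-weighted mixture over its possible binary splits, so the choice of tree structure stays differentiable end-to-end. This is an illustration under simplifying assumptions, not the paper's implementation: R2D2 itself uses a Transformer-based composition function and prunes the chart for efficiency, and the names here (`SoftTreeComposer`, `compose`, `score`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftTreeComposer(nn.Module):
    """Toy CKY-style chart encoder: each span's vector is a
    softmax-weighted mixture over its binary splits. Illustrative
    only; R2D2 uses Transformer-based composition and chart pruning."""

    def __init__(self, dim: int):
        super().__init__()
        # composition function: (left, right) span pair -> parent span
        self.compose = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )
        # scores each candidate split of a span
        self.score = nn.Linear(dim, 1)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (n, dim) embeddings for one sentence
        n, _ = tokens.shape
        chart = {(i, i): tokens[i] for i in range(n)}  # width-1 spans
        for length in range(2, n + 1):                 # span width
            for i in range(0, n - length + 1):
                j = i + length - 1
                # one candidate parent per split point k
                cands = torch.stack([
                    self.compose(torch.cat([chart[(i, k)],
                                            chart[(k + 1, j)]], dim=-1))
                    for k in range(i, j)
                ])                                     # (length-1, dim)
                w = F.softmax(self.score(cands).squeeze(-1), dim=0)
                chart[(i, j)] = (w.unsqueeze(-1) * cands).sum(0)
        return chart[(0, n - 1)]                       # root = sentence vector

# usage: encode 5 random "tokens" into one root vector
enc = SoftTreeComposer(dim=16)
root = enc(torch.randn(5, 16))
print(root.shape)  # torch.Size([16])
```

Composing with a small MLP instead of a Transformer keeps the example short; the chart-plus-soft-split structure is the part that carries over to the recursive, interpretable encoding described above.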