Hello author, I have two questions. First, I haven't worked with BERT taking three sentences (or entities) as input before. Your code contains the following comment for sequence triples:

# tokens:   [CLS] Steve Jobs [SEP] founded [SEP] Apple Inc . [SEP]
# type_ids:   0    0     0     0      1       1     0    0   0   0

Is this a built-in capability of the BERT model? Second, one of BERT's pre-training objectives is next sentence prediction, i.e., deciding whether sentence two follows sentence one. A triple clearly does not fit that logic, so isn't fine-tuning BERT this way a bit of a stretch? Also, entity names tend to be short, and masking such short entity tokens does not seem ideal. Looking forward to your reply, thank you!
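For reference, here is a minimal sketch of how the token/type-id layout in that comment could be produced with the HuggingFace `transformers` tokenizer. This is only an illustration of the segment-id pattern being asked about, not the repository's actual code, and the helper name `encode_triple` is made up for this example:

```python
# Illustrative sketch: pack a (head, relation, tail) triple into one BERT input
# with alternating segment ids, as in the comment above.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def encode_triple(head, relation, tail):
    head_tokens = tokenizer.tokenize(head)
    rel_tokens = tokenizer.tokenize(relation)
    tail_tokens = tokenizer.tokenize(tail)

    # Layout: [CLS] head [SEP] relation [SEP] tail [SEP]
    tokens = (["[CLS]"] + head_tokens + ["[SEP]"]
              + rel_tokens + ["[SEP]"]
              + tail_tokens + ["[SEP]"])

    # Segment (type) ids: 0 for the head span, 1 for the relation span,
    # 0 again for the tail span -- only the two segment embeddings that
    # BERT already has are reused; no third segment embedding is added.
    type_ids = ([0] * (len(head_tokens) + 2)
                + [1] * (len(rel_tokens) + 1)
                + [0] * (len(tail_tokens) + 1))

    input_ids = tokenizer.convert_tokens_to_ids(tokens)
    return tokens, input_ids, type_ids

tokens, input_ids, type_ids = encode_triple("Steve Jobs", "founded", "Apple Inc.")
print(tokens)    # ['[CLS]', 'steve', 'jobs', '[SEP]', 'founded', '[SEP]', 'apple', 'inc', '.', '[SEP]']
print(type_ids)  # [0, 0, 0, 0, 1, 1, 0, 0, 0, 0]
```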