{"id":4738,"date":"2024-04-16T06:56:00","date_gmt":"2024-04-15T21:56:00","guid":{"rendered":"https:\/\/devneko.jp\/wordpress\/?p=4738"},"modified":"2024-04-16T06:56:00","modified_gmt":"2024-04-15T21:56:00","slug":"llm2vec","status":"publish","type":"post","link":"https:\/\/devneko.jp\/wordpress\/?p=4738","title":{"rendered":"LLM2Vec"},"content":{"rendered":"\n<ul class=\"wp-block-list\">\n<li><strong>LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders&nbsp;<\/strong>[34.4]<br>\u5927\u898f\u6a21\u30c7\u30b3\u30fc\u30c0\u306e\u307f\u306e\u8a00\u8a9e\u30e2\u30c7\u30eb(LLM)\u306f\u3001\u4eca\u65e5\u306eNLP\u30bf\u30b9\u30af\u3068\u30d9\u30f3\u30c1\u30de\u30fc\u30af\u306e\u307b\u3068\u3093\u3069\u3067\u6700\u5148\u7aef\u306e\u30e2\u30c7\u30eb\u3067\u3042\u308b\u3002 LLM2Vec\u306f\u3001\u4efb\u610f\u306e\u30c7\u30b3\u30fc\u30c0\u306e\u307f\u306eLLM\u3092\u5f37\u529b\u306a\u30c6\u30ad\u30b9\u30c8\u30a8\u30f3\u30b3\u30fc\u30c0\u306b\u5909\u63db\u3059\u308b\u3001\u5358\u7d14\u306a\u6559\u5e2b\u306a\u3057\u30a2\u30d7\u30ed\u30fc\u30c1\u3067\u3042\u308b\u3002<br><a href=\"http:\/\/arxiv.org\/abs\/2404.05961v1\">\u8ad6\u6587<\/a>&nbsp;&nbsp;<a href=\"https:\/\/fugumt.com\/fugumt\/paper_check\/2404.05961v1\">\u53c2\u8003\u8a33\uff08\u30e1\u30bf\u30c7\u30fc\u30bf\uff09<\/a>&nbsp; &nbsp;(Tue, 09 Apr 2024 02:51:05 GMT)<\/li>\n\n\n\n<li>LLM\u3092\u7528\u3044\u305f\u30a8\u30f3\u30d9\u30c7\u30a3\u30f3\u30b0\u3002\u4efb\u610f\u306eCausalLM\u304b\u3089\u57cb\u3081\u8fbc\u307f\u7528\u30e2\u30c7\u30eb\u69cb\u7bc9\u3059\u308b\u624b\u6cd5\u306e\u63d0\u6848\u3002\u512a\u308c\u305f\u7d50\u679c\u3002\u5358\u7d14\u3068\u3044\u3048\u3070\u5358\u7d14\u306a\u30a2\u30d7\u30ed\u30fc\u30c1\u3067\u306f\u3042\u308b\u304c\u3001\u306a\u305c\u3053\u308c\u304c\u52b9\u679c\u7684\u306a\u306e\u304b\u308f\u304b\u308b\u3088\u3046\u306a\u308f\u304b\u3089\u306a\u3044\u3088\u3046\u306a\u3002<\/li>\n\n\n\n<li>\u8ad6\u6587\u4e2d\u306e\u300cBased on these findings (we replicate these results for other inputs and other Mistral models in Appendix F) and the strong unsupervised results for Mistral-7B with bidirectional attention, we speculate that Mistral models are pre-trained with some form bidirectional attention, e g , prefix language modeling (Raffel et al , 2020) \u2013 at least for some parts of its training.\u300d\u304c\u975e\u5e38\u306b\u8208\u5473\u6df1\u3044\u3002<\/li>\n\n\n\n<li>\u30ea\u30dd\u30b8\u30c8\u30ea\u306f<a href=\"https:\/\/github.com\/McGill-NLP\/llm2vec\">McGill-NLP\/llm2vec: Code for &#8216;LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders&#8217; (github.com)<\/a><\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Is Cosine-Similarity of Embeddings Really About Similarity?\u00a0<\/strong>[46.8]<br>\u30b3\u30b5\u30a4\u30f3\u76f8\u4f3c\u6027(Cosine-similarity)\u306f\u30012\u3064\u306e\u30d9\u30af\u30c8\u30eb\u9593\u306e\u89d2\u5ea6\u306e\u30b3\u30b5\u30a4\u30f3\u3001\u3059\u306a\u308f\u3061\u305d\u308c\u3089\u306e\u6b63\u898f\u5316\u306e\u9593\u306e\u30c9\u30c3\u30c8\u7a4d\u3067\u3042\u308b\u3002 \u6b63\u898f\u5316\u7dda\u5f62\u30e2\u30c7\u30eb\u304b\u3089\u5c0e\u304b\u308c\u308b\u57cb\u3081\u8fbc\u307f\u306b\u3064\u3044\u3066\u691c\u8a0e\u3057\u3001\u305d\u3053\u3067\u306f\u9589\u5f62\u5f0f\u89e3\u304c\u89e3\u6790\u7684\u6d1e\u5bdf\u3092\u4fc3\u9032\u3059\u308b\u3002 
- **Is Cosine-Similarity of Embeddings Really About Similarity?** [46.8]
  Cosine similarity is the cosine of the angle between two vectors, or equivalently the dot product between their normalizations. We study embeddings derived from regularized linear models, where closed-form solutions facilitate analytical insights. We derive analytically how cosine similarity can yield arbitrary and therefore meaningless "similarities."
  [Paper](http://arxiv.org/abs/2403.05440v1)   [Reference translation (metadata)](https://fugumt.com/fugumt/paper_check/2403.05440v1)   (Fri, 8 Mar 2024 16:48:20 GMT)
- There do seem to be cases where cosine similarity is not the best choice; it makes me wonder how this ubiquitous technique really holds up. A small demonstration of the paper's core observation follows.
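To make the paper's point concrete, the sketch below follows its matrix-factorization argument with a construction of my own: for a model defined by the product `A @ B.T`, rescaling the factors by any invertible diagonal matrix leaves every prediction unchanged, yet it changes the cosine similarities between the rows of `A`, so those similarities can be made largely arbitrary. In the paper this freedom arises from certain regularization schemes; the diagonal values here are chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))  # e.g. user embeddings (illustrative)
B = rng.normal(size=(5, 3))  # e.g. item embeddings

def cosine_sim(M: np.ndarray) -> np.ndarray:
    n = M / np.linalg.norm(M, axis=1, keepdims=True)
    return n @ n.T  # pairwise cosine similarities between rows

# Any invertible diagonal rescaling D leaves the model A @ B.T unchanged,
# because A @ D @ inv(D) @ B.T == A @ B.T ...
D = np.diag([10.0, 1.0, 0.1])
A2, B2 = A @ D, B @ np.linalg.inv(D)

print(np.allclose(A @ B.T, A2 @ B2.T))            # True: identical predictions
# ... but the cosine similarities between embedding rows change freely:
print(cosine_sim(A)[0, 1], cosine_sim(A2)[0, 1])  # different "similarities"
```

Running this prints `True` followed by two clearly different similarity values, which is the arbitrariness the paper derives in closed form.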