# OpenAI ChatGPT Official API, "Embeddings": A Detailed Practical Guide and Tutorial to Help You Master the New Technology from Zero (Part 5, with Source Code)
sockstack · 2023-11-06 23:54:48
### Contents

- Preface
- Overview
  - What are embeddings?
  - How to get embeddings
    - Python example
    - cURL example
  - Embedding models
    - Second-generation models
    - First-generation models (not recommended)
- Use cases
  - Obtaining the embeddings
  - Data visualization in 2D
  - Embedding as a text feature encoder for ML algorithms
  - Regression using the embedding features
  - Classification using the embedding features
  - Zero-shot classification
  - Obtaining user and product embeddings for cold-start recommendation
  - Clustering
  - Text search using embeddings
  - Code search using embeddings
  - Recommendations using embeddings
- Limitations & risks
  - Social bias
  - Blindness to recent events
- Frequently asked questions
  - How can I tell how many tokens a string has before I embed it?
  - How can I retrieve K nearest embedding vectors quickly?
  - Which distance function should I use?
  - Can I share my embeddings online?
- Other resources

![](https://img-blog.csdnimg.cn/6e398cc11a5f4542a6279531891ab116.png)

# Preface

ChatGPT embeddings convert text into fixed-length continuous vectors, making it possible to run classification, topic clustering, search, recommendation, and similar operations over text data. Text that used to be hard to process can now be handled with ease.

Using ChatGPT embeddings can greatly improve the user experience. They help chatbots process textual information more accurately, and they enable more effective text search and recommendation as well as smoother interactive conversations, better meeting users' needs.

# Overview

## What are embeddings?
OpenAI's text embeddings measure the relatedness of text strings. Embeddings are commonly used for:

- **Search** (where results are ranked by relevance to a query string)
- **Clustering** (where text strings are grouped by similarity)
- **Recommendations** (where items with related text strings are recommended)
- **Anomaly detection** (where outliers with little relatedness are identified)
- **Diversity measurement** (where similarity distributions are analyzed)
- **Classification** (where text strings are classified by their most similar label)

An embedding is a vector (list) of floating point numbers. The distance between two vectors measures their relatedness. Small distances suggest high relatedness and large distances suggest low relatedness.

Visit our pricing page to learn about Embeddings pricing. Requests are billed based on the number of tokens in the input sent.

To see embeddings in action, check out our code samples (covered in the Use cases section below):

- Classification
- Topic clustering
- Search
- Recommendations

## How to get embeddings
To get an embedding, send your text string to the embeddings API endpoint along with your choice of embedding model ID (e.g., `text-embedding-ada-002`). The response will contain an embedding, which you can extract, save, and use.

Example requests:

### Python example

```python
import openai

response = openai.Embedding.create(
    input="Your text string goes here",
    model="text-embedding-ada-002"
)
embeddings = response['data'][0]['embedding']
```

### cURL example

```bash
curl https://api.openai.com/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{"input": "Your text string goes here", "model": "text-embedding-ada-002"}'
```

Example response:

```json
{
  "data": [
    {
      "embedding": [
        -0.006929283495992422,
        -0.005336422007530928,
        ...
        -4.547132266452536e-05,
        -0.024047505110502243
      ],
      "index": 0,
      "object": "embedding"
    }
  ],
  "model": "text-embedding-ada-002",
  "object": "list",
  "usage": {
    "prompt_tokens": 5,
    "total_tokens": 5
  }
}
```
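The embedding itself is the `embedding` array inside `data`. As a quick sanity check, a minimal sketch (reusing the `response` object from the Python example above) can pull the vector out and confirm its dimensionality, which is 1536 for `text-embedding-ada-002` per the model table below:

```python
# Extract the vector from the response of the Python example above.
vector = response['data'][0]['embedding']

# text-embedding-ada-002 returns 1536-dimensional vectors (see the model table below).
assert len(vector) == 1536

# Billing is based on input tokens, reported back in the usage block.
print("tokens billed:", response['usage']['total_tokens'])
```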
See more Python code examples in the OpenAI Cookbook.

When using OpenAI embeddings, please keep in mind their limitations and risks (see the Limitations & risks section below).

## Embedding models

OpenAI offers one second-generation embedding model (denoted by `-002` in the model ID) and 16 first-generation models (denoted by `-001` in the model ID).

We recommend using `text-embedding-ada-002` for nearly all use cases. It's better, cheaper, and simpler to use. Read the blog post announcement.

| MODEL GENERATION | TOKENIZER | MAX INPUT TOKENS | KNOWLEDGE CUTOFF |
| --- | --- | --- | --- |
| V2 | cl100k_base | 8191 | Sep 2021 |
| V1 | GPT-2/GPT-3 | 2046 | Aug 2020 |

Usage is priced per input token, at a rate of $0.0004 per 1,000 tokens, or about ~3,000 pages per US dollar (assuming ~800 tokens per page):

| MODEL | ROUGH PAGES PER DOLLAR | EXAMPLE PERFORMANCE ON BEIR SEARCH EVAL |
| --- | --- | --- |
| text-embedding-ada-002 | 3000 | 53.9 |
| `*-davinci-*-001` | 6 | 52.8 |
| `*-curie-*-001` | 60 | 50.9 |
| `*-babbage-*-001` | 240 | 50.4 |
| `*-ada-*-001` | 300 | 49.0 |

### Second-generation models

| MODEL NAME | TOKENIZER | MAX INPUT TOKENS | OUTPUT DIMENSIONS |
| --- | --- | --- | --- |
| text-embedding-ada-002 | cl100k_base | 8191 | 1536 |

### First-generation models (not recommended)

All first-generation models (those ending in -001) use the GPT-3 tokenizer and have a max input of 2046 tokens.

Since these models are not officially recommended, their examples are not covered in detail here; refer to the official documentation if you need them.
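The pricing above lends itself to a quick back-of-envelope cost estimate. A small sketch, assuming the $0.0004 per 1,000 tokens rate quoted above:

```python
PRICE_PER_1K_TOKENS = 0.0004  # text-embedding-ada-002 rate quoted above

def embedding_cost_usd(num_tokens: int) -> float:
    """Estimated cost of embedding num_tokens input tokens."""
    return num_tokens / 1000 * PRICE_PER_1K_TOKENS

# ~800 tokens per page times 3,000 pages comes out to roughly one dollar,
# matching the "rough pages per dollar" column.
print(embedding_cost_usd(800 * 3000))  # ~0.96
```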
# Use cases

Here we show some representative use cases. We will use the Amazon fine-food reviews dataset for the following examples.

## Obtaining the embeddings

The dataset contains a total of 568,454 food reviews Amazon users left up to October 2012. We will use a subset of the 1,000 most recent reviews for illustration purposes. The reviews are in English and tend to be positive or negative. Each review has a ProductId, UserId, Score, a review title (Summary) and a review body (Text). For example:

| PRODUCT ID | USER ID | SCORE | SUMMARY | TEXT |
| --- | --- | --- | --- | --- |
| B001E4KFG0 | A3SGXH7AUHU8GW | 5 | Good Quality Dog Food | I have bought several of the Vitality canned… |
| B00813GRG4 | A1D87F6ZCVE5NK | 1 | Not as Advertised | Product arrived labeled as Jumbo Salted Peanut… |

We will combine the review summary and review text into a single combined text. The model will encode this combined text and output a single vector embedding.

Obtain_dataset.ipynb

```python
import openai
import pandas as pd

def get_embedding(text, model="text-embedding-ada-002"):
    text = text.replace("\n", " ")
    return openai.Embedding.create(input=[text], model=model)['data'][0]['embedding']

# df holds the 1,000 reviews; 'combined' is the review Summary joined with its Text.
df['ada_embedding'] = df.combined.apply(lambda x: get_embedding(x, model='text-embedding-ada-002'))
df.to_csv('output/embedded_1k_reviews.csv', index=False)
```
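Embedding the rows one request at a time works, but the endpoint also accepts a list of input strings, so a batched variant can cut the number of round trips. A sketch, assuming the same `df` and a modest batch size:

```python
batch_size = 100  # assumption: a modest batch, each string still within the model's token limit
embeddings = []
for start in range(0, len(df), batch_size):
    batch = [t.replace("\n", " ") for t in df.combined.iloc[start:start + batch_size]]
    response = openai.Embedding.create(input=batch, model="text-embedding-ada-002")
    # Each result carries an 'index' matching its position within the batch.
    embeddings.extend(item['embedding'] for item in sorted(response['data'], key=lambda d: d['index']))
df['ada_embedding'] = embeddings
```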
To load the data from a saved file, you can run the following:

```python
import numpy as np
import pandas as pd

df = pd.read_csv('output/embedded_1k_reviews.csv')
df['ada_embedding'] = df.ada_embedding.apply(eval).apply(np.array)
```

## Data visualization in 2D

Visualizing_embeddings_in_2D.ipynb

The size of the embeddings varies with the complexity of the underlying model. In order to visualize this high dimensional data we use the t-SNE algorithm to transform the data into two dimensions.

We color the individual reviews based on the star rating which the reviewer has given:

- 1-star: red
- 2-star: dark orange
- 3-star: gold
- 4-star: turquoise
- 5-star: dark green

![Amazon ratings visualized in language using t-SNE](https://img-blog.csdnimg.cn/043b81c15d2c405fb9c0a4e40a407f7a.png)

The visualization seems to have produced roughly 3 clusters, one of which has mostly negative reviews.
```python
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.manifold import TSNE

df = pd.read_csv('output/embedded_1k_reviews.csv')
matrix = df.ada_embedding.apply(eval).to_list()

# Create a t-SNE model and transform the data
tsne = TSNE(n_components=2, perplexity=15, random_state=42, init='random', learning_rate=200)
vis_dims = tsne.fit_transform(matrix)

colors = ["red", "darkorange", "gold", "turquoise", "darkgreen"]
x = [x for x, y in vis_dims]
y = [y for x, y in vis_dims]
color_indices = df.Score.values - 1

colormap = matplotlib.colors.ListedColormap(colors)
plt.scatter(x, y, c=color_indices, cmap=colormap, alpha=0.3)
plt.title("Amazon ratings visualized in language using t-SNE")
```

## Embedding as a text feature encoder for ML algorithms

Regression_using_embeddings.ipynb

An embedding can be used as a general free-text feature encoder within a machine learning model. Incorporating embeddings will improve the performance of any machine learning model, if some of the relevant inputs are free text. An embedding can also be used as a categorical feature encoder within a ML model. This adds most value if the names of categorical variables are meaningful and numerous, such as job titles. Similarity embeddings generally perform better than search embeddings for this task.
We observed that generally the embedding representation is very rich and information dense. For example, reducing the dimensionality of the inputs using SVD or PCA, even by 10%, generally results in worse downstream performance on specific tasks.

This code splits the data into a training set and a testing set, which will be used by the following two use cases, namely regression and classification.

```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    list(df.ada_embedding.values), df.Score, test_size=0.2, random_state=42
)
```

## Regression using the embedding features

Embeddings present an elegant way of predicting a numerical value. In this example we predict the reviewer's star rating, based on the text of their review. Because the semantic information contained within embeddings is high, the prediction is decent even with very few reviews.

We assume the score is a continuous variable between 1 and 5, and allow the algorithm to predict any floating point value. The ML algorithm minimizes the distance of the predicted value to the true score, and achieves a mean absolute error of 0.39, which means that on average the prediction is off by less than half a star.

```python
from sklearn.ensemble import RandomForestRegressor

rfr = RandomForestRegressor(n_estimators=100)
rfr.fit(X_train, y_train)
preds = rfr.predict(X_test)
```
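To reproduce the quoted error, scikit-learn's metric can be applied to the held-out split; a minimal check:

```python
from sklearn.metrics import mean_absolute_error

# Should come out around 0.39 on this dataset, i.e. off by less than half a star on average.
print(mean_absolute_error(y_test, preds))
```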
## Classification using the embedding features

Classification_using_embeddings.ipynb

This time, instead of having the algorithm predict a value anywhere between 1 and 5, we will attempt to classify the exact number of stars for a review into 5 buckets, ranging from 1 to 5 stars.

After the training, the model learns to predict 1 and 5-star reviews much better than the more nuanced reviews (2-4 stars), likely due to more extreme sentiment expression.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, accuracy_score

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)
preds = clf.predict(X_test)
```
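The metrics imported above can break the result down per class; per the paragraph above, expect the 1- and 5-star rows to show the strongest precision and recall:

```python
# Overall accuracy plus per-class precision/recall for the five star buckets.
print("accuracy:", accuracy_score(y_test, preds))
print(classification_report(y_test, preds))
```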
## Zero-shot classification

Zero-shot_classification_with_embeddings.ipynb

We can use embeddings for zero-shot classification without any labeled training data. For each class, we embed the class name or a short description of the class. To classify some new text in a zero-shot manner, we compare its embedding to all class embeddings and predict the class with the highest similarity.

```python
from openai.embeddings_utils import cosine_similarity, get_embedding

model = 'text-embedding-ada-002'

df = df[df.Score != 3]
df['sentiment'] = df.Score.replace({1: 'negative', 2: 'negative', 4: 'positive', 5: 'positive'})

labels = ['negative', 'positive']
label_embeddings = [get_embedding(label, model=model) for label in labels]

def label_score(review_embedding, label_embeddings):
    return cosine_similarity(review_embedding, label_embeddings[1]) - cosine_similarity(review_embedding, label_embeddings[0])

# Embed the review text before scoring it against the label embeddings.
review_embedding = get_embedding('Sample Review', model=model)
prediction = 'positive' if label_score(review_embedding, label_embeddings) > 0 else 'negative'
```
## Obtaining user and product embeddings for cold-start recommendation

User_and_product_embeddings.ipynb

We can obtain a user embedding by averaging over all of their reviews. Similarly, we can obtain a product embedding by averaging over all the reviews about that product. In order to showcase the usefulness of this approach we use a subset of 50k reviews to cover more reviews per user and per product.

We evaluate the usefulness of these embeddings on a separate test set, where we plot similarity of the user and product embedding as a function of the rating. Interestingly, based on this approach, even before the user receives the product we can predict better than random whether they would like the product.

![Boxplot grouped by Score](https://img-blog.csdnimg.cn/48b50bebc8794ae89dc4e467db6d402b.png)

```python
import numpy as np

user_embeddings = df.groupby('UserId').ada_embedding.apply(np.mean)
prod_embeddings = df.groupby('ProductId').ada_embedding.apply(np.mean)
```
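The boxplot's evaluation can be approximated with a short sketch (illustrative only; the linked notebook computes these similarities on a held-out test set rather than on the same reviews used to build the averages):

```python
from openai.embeddings_utils import cosine_similarity

# For each review, compare the reviewer's average embedding with the product's.
df['user_prod_similarity'] = [
    cosine_similarity(user_embeddings[u], prod_embeddings[p])
    for u, p in zip(df.UserId, df.ProductId)
]

# Higher star ratings should show higher average user-product similarity.
print(df.groupby('Score').user_prod_similarity.mean())
```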
## Clustering

Clustering.ipynb

Clustering is one way of making sense of a large volume of textual data. Embeddings are useful for this task, as they provide semantically meaningful vector representations of each text. Thus, in an unsupervised way, clustering will uncover hidden groupings in our dataset.

In this example, we discover four distinct clusters: one focusing on dog food, one on negative reviews, and two on positive reviews.

![Clusters identified visualized in language 2d using t-SNE](https://img-blog.csdnimg.cn/44eea94cbc094237ae6cff3daa5f5c3a.png)

```python
import numpy as np
from sklearn.cluster import KMeans

matrix = np.vstack(df.ada_embedding.values)
n_clusters = 4

kmeans = KMeans(n_clusters=n_clusters, init='k-means++', random_state=42)
kmeans.fit(matrix)
df['Cluster'] = kmeans.labels_
```
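To see what each cluster actually captures, it helps to read a few reviews per cluster. A sketch, assuming the `combined` Summary+Text column built in "Obtaining the embeddings":

```python
# Preview three reviews from each cluster (assumes the 'combined' text column).
for cluster in range(n_clusters):
    print(f"Cluster {cluster}:")
    for text in df[df.Cluster == cluster].combined.head(3):
        print("  -", text[:80])
```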
## Text search using embeddings

Semantic_text_search_using_embeddings.ipynb

To retrieve the most relevant documents we use the cosine similarity between the embedding vectors of the query and each document, and return the highest scored documents.

```python
from openai.embeddings_utils import get_embedding, cosine_similarity

def search_reviews(df, product_description, n=3, pprint=True):
    embedding = get_embedding(product_description, model='text-embedding-ada-002')
    df['similarities'] = df.ada_embedding.apply(lambda x: cosine_similarity(x, embedding))
    res = df.sort_values('similarities', ascending=False).head(n)
    return res

res = search_reviews(df, 'delicious beans', n=3)
```
## Code search using embeddings

Code_search.ipynb

Code search works similarly to embedding-based text search. We provide a method to extract Python functions from all the Python files in a given repository. Each function is then indexed by the `text-embedding-ada-002` model.

To perform a code search, we embed the query in natural language using the same model. Then we calculate cosine similarity between the resulting query embedding and each of the function embeddings. The highest cosine similarity results are most relevant.

```python
from openai.embeddings_utils import get_embedding, cosine_similarity

df['code_embedding'] = df['code'].apply(lambda x: get_embedding(x, model='text-embedding-ada-002'))

def search_functions(df, code_query, n=3, pprint=True, n_lines=7):
    embedding = get_embedding(code_query, model='text-embedding-ada-002')
    df['similarities'] = df.code_embedding.apply(lambda x: cosine_similarity(x, embedding))
    res = df.sort_values('similarities', ascending=False).head(n)
    return res

res = search_functions(df, 'Completions API tests', n=3)
```
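The extraction step (collecting every Python function from a repository so it can be indexed) might look like the following minimal sketch built on the standard library's `ast` module; it is an illustration, not the linked notebook's exact implementation:

```python
import ast
from pathlib import Path

def extract_functions(repo_path):
    """Yield (function name, source code) for every function in the repo's .py files."""
    for py_file in Path(repo_path).rglob('*.py'):
        source = py_file.read_text()
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                yield node.name, ast.get_source_segment(source, node)
```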
## Recommendations using embeddings

Recommendation_using_embeddings.ipynb

Because shorter distances between embedding vectors represent greater similarity, embeddings can be useful for recommendation.

Below, we illustrate a basic recommender. It takes in a list of strings and one 'source' string, computes their embeddings, and then returns a ranking of the strings, ranked from most similar to least similar. As a concrete example, the linked notebook below applies a version of this function to the AG news dataset (sampled down to 2,000 news article descriptions) to return the top 5 most similar articles to any given source article.

```python
from typing import List

def recommendations_from_strings(
    strings: List[str],
    index_of_source_string: int,
    model="text-embedding-ada-002",
) -> List[int]:
    """Return nearest neighbors of a given string."""
    # get embeddings for all strings
    embeddings = [embedding_from_string(string, model=model) for string in strings]
    # get the embedding of the source string
    query_embedding = embeddings[index_of_source_string]
    # get distances between the source embedding and other embeddings (function from embeddings_utils.py)
    distances = distances_from_embeddings(query_embedding, embeddings, distance_metric="cosine")
    # get indices of nearest neighbors (function from embeddings_utils.py)
    indices_of_nearest_neighbors = indices_of_nearest_neighbors_from_distances(distances)
    return indices_of_nearest_neighbors
```
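Usage might look like the sketch below, mirroring the linked notebook's setup; `article_descriptions` is a hypothetical list holding the 2,000 sampled AG-news descriptions:

```python
# Rank all descriptions against the first one and show the five most similar.
indices = recommendations_from_strings(article_descriptions, index_of_source_string=0)
for i in indices[1:6]:  # index 0 is the source article itself
    print(article_descriptions[i])
```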
# Limitations & risks

Our embedding models may be unreliable or pose social risks in certain cases, and may cause harm in the absence of mitigations.

## Social bias

> Limitation: The models encode social biases, e.g. via stereotypes or negative sentiment towards certain groups.

We found evidence of bias in our models via running the SEAT (May et al, 2019) and the Winogender (Rudinger et al, 2018) benchmarks. Together, these benchmarks consist of 7 tests that measure whether models contain implicit biases when applied to gendered names, regional names, and some stereotypes.

For example, we found that our models more strongly associate (a) European American names with positive sentiment, when compared to African American names, and (b) negative stereotypes with black women.

These benchmarks are limited in several ways: (a) they may not generalize to your particular use case, and (b) they only test for a very small slice of possible social bias.

**These tests are preliminary, and we recommend running tests for your specific use cases.** These results should be taken as evidence of the existence of the phenomenon, not a definitive characterization of it for your use case. Please see our usage policies for more details and guidance.

Please contact our support team via chat if you have any questions; we are happy to advise on this.

## Blindness to recent events

> Limitation: Models lack knowledge of events that occurred after August 2020.

Our models are trained on datasets that contain some information about real world events up until 8/2020. If you rely on the models representing recent events, then they may not perform well.
# Frequently asked questions

## How can I tell how many tokens a string has before I embed it?

In Python, you can split a string into tokens with OpenAI's tokenizer, `tiktoken`.

Example code:

```python
import tiktoken

def num_tokens_from_string(string: str, encoding_name: str) -> int:
    """Returns the number of tokens in a text string."""
    encoding = tiktoken.get_encoding(encoding_name)
    num_tokens = len(encoding.encode(string))
    return num_tokens

num_tokens_from_string("tiktoken is great!", "cl100k_base")
```

For second-generation embedding models like `text-embedding-ada-002`, use the `cl100k_base` encoding.

More details and example code are in the OpenAI Cookbook guide on how to count tokens with tiktoken.

## How can I retrieve K nearest embedding vectors quickly?

For searching over many vectors quickly, we recommend using a vector database. You can find examples of working with vector databases and the OpenAI API in our Cookbook on GitHub.

Vector database options include:

- Pinecone, a fully managed vector database
- Weaviate, an open-source vector search engine
- Redis as a vector database
- Qdrant, a vector search engine
- Milvus, a vector database built for scalable similarity search
- Chroma, an open-source embeddings store
- Typesense, fast open source vector search

## Which distance function should I use?

We recommend cosine similarity. The choice of distance function typically doesn't matter much.

OpenAI embeddings are normalized to length 1, which means that:

- Cosine similarity can be computed slightly faster using just a dot product
- Cosine similarity and Euclidean distance will result in identical rankings
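Both properties make a brute-force nearest-neighbor fallback easy when the corpus fits in memory; a NumPy sketch (a vector database, as listed above, remains the right tool at scale):

```python
import numpy as np

def k_nearest(query_embedding, embedding_matrix, k=5):
    # For unit-length embeddings, a dot product equals cosine similarity,
    # and ranking by it matches ranking by Euclidean distance.
    similarities = embedding_matrix @ np.asarray(query_embedding)
    return np.argsort(-similarities)[:k]  # indices of the k most similar rows

# e.g. k_nearest(query_vec, np.vstack(df.ada_embedding.values), k=5)
```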
## Can I share my embeddings online?

Customers own their input and output from our models, including in the case of embeddings. You are responsible for ensuring that the content you input to our API does not violate any applicable law or our Terms of Use.

# Other resources

If you would like to keep exploring AI learning paths and knowledge systems, see my other post, 《重磅 | 完备的人工智能AI 学习——基础知识学习路线,所有资料免关注免套路直接网盘下载》. It draws on well-known open-source platforms on GitHub, AI technology platforms, and experts in related fields, including Datawhale, ApacheCN, AI有道, and Dr. Huang Haiguang, totaling nearly 100 GB of material. I hope it helps all of you.
Author: sockstack · License: CC BY 4.0 · Published 2023-11-06 · Updated 2024-12-22