A Quick Evaluation of Open-Source LLM Translation Ability

SYuan03

Using the free credits on the siliconflow platform, I wanted to see which of its models translates best.

First, a look at the list of models currently available to my account:

https://docs.siliconflow.cn/reference/retrieve-a-list-of-models-1
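The list itself comes from a plain authenticated GET request. A minimal sketch in Python, assuming SiliconFlow's OpenAI-compatible `/v1/models` endpoint and an API key exported as `SILICONFLOW_API_KEY` (both assumptions on my part; the original post only links the docs page above):

```python
import os

import requests

# Assumed: the API key is stored in this environment variable.
API_KEY = os.environ["SILICONFLOW_API_KEY"]

# Assumed endpoint: SiliconFlow mirrors the OpenAI "list models" route.
resp = requests.get(
    "https://api.siliconflow.cn/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()

# Print just the model IDs from a response like the one shown below.
for model in resp.json()["data"]:
    print(model["id"])
```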

{
  "object": "list",
  "data": [
    {
      "id": "stabilityai/stable-diffusion-xl-base-1.0",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "TencentARC/PhotoMaker",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "InstantX/InstantID",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "mistralai/Mixtral-8x7B-Instruct-v0.1",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "mistralai/Mistral-7B-Instruct-v0.2",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "stabilityai/stable-diffusion-2-1",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "stabilityai/sd-turbo",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "stabilityai/sdxl-turbo",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "ByteDance/SDXL-Lightning",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "deepseek-ai/deepseek-llm-67b-chat",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Qwen/Qwen1.5-14B-Chat",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "mixtralai/Mixtral-8x22B-Instruct-v0.1",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "meta-llama/Meta-Llama-3-70B-Instruct",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "meta-llama/Meta-Llama-3-8B-Instruct",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Qwen/Qwen1.5-7B-Chat",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Qwen/Qwen1.5-110B-Chat",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Qwen/Qwen1.5-32B-Chat",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "01-ai/Yi-1.5-6B-Chat",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "01-ai/Yi-1.5-9B-Chat-16K",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "01-ai/Yi-1.5-34B-Chat-16K",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "THUDM/chatglm3-6b",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "deepseek-ai/DeepSeek-V2-Chat",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "THUDM/glm-4-9b-chat",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Qwen/Qwen2-72B-Instruct",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Qwen/Qwen2-7B-Instruct",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Qwen/Qwen2-57B-A14B-Instruct",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "stabilityai/stable-diffusion-3-medium",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "deepseek-ai/DeepSeek-Coder-V2-Instruct",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Qwen/Qwen2-1.5B-Instruct",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "google/gemma-2-9b-it",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "google/gemma-2-27b-it",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "internlm/internlm2_5-7b-chat",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "BAAI/bge-large-en-v1.5",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "BAAI/bge-large-zh-v1.5",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Pro/Qwen/Qwen2-7B-Instruct",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Pro/Qwen/Qwen2-1.5B-Instruct",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Pro/Qwen/Qwen1.5-7B-Chat",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Pro/THUDM/glm-4-9b-chat",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Pro/THUDM/chatglm3-6b",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Pro/01-ai/Yi-1.5-9B-Chat-16K",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Pro/01-ai/Yi-1.5-6B-Chat",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Pro/google/gemma-2-9b-it",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Pro/internlm/internlm2_5-7b-chat",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Pro/meta-llama/Meta-Llama-3-8B-Instruct",
      "object": "model",
      "created": 0,
      "owned_by": ""
    },
    {
      "id": "Pro/mistralai/Mistral-7B-Instruct-v0.2",
      "object": "model",
      "created": 0,
      "owned_by": ""
    }
  ]
}

Billing rules

[Screenshot: the platform's billing rules]

Test passage

Early neural language models (NLMs) [13], [14], [15], [16] deal with data sparsity by mapping words to low-dimensional continuous vectors (embedding vectors) and predict the next word based on the aggregation of the embedding vectors of its proceeding words using neural networks. The embedding vectors learned by NLMs define a hidden space where the semantic similarity between vectors can be readily computed as their distance. This opens the door to computing semantic similarity of any two inputs regardless their forms (e.g., queries vs. documents in Web search [17], [18], sentences in different languages in machine translation [19], [20]) or modalities (e.g., image and text in image captioning [21], [22]). Early NLMs are task-specific models, in that they are trained on task-specific data and their learned hidden space is task-specific.  
Pre-trained language models (PLMs), unlike early NLMs, are task-agnostic. This generality also extends to the learned hidden embedding space. The training and inference of PLMs follows the pre-training and fine-tuning paradigm, where language models with recurrent neural networks [23] or transformers [24], [25], [26] are pre-trained on Web-scale unlabeled text corpora for general tasks such as word prediction, and then finetuned to specific tasks using small amounts of (labeled) task-specific data. Recent surveys on PLMs include [8], [27], [28].
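
The original post only shows screenshots of each model's output, but the harness behind them is just one chat-completions call per model. A minimal sketch, assuming SiliconFlow's OpenAI-compatible endpoint; the prompt wording, temperature, and loop are my assumptions, not the post's actual code:

```python
import os

import requests

API_KEY = os.environ["SILICONFLOW_API_KEY"]  # assumed env var

def translate(model_id: str, text: str) -> str:
    """Ask one model to translate `text` into Chinese and return its reply."""
    resp = requests.post(
        "https://api.siliconflow.cn/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": model_id,
            "messages": [
                {
                    "role": "system",
                    "content": "You are a professional translator. "
                               "Translate the user's text into Chinese.",
                },
                {"role": "user", "content": text},
            ],
            "temperature": 0.3,  # low temperature for more literal output
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Paste the passage above in here verbatim (truncated for brevity).
SOURCE_TEXT = "Early neural language models (NLMs) [13], [14], [15], [16] ..."

for model_id in [
    "Qwen/Qwen2-57B-A14B-Instruct",
    "meta-llama/Meta-Llama-3-70B-Instruct",
    "meta-llama/Meta-Llama-3-8B-Instruct",
    "Qwen/Qwen2-72B-Instruct",
]:
    print(f"=== {model_id} ===")
    print(translate(model_id, SOURCE_TEXT))
```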

Qwen/Qwen2-57B-A14B-Instruct

Its main selling point is speed; the quality is passable.

[Screenshot: translation by Qwen/Qwen2-57B-A14B-Instruct]

meta-llama/Meta-Llama-3-70B-Instruct

[Screenshot: translation by meta-llama/Meta-Llama-3-70B-Instruct]

The translation is more precise: for example, it does not render "train" as "学习" ("learn"), and it translates "task-agnostic" literally instead of paraphrasing it.

Slightly slower than the first model (Qwen2-57B-A14B-Instruct), and it occasionally runs into network errors.

meta-llama/Meta-Llama-3-8B-Instruct

This one is, frankly, something else.

[Screenshot: translation by meta-llama/Meta-Llama-3-8B-Instruct]

It even misspells words, and it leaves some terms untranslated even though they are not proper nouns.

Qwen/Qwen2-72B-Instruct

The slowest one tested so far.

[Screenshot: translation by Qwen/Qwen2-72B-Instruct]

It renders "task-agnostic" as "他们不针对具体任务" ("they do not target specific tasks"); that is a free translation, but honestly I don't much like it.

GPT-3.5-turbo-16K (not an open-source model, for comparison)

[Screenshot: translation by GPT-3.5-turbo-16K]

In the end, GPT still feels like the best translator of the bunch.
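
Because SiliconFlow exposes an OpenAI-compatible API, the same harness can target either provider just by swapping `base_url`. A hedged sketch using the official `openai` SDK (the client setup and prompt are my assumptions, not the post's actual code):

```python
import os

from openai import OpenAI

def make_client(provider: str) -> OpenAI:
    # SiliconFlow mirrors the OpenAI API, so only base_url and key differ.
    if provider == "siliconflow":
        return OpenAI(
            base_url="https://api.siliconflow.cn/v1",
            api_key=os.environ["SILICONFLOW_API_KEY"],
        )
    return OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # defaults to api.openai.com

client = make_client("openai")
reply = client.chat.completions.create(
    model="gpt-3.5-turbo-16k",
    messages=[
        {"role": "user", "content": "Translate the following into Chinese: <passage>"},
    ],
)
print(reply.choices[0].message.content)
```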

  • Title: A Quick Evaluation of Open-Source LLM Translation Ability
  • Author: SYuan03
  • Created: 2024-07-24 16:11:28
  • Updated: 2024-07-24 16:35:10
  • Link: https://bblog.031105.xyz/posts/杂记/开源llm翻译能力简单评测.html
  • License: this post is licensed under CC BY-NC-SA 4.0.