Question

I'm running Python 3.11 with the latest version of llama-cpp-python and a GGUF model. I want the code to work normally like a chatbot, but I get this error:
Traceback (most recent call last):
  File "d:\ai custom\ai arush\server.py", line 223, in <module>
    init()
  File "d:\ai custom\ai arush\server.py", line 57, in init
    m_eval(model, m_tokenize(model, prompt_init, True), False, "starting up...")
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\ai custom\ai arush\server.py", line 182, in m_tokenize
    n_tokens = llama_cpp.llama_tokenize(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: llama_tokenize() missing 2 required positional arguments: 'add_bos' and 'special'
Here is my tokenization code:
def m_tokenize(model: llama_cpp.Llama, text: bytes, add_bos=False, special=False):
    assert model.ctx is not None
    n_ctx = llama_cpp.llama_n_ctx(model.ctx)
    tokens = (llama_cpp.llama_token * int(n_ctx))()
    n_tokens = llama_cpp.llama_tokenize(
        model.ctx,
        text,
        tokens,
        n_ctx,
        llama_cpp.c_bool(add_bos),
    )
    if int(n_tokens) < 0:
        raise RuntimeError(f'Failed to tokenize: text="{text}" n_tokens={n_tokens}')
    return list(tokens[:n_tokens])
Answer

To fix the TypeError: llama_tokenize() missing 2 required positional arguments: 'add_bos' and 'special', you need to pass the add_bos and special arguments in the call to llama_tokenize():
def m_tokenize(model: llama_cpp.Llama, text: bytes, add_bos=False, special=False):
    assert model.ctx is not None
    n_ctx = llama_cpp.llama_n_ctx(model.ctx)
    tokens = (llama_cpp.llama_token * int(n_ctx))()
    n_tokens = llama_cpp.llama_tokenize(
        model.ctx,
        text,
        tokens,
        n_ctx,
        # Pass the plain Python bools from the function parameters; ctypes
        # converts them to c_bool automatically, so wrapping them in
        # llama_cpp.c_bool(...) is unnecessary.
        add_bos,
        special,  # the previously missing 'special' argument
    )
    if int(n_tokens) < 0:
        raise RuntimeError(f'Failed to tokenize: text="{text}" n_tokens={n_tokens}')
    return list(tokens[:n_tokens])
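For reference, a minimal usage sketch of the fixed helper, assuming the call now matches your installed binding (the model path below is a placeholder; point it at your own GGUF file):

import llama_cpp

# Hypothetical path for illustration only.
model = llama_cpp.Llama(model_path="./models/model.gguf")

tokens = m_tokenize(model, b"Hello, world!", add_bos=True)
print(tokens)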
From llama_cpp.py (GitHub), starting at line 1817:
def llama_tokenize(
    model: llama_model_p,
    text: bytes,
    text_len: Union[c_int, int],
    tokens,  # type: Array[llama_token]
    n_max_tokens: Union[c_int, int],
    add_bos: Union[c_bool, bool],
    special: Union[c_bool, bool],
) -> int:
    """Convert the provided text into tokens."""
    return _lib.llama_tokenize(
        model, text, text_len, tokens, n_max_tokens, add_bos, special
    )


_lib.llama_tokenize.argtypes = [
    llama_model_p,
    c_char_p,
    c_int32,
    llama_token_p,
    c_int32,
    c_bool,
    c_bool,
]
_lib.llama_tokenize.restype = c_int32
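Note that this signature takes the model pointer (llama_model_p) as its first argument and a text_len argument after text, neither of which the helper above passes. If your installed version matches this signature exactly, the call needs those two adjustments as well. A sketch under that assumption (the model.model attribute and the Llama.n_ctx() method are how some releases expose these values; check your installed version):

def m_tokenize(model: llama_cpp.Llama, text: bytes, add_bos=False, special=False):
    n_ctx = model.n_ctx()
    tokens = (llama_cpp.llama_token * int(n_ctx))()
    n_tokens = llama_cpp.llama_tokenize(
        model.model,   # llama_model_p held by the Llama wrapper (assumption)
        text,
        len(text),     # text_len: byte length of the prompt
        tokens,
        n_ctx,         # n_max_tokens
        add_bos,
        special,
    )
    if int(n_tokens) < 0:
        raise RuntimeError(f'Failed to tokenize: text="{text}" n_tokens={n_tokens}')
    return list(tokens[:n_tokens])

Alternatively, if you don't need the low-level binding, the high-level wrapper exposes Llama.tokenize(text, add_bos=True, special=False) in recent releases, which handles these arguments for you.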