LLM: OpenAI Chat Completion

Author: Jael
Description
An LLM operator generates an answer to the prompt in the given messages using a large language model or service. This operator is implemented with the Chat Completion method from OpenAI. Note that you need an OpenAI API key to access the OpenAI service.
Code Example
Use the default model to continue the conversation from the given messages.

Write a pipeline with explicit input/output name specifications:

from towhee import pipe, ops

p = (
    pipe.input('messages')
        .map('messages', 'answer', ops.LLM.OpenAI(api_key=OPENAI_API_KEY))
        .output('messages', 'answer')
)

messages = [
    {'question': 'Who won the world series in 2020?', 'answer': 'The Los Angeles Dodgers won the World Series in 2020.'},
    {'question': 'Where was it played?'}
]
answer = p(messages)
Factory Constructor
Create the operator via the following factory method:
LLM.OpenAI(model_name: str, api_key: str)
Parameters:
model_name: str
The model name in string, defaults to 'gpt-3.5-turbo'. Supported model names:
- gpt-3.5-turbo
- gpt-3.5-turbo-0301
api_key: str=None
The OpenAI API key in string, defaults to None.
**kwargs
Other OpenAI parameters such as max_tokens, stream, temperature, etc.
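As an illustration of how these keyword arguments could be combined with the defaults into the request sent to the Chat Completion endpoint, here is a minimal sketch (the merging logic and names below are hypothetical, not the operator's actual implementation):

```python
# Hypothetical sketch: merge default settings, constructor kwargs,
# and chat messages into a Chat Completion request payload.
DEFAULTS = {'model': 'gpt-3.5-turbo'}

def build_request(messages, **kwargs):
    """Combine defaults, user kwargs, and messages into one payload dict."""
    payload = dict(DEFAULTS)
    payload.update(kwargs)          # e.g. max_tokens, temperature, stream
    payload['messages'] = messages
    return payload

req = build_request([{'role': 'user', 'content': 'Hi'}],
                    temperature=0.2, max_tokens=64)
# req carries the default model name plus the user-supplied parameters
```

Any parameter passed through **kwargs simply overrides or extends the defaults in this sketch; the operator forwards them to OpenAI unchanged.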
Interface
The operator takes a list of messages as input and returns the answer generated by the model as a string.

__call__(messages)
Parameters:
messages: list
A list of messages to set up chat. Must be a list of dictionaries with key value from "system", "question", "answer". For example, [{"question": "a past question?", "answer": "a past answer."}, {"question": "current question?"}]
Returns:
answer: str
The next answer generated by role "assistant".
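The mapping from the operator's "system"/"question"/"answer" dictionaries to OpenAI's role-based chat format can be sketched as follows (a hypothetical helper for illustration, not part of the operator's public API):

```python
# Hypothetical sketch: convert the operator's message format
# ({'system'/'question'/'answer': ...}) into OpenAI chat roles.
ROLE_MAP = {'system': 'system', 'question': 'user', 'answer': 'assistant'}

def to_openai_messages(messages):
    """Flatten question/answer dicts into role/content message dicts."""
    out = []
    for m in messages:
        for key, content in m.items():
            out.append({'role': ROLE_MAP[key], 'content': content})
    return out

msgs = to_openai_messages([
    {'question': 'Who won the world series in 2020?',
     'answer': 'The Los Angeles Dodgers won the World Series in 2020.'},
    {'question': 'Where was it played?'},
])
# Past turns become alternating user/assistant messages, and the
# trailing unanswered question becomes the final user message.
```

The model's reply to that final user message is what the operator returns as the "answer" string.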