# OpenAI Chat Completion
*author: Jael*
<br />
## Description
An LLM operator that generates an answer to the prompt in the given messages using a large language model or service.
This operator is implemented with the Chat Completion method from [OpenAI](https://platform.openai.com/docs/guides/chat).
Please note that you need an [OpenAI API key](https://platform.openai.com/account/api-keys) to access OpenAI.
<br />
## Code Example
Use the default model to continue the conversation from the given messages.
*Write a pipeline with explicit inputs/outputs name specifications:*
```python
from towhee import pipe, ops

OPENAI_API_KEY = 'your-openai-api-key'  # replace with your own key

p = (
    pipe.input('messages')
        .map('messages', 'answer', ops.LLM.OpenAI(api_key=OPENAI_API_KEY))
        .output('messages', 'answer')
)

messages = [
    {'question': 'Who won the world series in 2020?', 'answer': 'The Los Angeles Dodgers won the World Series in 2020.'},
    {'question': 'Where was it played?'}
]
answer = p(messages).get()[1]  # outputs are ('messages', 'answer'); index 1 is the answer
```
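For illustration, the list-of-dicts message format above can be flattened into OpenAI-style role messages roughly as follows. This is a minimal sketch of the assumed mapping (`to_openai_messages` is a hypothetical helper, not part of the operator's API):

```python
def to_openai_messages(messages):
    """Flatten {'system'/'question'/'answer'} dicts into role-tagged chat messages."""
    key_to_role = [('system', 'system'), ('question', 'user'), ('answer', 'assistant')]
    chat = []
    for m in messages:
        for key, role in key_to_role:
            if key in m:
                chat.append({'role': role, 'content': m[key]})
    return chat

history = [
    {'question': 'Who won the world series in 2020?',
     'answer': 'The Los Angeles Dodgers won the World Series in 2020.'},
    {'question': 'Where was it played?'},
]
chat = to_openai_messages(history)
# roles in order: 'user', 'assistant', 'user'
```

Each past question/answer pair becomes a user/assistant turn, so the model sees the full conversation history before answering the latest question.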
*Write a [retrieval-augmented generation pipeline](https://towhee.io/tasks/detail/pipeline/retrieval-augmented-generation) with explicit inputs/outputs name specifications:*
```python
from towhee import pipe, ops

OPENAI_API_KEY = 'your-openai-api-key'  # replace with your own key

temp = '''Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Helpful Answer:
'''

docs = ['You can install towhee via command `pip install towhee`.']
history = [
    ('What is Towhee?', 'Towhee is an open-source machine learning pipeline that helps you encode your unstructured data into embeddings.')
]
question = 'How to install it?'

p = (
    pipe.input('question', 'docs', 'history')
        .map(('question', 'docs', 'history'), 'prompt', ops.prompt.template(temp, ['question', 'context']))
        .map('prompt', 'answer',
             ops.LLM.OpenAI(api_key=OPENAI_API_KEY, temperature=0.5, max_tokens=100))
        .output('answer')
)

answer = p(question, docs, history).get()[0]
```
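Conceptually, the template step amounts to string formatting: the retrieved docs fill the `{context}` slot and the question fills `{question}`. A standalone sketch of that rendering (assuming simple `str.format` substitution, which may differ from the internals of `ops.prompt.template`):

```python
temp = '''Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Helpful Answer:
'''

docs = ['You can install towhee via command `pip install towhee`.']
question = 'How to install it?'

# Join retrieved documents into one context block, then render the template.
prompt = temp.format(context='\n'.join(docs), question=question)
```

The resulting prompt is what the `LLM.OpenAI` stage receives as its input text.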
<br />
## Factory Constructor
Create the operator via the following factory method:
***LLM.OpenAI(model_name: str, api_key: str)***
**Parameters:**
***model_name***: *str*
The model name in string, defaults to 'gpt-3.5-turbo'. Supported model names:
- gpt-3.5-turbo
- gpt-3.5-turbo-0301
***api_key***: *str=None*
The OpenAI API key in string, defaults to None.
***\*\*kwargs***
Other OpenAI parameters such as `max_tokens`, `stream`, `temperature`, etc.
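Conceptually, these extra keyword arguments are forwarded to the Chat Completion request alongside the model name. A sketch of that merging (illustrative only; `build_request` is a hypothetical helper, not part of the operator's API):

```python
def build_request(messages, model_name='gpt-3.5-turbo', **kwargs):
    """Merge caller-supplied OpenAI parameters into the request payload."""
    payload = {'model': model_name, 'messages': messages}
    payload.update(kwargs)  # e.g. max_tokens, stream, temperature
    return payload

req = build_request([{'role': 'user', 'content': 'Hi'}],
                    temperature=0.5, max_tokens=100)
```

Any parameter accepted by the underlying API can thus be passed straight through the factory constructor.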
<br />
## Interface
The operator takes a list of messages as input.
It returns the generated answer as a string.
***\_\_call\_\_(messages)***
**Parameters:**
***messages***: *list*
A list of messages to set up the chat.
Must be a list of dictionaries with keys from "system", "question", and "answer". For example, [{"question": "a past question?", "answer": "a past answer."}, {"question": "current question?"}]
**Returns**:
*answer: str*
The next answer generated by role "assistant".
<br />
# More Resources
- [ChatGPT+ Vector database + prompt-as-code - The CVP Stack - Zilliz blog](https://zilliz.com/blog/ChatGPT-VectorDB-Prompt-as-code): Extend the capability of ChatGPT with a Vector database and prompts-as-code
- [OpenAI's ChatGPT - Zilliz blog](https://zilliz.com/learn/ChatGPT-Vector-Database-Prompt-as-code): A guide to the new AI Stack - ChatGPT, your Vector Database, and Prompt as code
- [LLama2 vs ChatGPT: How They Perform in Question Answering - Zilliz blog](https://zilliz.com/blog/comparing-meta-ai-Llama2-openai-chatgpt): What is Llama 2, and how does it perform in question answering compared to ChatGPT?
- [Building a Chatbot for Toward the Science with Zilliz Cloud (Part I) - Zilliz blog](https://zilliz.com/blog/chat-towards-data-science-building-chatbot-with-zilliz-cloud): Building a chatbot for the Towards Data Science publication using the Zilliz vector database
- [Improving ChatGPT’s Ability to Understand Ambiguous Prompts - Zilliz blog](https://zilliz.com/blog/improving-chatgpts-ability-to-understand-ambiguous-prompts): Prompt engineering technique helps large language models (LLMs) handle pronouns and other complex coreferences in retrieval augmented generation (RAG) systems.
- [Prompting in LangChain - Zilliz blog](https://zilliz.com/blog/prompting-langchain): Prompting is one of today's most popular and important tasks in AI app building. Learn how to use LangChain for more complex prompts.
- [OpenAI Whisper: Transforming Speech-to-Text with Advanced AI - Zilliz blog](https://zilliz.com/learn/open-ai-whisper-transforming-speech-to-text-with-advanced-ai): Understand Open AI Whisper and follow this step-by-step article to implement it in projects that can significantly enhance the efficiency of speech-to-text tasks.