OpenAI

Add files

Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
main
Jael Gu 1 year ago
parent commit 189aae5ed2
  1. 85  README.md
  2. 5  __init__.py
  3. 77  openai_chat.py
  4. 1  requirements.txt

85  README.md

@@ -1,2 +1,85 @@
# OpenAI
# OpenAI Chat Completion
*author: Jael*
<br />
## Description
An LLM operator generates an answer to the prompt given in messages, using a large language model or service.
This operator is implemented with the Chat Completion method from [OpenAI](https://platform.openai.com/docs/guides/chat).
Please note that you need an [OpenAI API key](https://platform.openai.com/account/api-keys) to access OpenAI.
<br />
## Code Example
Use the default model to continue the conversation from the given messages.
*Write a pipeline with explicit input/output name specifications:*
```python
from towhee import pipe, ops

p = (
    pipe.input('messages')
    .map('messages', 'answer', ops.LLM.OpenAI(api_key=OPENAI_API_KEY))
    .output('messages', 'answer')
)

messages = [
    {'question': 'Who won the world series in 2020?', 'answer': 'The Los Angeles Dodgers won the World Series in 2020.'},
    {'question': 'Where was it played?'}
]
answer = p(messages)
```
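`OPENAI_API_KEY` above is a placeholder for your own key. When no `api_key` argument is supplied, the operator reads the key from the `OPENAI_API_KEY` environment variable, so a minimal sketch of providing the key is simply:

```python
import os

# Assumes the key has been exported as the OPENAI_API_KEY environment variable;
# alternatively, assign your key string directly.
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
```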
<br />
## Factory Constructor
Create the operator via the following factory method:
***LLM.OpenAI(model_name: str, api_key: str)***
**Parameters:**
***model_name***: *str*
The model name as a string, defaults to 'gpt-3.5-turbo'. Supported model names:
- gpt-3.5-turbo
- gpt-3.5-turbo-0301
***api_key***: *str=None*
The OpenAI API key as a string, defaults to None. If the OPENAI_API_KEY environment variable is set, it takes precedence over this argument.
***\*\*kwargs***
Other OpenAI parameters such as max_tokens, stream, temperature, etc.
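These keyword arguments are forwarded to the Chat Completion call. A minimal sketch of a pipeline tuned with `temperature` and `max_tokens` (both standard Chat Completion parameters), assuming `OPENAI_API_KEY` holds a valid key:

```python
from towhee import pipe, ops

p = (
    pipe.input('messages')
    .map('messages', 'answer',
         ops.LLM.OpenAI(model_name='gpt-3.5-turbo',
                        api_key=OPENAI_API_KEY,
                        temperature=0.2,
                        max_tokens=256))
    .output('answer')
)
```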
<br />
## Interface
The operator takes a list of messages as input.
It returns the generated answer as a string (or a stream of chunks when `stream=True`).
***\_\_call\_\_(messages)***
**Parameters:**
***messages***: *list*
A list of messages to set up the chat.
Must be a list of dictionaries whose keys are drawn from "system", "question", and "answer". For example, [{"question": "a past question?", "answer": "a past answer."}, {"question": "current question?"}] (see the sketch at the end of this section).
**Returns**:
*answer: str*
The next answer generated by the "assistant" role.
<br />
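Internally, the keys "system", "question", and "answer" are mapped to the OpenAI roles "system", "user", and "assistant", and messages already written in role/content form are passed through unchanged (see `parse_inputs` in `openai_chat.py`). A minimal sketch of the two equivalent input styles:

```python
# Key/value style described above.
messages = [
    {'question': 'Who won the world series in 2020?', 'answer': 'The Los Angeles Dodgers won the World Series in 2020.'},
    {'question': 'Where was it played?'}
]

# Equivalent role/content style, accepted as-is.
messages = [
    {'role': 'user', 'content': 'Who won the world series in 2020?'},
    {'role': 'assistant', 'content': 'The Los Angeles Dodgers won the World Series in 2020.'},
    {'role': 'user', 'content': 'Where was it played?'}
]
```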

5  __init__.py

@@ -0,0 +1,5 @@
from .openai_chat import OpenAI as OpenAIChat


def OpenAI(*args, **kwargs):
    # Alias the imported class so this factory function does not shadow it
    # and recurse into itself.
    return OpenAIChat(*args, **kwargs)

77  openai_chat.py

@@ -0,0 +1,77 @@
# Copyright 2021 Zilliz. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from typing import List
import openai
from towhee.operator.base import PyOperator


class OpenAI(PyOperator):
    '''Wrapper of OpenAI Chat API'''

    def __init__(self,
                 model_name: str = 'gpt-3.5-turbo',
                 api_key: str = None,
                 **kwargs
                 ):
        # The OPENAI_API_KEY environment variable, if set, takes precedence
        # over the api_key argument.
        openai.api_key = os.getenv('OPENAI_API_KEY', api_key)
        self._model = model_name
        self.kwargs = kwargs

    def __call__(self, messages: List[dict]):
        messages = self.parse_inputs(messages)
        response = openai.ChatCompletion.create(
            model=self._model,
            messages=messages,
            n=1,
            **self.kwargs
        )
        if self.kwargs.get('stream'):
            # Return a generator of delta chunks; keeping the generator in a
            # separate method avoids mixing `yield` and `return` here.
            return self._stream_answer(response)
        answer = response['choices'][0]['message']['content']
        return answer

    @staticmethod
    def _stream_answer(response):
        for chunk in response:
            ans = chunk['choices'][0]['delta']
            yield ans

    def parse_inputs(self, messages: List[dict]):
        assert isinstance(messages, list), \
            'Inputs must be a list of dictionaries with keys from ["system", "question", "answer"].'
        new_messages = []
        for m in messages:
            if ('role' in m and 'content' in m) and (m['role'] in ['system', 'assistant', 'user']):
                new_messages.append(m)
            else:
                for k, v in m.items():
                    if k == 'question':
                        new_m = {'role': 'user', 'content': v}
                    elif k == 'answer':
                        new_m = {'role': 'assistant', 'content': v}
                    elif k == 'system':
                        new_m = {'role': 'system', 'content': v}
                    else:
                        raise KeyError(
                            'Invalid message key: only accept keys from ["system", "question", "answer"].')
                    new_messages.append(new_m)
        return new_messages

    @staticmethod
    def supported_model_names():
        model_list = [
            'gpt-3.5-turbo',
            'gpt-3.5-turbo-0301'
        ]
        model_list.sort()
        return model_list
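
When `stream=True` is passed, `__call__` returns a generator of delta chunks rather than a single string. A minimal sketch of consuming it, assuming this module is importable as `openai_chat` and a valid `OPENAI_API_KEY` is set in the environment:

```python
from openai_chat import OpenAI

chat = OpenAI(model_name='gpt-3.5-turbo', stream=True, temperature=0.2)
chunks = chat([{'question': 'Who won the world series in 2020?'}])

# Each chunk is the `delta` dict from a streamed choice; the generated text,
# when present, lives under the 'content' key.
answer = ''.join(c.get('content', '') for c in chunks)
print(answer)
```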

1  requirements.txt

@@ -0,0 +1 @@
openai>=0.27