Azure OpenAI Chat Completion

author: David Wang


Description

An LLM operator generates an answer given a prompt in messages, using a large language model or service. This operator is implemented with the Chat Completion method from Azure OpenAI. Please note that you need an Azure OpenAI API key and endpoint to access the service.


Code Example

Use the default model to continue the conversation from the given messages.

Write a pipeline with explicit input/output name specifications:

from towhee import pipe, ops

OPENAI_API_KEY = 'YOUR_AZURE_OPENAI_KEY'                     # placeholder: your Azure OpenAI key
OPENAI_API_BASE = 'https://YOUR-RESOURCE.openai.azure.com/'  # placeholder: your Azure OpenAI endpoint

p = (
    pipe.input('messages')
        .map('messages', 'answer', ops.LLM.Azure_OpenAI(api_key=OPENAI_API_KEY, api_base=OPENAI_API_BASE))
        .output('messages', 'answer')
)

messages = [
    {'question': 'Who won the world series in 2020?', 'answer': 'The Los Angeles Dodgers won the World Series in 2020.'},
    {'question': 'Where was it played?'}
]
answer = p(messages).get()[1]  # outputs are ('messages', 'answer'), so index 1 is the answer
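
Note that p(messages) returns a towhee DataQueue; calling .get() retrieves one row as a list ordered like the declared outputs ('messages', 'answer'), which is why the answer is taken from index 1 above.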

Write a retrieval-augmented generation pipeline with explicit input/output name specifications:

from towhee import pipe, ops

OPENAI_API_KEY = 'YOUR_AZURE_OPENAI_KEY'                     # placeholder: your Azure OpenAI key
OPENAI_API_BASE = 'https://YOUR-RESOURCE.openai.azure.com/'  # placeholder: your Azure OpenAI endpoint

temp = '''Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}

Helpful Answer:
'''


docs = ['You can install towhee via command `pip install towhee`.']
history = [
    ('What is Towhee?', 'Towhee is an open-source machine learning pipeline that helps you encode your unstructured data into embeddings.')
]
question = 'How to install it?'

p = (
    pipe.input('question', 'docs', 'history')
        .map(('question', 'docs', 'history'), 'prompt', ops.prompt.template(temp, ['question', 'context']))
        .map('prompt', 'answer',
             ops.LLM.Azure_OpenAI(api_key=OPENAI_API_KEY, api_base=OPENAI_API_BASE, temperature=0.5, max_tokens=100)
             )
        .output('answer')
)

answer = p(question, docs, history).get()[0]
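
Here the prompt.template operator fills the {question} and {context} slots of the template, with the retrieved docs supplying the context; the history pairs are presumably carried into the generated message list as prior chat turns, which is what lets the model resolve "it" in the question.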


Factory Constructor

Create the operator via the following factory method:

LLM.Azure_OpenAI(model_name: str, api_key: str, api_base: str)

Parameters:

model_name: str

The model name as a string; defaults to 'gpt-3.5-turbo'. Supported model names:

  • gpt-3.5-turbo
  • gpt-3.5-turbo-16k
  • gpt-3.5-turbo-instruct
  • gpt-3.5-turbo-0613
  • gpt-3.5-turbo-16k-0613

api_type: str='azure'

The OpenAI API type as a string; defaults to 'azure'.

api_version: str='2023-07-01-preview'

The OpenAI API version as a string; defaults to '2023-07-01-preview'.

api_key: str=None

The Azure OpenAI API key as a string; defaults to None.

api_base: str=None

The Azure OpenAI API base (endpoint URL) as a string; defaults to None.

**kwargs

Other OpenAI parameters such as max_tokens, stream, temperature, etc.
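
As a quick sketch (the key and endpoint below are placeholders, not real credentials), extra generation parameters can be passed straight through the factory method:

from towhee import ops

op = ops.LLM.Azure_OpenAI(
    model_name='gpt-3.5-turbo',
    api_key='YOUR_AZURE_OPENAI_KEY',                     # placeholder key
    api_base='https://YOUR-RESOURCE.openai.azure.com/',  # placeholder endpoint
    temperature=0.2,   # forwarded to OpenAI via **kwargs
    max_tokens=200,    # forwarded to OpenAI via **kwargs
)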


Interface

The operator takes a list of messages as input and returns the generated answer as a string.

__call__(messages)

Parameters:

messages: list

A list of messages to set up the chat. It must be a list of dictionaries whose keys are drawn from "system", "question", and "answer". For example: [{"question": "a past question?", "answer": "a past answer."}, {"question": "current question?"}]

Returns:

answer: str

The next answer generated by the "assistant" role.

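A minimal usage sketch (reusing the placeholder operator op constructed above; towhee operators can generally be invoked standalone, outside a pipeline):

messages = [
    {'system': 'You are a helpful assistant.'},
    {'question': 'Who won the world series in 2020?', 'answer': 'The Los Angeles Dodgers won the World Series in 2020.'},
    {'question': 'Where was it played?'}
]
answer = op(messages)  # the assistant's next answer, as a string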