# Azure OpenAI Chat Completion
*author: David Wang*
## Description
This LLM operator generates an answer from the prompt in messages using a large language model or service.
This operator is implemented with the Chat Completion method from [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions).
Please note that you need an [OpenAI API key](https://platform.openai.com/account/api-keys) to access the OpenAI service.
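The examples below reference `OPENAI_API_KEY` and `OPENAI_API_BASE` as placeholders. One way to supply them (an illustrative sketch; the variable names are this document's placeholders, not part of the operator's API) is via environment variables:

```python
import os

# Illustrative only: the examples below assume these two variables exist.
# For Azure OpenAI, the key and endpoint come from your Azure resource.
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY', '<your-api-key>')
OPENAI_API_BASE = os.getenv('OPENAI_API_BASE', 'https://<your-resource>.openai.azure.com/')
```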
## Code Example
Use the default model to continue the conversation from the given messages.
*Write a pipeline with explicit inputs/outputs name specifications:*
```python
from towhee import pipe, ops
p = (
    pipe.input('messages')
        .map('messages', 'answer', ops.LLM.Azure_OpenAI(api_key=OPENAI_API_KEY, api_base=OPENAI_API_BASE))
        .output('messages', 'answer')
)

messages = [
    {'question': 'Who won the world series in 2020?', 'answer': 'The Los Angeles Dodgers won the World Series in 2020.'},
    {'question': 'Where was it played?'}
]

answer = p(messages).get()[0]
```
*Write a [retrieval-augmented generation pipeline](https://towhee.io/tasks/detail/pipeline/retrieval-augmented-generation) with explicit inputs/outputs name specifications:*
```python
from towhee import pipe, ops
temp = '''Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
Helpful Answer:
'''
docs = ['You can install towhee via command `pip install towhee`.']
history = [
('What is Towhee?', 'Towhee is an open-source machine learning pipeline that helps you encode your unstructured data into embeddings.')
]
question = 'How to install it?'
p = (
    pipe.input('question', 'docs', 'history')
        .map(('question', 'docs', 'history'), 'prompt', ops.prompt.template(temp, ['question', 'context']))
        .map('prompt', 'answer',
             ops.LLM.Azure_OpenAI(api_key=OPENAI_API_KEY, api_base=OPENAI_API_BASE, temperature=0.5, max_tokens=100))
        .output('answer')
)

answer = p(question, docs, history).get()[0]
```
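To see roughly what the pipeline feeds the LLM, the template filling can be approximated with plain `str.format` (a sketch; the actual behavior of `ops.prompt.template`, e.g. how it joins multiple docs, may differ):

```python
temp = '''Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
Helpful Answer:
'''

docs = ['You can install towhee via command `pip install towhee`.']
question = 'How to install it?'

# Join the retrieved documents into one context string, then fill the template.
prompt = temp.format(context='\n'.join(docs), question=question)
print(prompt)
```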
## Factory Constructor
Create the operator via the following factory method:
***LLM.Azure_OpenAI(deployment_name: str, api_key: str)***
**Parameters:**
***deployment_name***: *str*
Deployments provide endpoints to the Azure OpenAI base models, or to your fine-tuned models, configured with settings to meet your needs.
***api_type***: *str='azure'*
The OpenAI API type in string, defaults to 'azure'.
***api_version***: *str='2023-07-01-preview'*
The OpenAI API version in string, defaults to '2023-07-01-preview'.
***api_key***: *str=None*
The OpenAI API key in string, defaults to None.
***api_base***: *str=None*
The OpenAI API base in string, defaults to None.
***\*\*kwargs***
Other OpenAI parameters such as max_tokens, stream, temperature, etc.
## Interface
The operator takes a list of messages as input.
It returns the generated answer as a string.
***\_\_call\_\_(messages)***
**Parameters:**
***messages***: *list*
A list of messages to set up the chat.
Must be a list of dictionaries whose keys are drawn from "system", "question", and "answer". For example, `[{"question": "a past question?", "answer": "a past answer."}, {"question": "current question?"}]`.
**Returns**:
*answer: str*
The next answer generated by role "assistant".
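As an illustration of the expected message shape, here is a small hypothetical helper (not part of the operator's API) that checks a message list uses only the allowed keys:

```python
def valid_messages(messages):
    """Return True if every message is a dict using only the allowed keys.

    Illustrative helper only; the operator itself does not expose this.
    """
    allowed = {'system', 'question', 'answer'}
    return all(isinstance(m, dict) and set(m) <= allowed for m in messages)

history = [
    {'question': 'a past question?', 'answer': 'a past answer.'},
    {'question': 'current question?'},
]
print(valid_messages(history))  # True
```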
# More Resources
- [ChatGPT+ Vector database + prompt-as-code - The CVP Stack - Zilliz blog](https://zilliz.com/blog/ChatGPT-VectorDB-Prompt-as-code): Extend the capability of ChatGPT with a Vector database and prompts-as-code
- [OpenAI's ChatGPT - Zilliz blog](https://zilliz.com/learn/ChatGPT-Vector-Database-Prompt-as-code): A guide to the new AI Stack - ChatGPT, your Vector Database, and Prompt as code
- [OpenAI Whisper: Transforming Speech-to-Text with Advanced AI - Zilliz blog](https://zilliz.com/learn/open-ai-whisper-transforming-speech-to-text-with-advanced-ai): Understand Open AI Whisper and follow this step-by-step article to implement it in projects that can significantly enhance the efficiency of speech-to-text tasks.
- [OpenAI RAG vs. Your Customized RAG: Which One Is Better? - Zilliz blog](https://zilliz.com/blog/openai-rag-vs-customized-rag-which-one-is-better): Comparing the performance of the OpenAI Assistants-enabled RAG system and the Milvus vector database-powered customized RAG system.
- [Prompting in LangChain - Zilliz blog](https://zilliz.com/blog/prompting-langchain): Prompting is one of today's most popular and important tasks in AI app building. Learn how to use LangChain for more complex prompts.
- [Improving ChatGPT's Ability to Understand Ambiguous Prompts - Zilliz blog](https://zilliz.com/blog/improving-chatgpts-ability-to-understand-ambiguous-prompts): Prompt engineering technique helps large language models (LLMs) handle pronouns and other complex coreferences in retrieval augmented generation (RAG) systems.
- [How to Integrate OpenAI Embedding API with Zilliz Cloud - Zilliz blog](https://zilliz.com/blog/how-to-integrate-openai-embedding-api-with-zilliz-cloud): We are proud to announce that we'll be providing embedding model integrations - a way to connect your Milvus and/or Zilliz Cloud database to open source or paid embedding models.