# Dolly Generation

*author: Jael*
## Description

An LLM operator that generates an answer for the given prompt in messages using a large language model or service. This operator uses a pretrained [Dolly](https://github.com/databrickslabs/dolly) model to generate the response. It will download the model from [HuggingFace Models](https://huggingface.co/models).
## Code Example

Use the default model to continue the conversation from the given messages.

*Write a pipeline with explicit input/output name specifications:*

```python
from towhee import pipe, ops

p = (
    pipe.input('messages')
        .map('messages', 'answer', ops.LLM.Dolly())
        .output('messages', 'answer')
)

messages = [
    {'question': 'Who won the world series in 2020?',
     'answer': 'The Los Angeles Dodgers won the World Series in 2020.'},
    {'question': 'Where was it played?'}
]
answer = p(messages)
```
## Factory Constructor

Create the operator via the following factory method:

***LLM.Dolly(model_name: str)***

**Parameters:**

***model_name***: *str*

The model name in string, defaults to 'databricks/dolly-v2-12b'. Supported model names:
- databricks/dolly-v2-12b
- databricks/dolly-v2-7b
- databricks/dolly-v2-3b
- databricks/dolly-v1-6b

***\*\*kwargs***

Other Dolly model parameters such as `device_map`.
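Since only the four Dolly checkpoints above are supported, it can be useful to validate the model name before constructing the operator (which otherwise triggers a large download). A minimal sketch; the helper `check_model_name` is hypothetical and not part of the operator:

```python
# Hypothetical helper: validate a Dolly model name before building the operator.
SUPPORTED_MODELS = {
    'databricks/dolly-v2-12b',
    'databricks/dolly-v2-7b',
    'databricks/dolly-v2-3b',
    'databricks/dolly-v1-6b',
}

def check_model_name(model_name: str = 'databricks/dolly-v2-12b') -> str:
    """Return the model name if it is a supported Dolly checkpoint, else raise."""
    if model_name not in SUPPORTED_MODELS:
        raise ValueError(f'Unsupported Dolly model: {model_name!r}')
    return model_name
```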
## Interface

The operator takes a list of messages as input. It returns the generated answer as a string.

***\_\_call\_\_(messages)***

**Parameters:**

***messages***: *list*

A list of messages to set up the chat. Must be a list of dictionaries with keys chosen from "system", "question", and "answer". For example, [{"question": "a past question?", "answer": "a past answer."}, {"question": "current question?"}]

**Returns**: *answer: str*

The generated answer.
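To illustrate the expected message format, here is a sketch of how such a list could be checked and flattened into a single prompt string. This is only an illustration of the input schema; the operator's actual internal prompt template may differ, and `build_prompt` is a hypothetical helper:

```python
def build_prompt(messages: list) -> str:
    """Validate chat messages and flatten them into one prompt string.

    Each message is a dict whose keys must come from
    {"system", "question", "answer"}, matching the operator's input schema.
    """
    allowed = ('system', 'question', 'answer')
    parts = []
    for msg in messages:
        unknown = set(msg) - set(allowed)
        if unknown:
            raise ValueError(f'Unexpected message keys: {unknown}')
        # Keep a stable order: system context, then question, then answer.
        for key in allowed:
            if key in msg:
                parts.append(f'{key}: {msg[key]}')
    return '\n'.join(parts)
```

For example, a two-turn history with a pending question flattens to three lines: the past question, its answer, and the current question.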