LLM
Dolly
author: Jael
Description
An LLM operator generates an answer given a prompt in messages using a large language model or service. This operator uses a pretrained Dolly model to generate responses. It will download the model from HuggingFace Models.
Code Example
Use the default model to continue the conversation from the given messages.
Write a pipeline with explicit input/output name specifications:
from towhee import pipe, ops

p = (
    pipe.input('messages')
        .map('messages', 'answer', ops.LLM.Dolly())
        .output('messages', 'answer')
)

messages = [
    {'question': 'Who won the world series in 2020?', 'answer': 'The Los Angeles Dodgers won the World Series in 2020.'},
    {'question': 'Where was it played?'}
]

answer = p(messages)
Factory Constructor
Create the operator via the following factory method:
LLM.Dolly(model_name: str)
Parameters:
model_name: str
The model name as a string; defaults to 'databricks/dolly-v2-12b'. Supported model names:
- databricks/dolly-v2-12b
- databricks/dolly-v2-7b
- databricks/dolly-v2-3b
- databricks/dolly-v1-6b
**kwargs
Additional Dolly model parameters, such as device_map.
Interface
The operator takes a list of messages as input and returns the generated answer as a string.
__call__(messages)
Parameters:
messages: list
 A list of messages to set up the chat. Must be a list of dictionaries whose keys are drawn from "system", "question", and "answer". For example, [{"question": "a past question?", "answer": "a past answer."}, {"question": "current question?"}]
Returns:
answer: str
 The answer generated.
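To illustrate the expected message structure, the sketch below flattens such a message list into a single prompt string using the "### Instruction:" / "### Response:" template that Dolly models were instruction-tuned on. Whether this operator builds its prompt exactly this way is an assumption; the helper name build_prompt is hypothetical and for illustration only.

```python
def build_prompt(messages):
    # Flatten a list of {"system"/"question"/"answer"} dicts into one
    # Dolly-style prompt. The template is an assumption for illustration.
    parts = []
    for m in messages:
        if 'system' in m:
            parts.append(m['system'])
        if 'question' in m:
            parts.append('### Instruction:\n' + m['question'])
        if 'answer' in m:
            parts.append('### Response:\n' + m['answer'])
    # End with an open response header so the model continues from it.
    parts.append('### Response:\n')
    return '\n\n'.join(parts)

messages = [
    {'question': 'Who won the world series in 2020?',
     'answer': 'The Los Angeles Dodgers won the World Series in 2020.'},
    {'question': 'Where was it played?'}
]
prompt = build_prompt(messages)
```

Past question/answer pairs become alternating instruction/response sections, so the model sees the prior turns as context before generating the next answer.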