image-text-embedding
Japanese Image-Text Retrieval Embedding with CLIP
author: David Wang
Description
This operator extracts features from an image or text with Japanese-CLIP, developed by rinna Co., Ltd. The model generates embeddings for Japanese text and images by jointly training an image encoder and a text encoder to maximize the cosine similarity of matching image-text pairs.
Code Example
Load an image from the path './teddy.jpg' to generate an image embedding, and read the text 'スケートボードに乗っているテディベア。' ('A teddy bear riding a skateboard.') to generate a text embedding. The pipelines below use explicit input/output name specifications:
```python
from towhee import pipe, ops, DataCollection

img_pipe = (
    pipe.input('url')
        .map('url', 'img', ops.image_decode.cv2_rgb())
        .map('img', 'vec', ops.image_text_embedding.japanese_clip(model_name='japanese-clip-vit-b-16', modality='image'))
        .output('img', 'vec')
)

text_pipe = (
    pipe.input('text')
        .map('text', 'vec', ops.image_text_embedding.japanese_clip(model_name='japanese-clip-vit-b-16', modality='text'))
        .output('text', 'vec')
)

DataCollection(img_pipe('./teddy.jpg')).show()
DataCollection(text_pipe('スケートボードに乗っているテディベア。')).show()
```
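Since the model is trained to maximize the cosine similarity of matching image-text pairs, retrieval amounts to ranking candidate text embeddings by their cosine similarity to an image embedding. The sketch below illustrates this ranking step with NumPy on made-up vectors; `img_vec` and `text_vecs` are hypothetical stand-ins for the `vec` outputs of the pipelines above.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings standing in for the pipelines' 'vec' outputs.
img_vec = np.array([0.2, 0.8, 0.1])
text_vecs = {
    "text_a": np.array([0.21, 0.79, 0.12]),  # points in nearly the same direction
    "text_b": np.array([0.9, -0.1, 0.3]),    # points in a different direction
}

# Rank candidate texts by similarity to the image embedding, best first.
ranked = sorted(text_vecs,
                key=lambda k: cosine_similarity(img_vec, text_vecs[k]),
                reverse=True)
print(ranked)  # 'text_a' ranks first
```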
Factory Constructor
Create the operator via the following factory method
japanese_clip(model_name, modality)
Parameters:
model_name: str
The model name of Japanese CLIP. Supported model names:
- japanese-clip-vit-b-16
- japanese-cloob-vit-b-16
modality: str
Which modality (image or text) is used to generate the embedding.
Interface
An image-text embedding operator takes a Towhee image or a string as input and generates an embedding as an ndarray.
Parameters:
data: towhee.types.Image (a sub-class of numpy.ndarray) or str
The data (image or text, based on the specified modality) used to generate the embedding.
Returns: numpy.ndarray
The data embedding extracted by the model.
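When storing these ndarray embeddings in a vector database, it is common to L2-normalize them first so that inner-product search is equivalent to cosine similarity. This is a minimal sketch of that preprocessing step; the `emb` vector is a made-up stand-in for the operator's output.

```python
import numpy as np

def l2_normalize(vec: np.ndarray) -> np.ndarray:
    """Scale an embedding to unit length; inner product then equals cosine similarity."""
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

emb = np.array([3.0, 4.0])   # stand-in for the operator's ndarray output
unit = l2_normalize(emb)
print(unit)                  # [0.6 0.8]
print(np.dot(unit, unit))    # unit length, so self inner product is 1.0
```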
More Resources
- From Text to Image: Fundamentals of CLIP - Zilliz blog: Search algorithms rely on semantic similarity to retrieve the most relevant results. With the CLIP model, the semantics of texts and images can be connected in a high-dimensional vector space. Read this simple introduction to see how CLIP can help you build a powerful text-to-image service.
- Supercharged Semantic Similarity Search in Production - Zilliz blog: Building a Blazing Fast, Highly Scalable Text-to-Image Search with CLIP embeddings and Milvus, the most advanced open-source vector database.
- The guide to clip-vit-base-patch32 | OpenAI: clip-vit-base-patch32: a CLIP multimodal model variant by OpenAI for image and text embedding.
- The guide to jina-embeddings-v2-base-en | Jina AI: jina-embeddings-v2-base-en: a specialized embedding model for English text and long documents; supports sequences of up to 8192 tokens.
- An LLM Powered Text to Image Prompt Generation with Milvus - Zilliz blog: An interesting LLM project powered by the Milvus vector database for generating more efficient text-to-image prompts.
- Image Embeddings for Enhanced Image Search - Zilliz blog: Image Embeddings are the core of modern computer vision algorithms. Understand their implementation and use cases and explore different image embedding models.
- Training Text Embeddings with Jina AI - Zilliz blog: In a recent talk by Bo Wang, he discussed the creation of Jina text embeddings for modern vector search and RAG systems. He also shared methodologies for training embedding models that effectively encode extensive information, along with guidance o