From 20f35513567d07a10f74ae6fd7ff10b6b93bbb99 Mon Sep 17 00:00:00 2001
From: Jael Gu
Date: Wed, 18 Sep 2024 13:30:20 +0800
Subject: [PATCH] Add more resources

Signed-off-by: Jael Gu
---
 README.md | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/README.md b/README.md
index 478456c..1fdb40e 100644
--- a/README.md
+++ b/README.md
@@ -68,3 +68,12 @@ An image captioning operator takes a [towhee image](link/to/towhee/image/api/doc
 ​
 
 The caption generated by model.
+
+
+# More Resources
+
+- [Multimodal RAG locally with CLIP and Llama3 - Zilliz blog](https://zilliz.com/blog/multimodal-RAG-with-CLIP-Llama3-and-milvus): A tutorial that walks you through building a multimodal RAG application with CLIP, Llama3, and Milvus.
+- [Supercharged Semantic Similarity Search in Production - Zilliz blog](https://zilliz.com/learn/supercharged-semantic-similarity-search-in-production): Building a blazing-fast, highly scalable text-to-image search with CLIP embeddings and Milvus, an advanced open-source vector database.
+- [The guide to clip-vit-base-patch32 | OpenAI](https://zilliz.com/ai-models/clip-vit-base-patch32): A guide to clip-vit-base-patch32, a CLIP multimodal model variant from OpenAI for image and text embedding.
+- [An LLM Powered Text to Image Prompt Generation with Milvus - Zilliz blog](https://zilliz.com/blog/llm-powered-text-to-image-prompt-generation-with-milvus): An LLM project powered by the Milvus vector database that generates more effective text-to-image prompts.
+- [From Text to Image: Fundamentals of CLIP - Zilliz blog](https://zilliz.com/blog/fundamentals-of-clip): Search algorithms rely on semantic similarity to retrieve the most relevant results. With the CLIP model, the semantics of text and images can be connected in a high-dimensional vector space. Read this simple introduction to see how CLIP can help you build a powerful text-to-image service.
\ No newline at end of file