Building a Chat Application for Complex SQL Database Interaction with LangChain, LLMs, and Streamlit
In this article we will see how we can use large language models (LLMs) to interact with a complex database using LangChain agents and tools, and then deploy the chat application using Streamlit.
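The agent pattern the article describes boils down to two pieces: a tool that executes SQL, and an LLM that plans which SQL to run. Below is a minimal offline sketch of that loop, assuming a toy SQLite schema and a stubbed `fake_llm_plan` function in place of a real LangChain SQL agent; both names are hypothetical, not the article's code.

```python
import sqlite3

# Toy database standing in for the "complex SQL database" in the article.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "Ada", 120.0), (2, "Grace", 75.5), (3, "Ada", 30.0)])

def run_sql_tool(query: str) -> list:
    """The 'tool' half: execute SQL proposed by the model, return rows."""
    return conn.execute(query).fetchall()

def fake_llm_plan(question: str) -> str:
    """Stub for the LLM half: a real agent asks the model to translate the
    question into SQL against the live schema. Hard-coded here."""
    return ("SELECT customer, SUM(total) FROM orders "
            "GROUP BY customer ORDER BY customer")

rows = run_sql_tool(fake_llm_plan("How much has each customer spent?"))
print(rows)  # [('Ada', 150.0), ('Grace', 75.5)]
```

In the real application, the Streamlit front end would pass the user's chat message to the agent and render the rows it returns.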
🐍 Python Developers, Brace Yourselves! Multithreading, Full Speed Ahead!
If you gave up on multithreading in Python, I have good news: it's very likely this will change, and you'll be able to disable the GIL very soon. But there's a catch.
Python, what a name for a programming language. It could have been named anything from Lizard to Labrador, but it was named after a snake*.
Extracting Structured Data from Unstructured Text with LLMs
This is Part 1 of my “Understanding Unstructured Data” series. Part 2 focuses on analyzing structured data extracted from unstructured text with a LangChain agent.
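The core of this extraction pattern is a prompt that constrains the model to emit JSON, which you then parse into a structured record. Here is a minimal sketch of that flow; `call_llm` and the field names are hypothetical stand-ins, stubbed with a canned response so the example runs offline.

```python
import json

EXTRACTION_PROMPT = """Extract the person's name, role, and start year from
the text below. Respond with ONLY a JSON object with keys:
name, role, start_year.

Text: {text}"""

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call -- swap in your provider's client.
    Stubbed with a canned response for this sketch."""
    return '{"name": "Jane Doe", "role": "Data Engineer", "start_year": 2021}'

def extract(text: str) -> dict:
    raw = call_llm(EXTRACTION_PROMPT.format(text=text))
    return json.loads(raw)  # in production, validate and retry on bad JSON

record = extract("Jane Doe joined as a Data Engineer in 2021.")
print(record["name"], record["start_year"])  # Jane Doe 2021
```

A production version would add schema validation and a retry loop, since models occasionally return malformed JSON.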
Apache Kafka + Vector Databases + LLMs = Real-Time GenAI
Generative AI (GenAI) enables advanced AI use cases and innovation, but it also changes what enterprise architecture looks like. Large Language Models (LLMs), vector databases, and Retrieval-Augmented Generation (RAG) require new data integration patterns and data engineering best practices.
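The integration pattern implied here is: consume events from a stream, embed them, and upsert them into a vector store so retrieval stays fresh. The sketch below simulates that pipeline in pure Python with a toy word-count "embedding" and an in-memory store; no real Kafka client or embedding model is used, and all names are illustrative.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy 'embedding': word counts. A real pipeline calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vector_store = {}

# Stand-in for a Kafka consumer loop: each event is embedded and upserted
# as it arrives, keeping the retrieval index current in real time.
events = [
    {"id": "e1", "payload": "customer reported login failure"},
    {"id": "e2", "payload": "invoice 42 paid in full"},
]
for event in events:
    vector_store[event["id"]] = embed(event["payload"])

def retrieve(query: str) -> str:
    """Return the id of the most similar stored event -- the context
    that would be fed into an LLM prompt."""
    q = embed(query)
    return max(vector_store, key=lambda k: cosine(q, vector_store[k]))

print(retrieve("why did login fail?"))  # e1
```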
Build Your First Desktop Application with PySide6 [Data Scientist Edition]
How to Improve RAG Results in LLM Applications: From Basics to Advanced
If you're building any meaningful product or feature with large language models (LLMs), you'll probably use the technique called retrieval-augmented generation (RAG). It lets you integrate external data that was not available in the LLM's training data into the LLM's text generation process, which can greatly reduce hallucination and improve the relevance of the responses.
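The mechanism the paragraph describes, injecting retrieved external data into the generation step, can be sketched in a few lines. This is a deliberately crude illustration: the word-overlap `score` stands in for real embedding similarity, and the documents are invented.

```python
# Minimal RAG flow: retrieve the most relevant text, inject it into the prompt.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm CET.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance score: count of shared lowercase words.
    Real systems use embeddings and vector similarity instead."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str) -> str:
    context = max(documents, key=lambda d: score(query, d))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("refund policy details")
print(prompt)
```

Because the model is instructed to answer only from the retrieved context, it is grounded in data it was never trained on, which is exactly how RAG curbs hallucination.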
Goodbye Loops in Python, Hello Vectorization!
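The trade the title refers to is replacing an element-by-element Python loop with a single array operation that runs in optimized native code. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

def squares_loop(values):
    """Loop version: square each element one at a time in Python."""
    out = []
    for v in values:
        out.append(v * v)
    return out

def squares_vectorized(values):
    """Vectorized version: one array operation, executed in compiled C."""
    arr = np.asarray(values)
    return arr * arr

data = [1, 2, 3, 4]
print(squares_loop(data))                 # [1, 4, 9, 16]
print(squares_vectorized(data).tolist())  # [1, 4, 9, 16]
```

Both produce the same result; on large arrays the vectorized form is typically orders of magnitude faster because the per-element work leaves the Python interpreter.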
How to Improve LLMs with RAG
This article is part of a larger series on using large language models in practice. In the previous post, we fine-tuned Mistral-7b-Instruct to respond to YouTube comments using QLoRA.
How to Build a RAG System for Robust Access to Your Data
A RAG system is an innovative approach to information retrieval. It combines traditional retrieval techniques, such as vector similarity search, with state-of-the-art large language model technology. Together, these components form a robust system that can surface vast amounts of information from a simple prompt.
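The vector similarity search half of such a system can be sketched as a top-k lookup over an embedding index. The three-dimensional vectors below are invented stand-ins for real model embeddings, and the document names are hypothetical.

```python
from math import sqrt

# Toy vector index: pretend these are embeddings from a real model.
index = {
    "doc_refunds":  [0.9, 0.1, 0.0],
    "doc_shipping": [0.1, 0.8, 0.2],
    "doc_privacy":  [0.0, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def top_k(query_vec, k=2):
    """Vector similarity search: the retrieval half of a RAG system.
    The returned docs would be passed to the LLM as prompt context."""
    ranked = sorted(index, key=lambda d: cosine(query_vec, index[d]),
                    reverse=True)
    return ranked[:k]

print(top_k([1.0, 0.0, 0.1]))  # ['doc_refunds', 'doc_shipping']
```

The LLM half then answers the user's prompt using only the retrieved documents, which is what turns a similarity search into a question-answering system.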