GPT-DB: AI Native Data App Development framework with AWEL (Agentic Workflow Expression Language) and Agents

What is GPT-DB?

🤖 GPT-DB is an open-source, AI-native data app development framework with AWEL (Agentic Workflow Expression Language) and agents.

The goal is to build infrastructure for the era of large models by developing technical capabilities such as multi-model management (SMMF), Text2SQL optimization, a RAG framework and its optimizations, multi-agent framework collaboration, and AWEL (agent workflow orchestration), making it simpler and more convenient to build large-model applications around data.

🚀 In the Data 3.0 era, enterprises and developers can build bespoke applications on top of models and databases with less code.

DISCLAIMER

AI-Native Data App



Screenshots (v0.6): app chat, chat data management, chat dashboard display, agent prompt AWEL.

Contents

Introduction

The architecture of GPT-DB is shown in the following figure:

The core capabilities include the following parts:

  • RAG (Retrieval-Augmented Generation): RAG is currently the most widely deployed and urgently needed capability. GPT-DB already ships a RAG-based framework that lets users build knowledge-grounded applications on top of it.

  • GBI (Generative Business Intelligence): Generative BI is one of the core capabilities of the GPT-DB project, providing the foundational data-intelligence technology for building enterprise report analysis and business insights.

  • Fine-tuning Framework: Model fine-tuning is an indispensable capability for any enterprise working in vertical and niche domains. GPT-DB provides a complete fine-tuning framework that integrates seamlessly with the project; recent fine-tuning work has reached an accuracy of 82.5% on the Spider dataset.

  • Data-Driven Multi-Agents Framework: GPT-DB offers a data-driven, self-evolving multi-agent framework intended to continuously make and execute decisions based on data.

  • Data Factory: The Data Factory focuses on cleaning and processing data and knowledge so that it can be trusted in the era of large models.

  • Data Sources: Integrating various data sources to seamlessly connect production business data to the core capabilities of GPT-DB.
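
As a taste of the AWEL orchestration style referenced above, below is a minimal, illustrative workflow sketch. The module path gptdb.core.awel and the DAG/MapOperator classes are assumptions modeled on DB-GPT-style AWEL operators; the actual names in this project may differ.

# Minimal AWEL workflow sketch (illustrative only).
# Assumption: the package exposes DB-GPT-style AWEL operators under
# gptdb.core.awel; the real module path and class names may differ.
import asyncio

from gptdb.core.awel import DAG, MapOperator  # assumed import path

with DAG("text_normalize_workflow") as dag:
    # Each MapOperator applies a function to its input and forwards the result.
    strip_op = MapOperator(map_function=lambda text: text.strip())
    upper_op = MapOperator(map_function=lambda text: text.upper())
    strip_op >> upper_op  # wire the operators into a two-step workflow

async def main():
    # Calling the leaf operator runs the whole DAG on the provided input.
    result = await upper_op.call(call_data="  hello gpt-db  ")
    print(result)  # -> "HELLO GPT-DB"

asyncio.run(main())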

SubModule

  • GPT-DB-Hub: a Text-to-SQL workflow that achieves high performance by applying Supervised Fine-Tuning (SFT) to Large Language Models (LLMs).

  • gptdbs: the official repository of data apps, AWEL operators, AWEL workflow templates, and agents built on top of GPT-DB.

Text2SQL Finetune

  • Supported LLMs

    • LLaMA
    • LLaMA-2
    • BLOOM
    • BLOOMZ
    • Falcon
    • Baichuan
    • Baichuan2
    • InternLM
    • Qwen
    • XVERSE
    • ChatGLM2
  • SFT Accuracy: As of October 10, 2023, fine-tuning an open-source 13B-parameter model with this project achieves execution accuracy on the Spider dataset that surpasses even GPT-4!

More Information about Text2SQL finetune
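
For orientation, here is a hedged sketch of how a checkpoint produced by such SFT could be queried with the Hugging Face transformers library. The checkpoint id and prompt format are placeholders, not part of this project; requires transformers and torch installed.

# Illustrative only: prompting a Text2SQL model fine-tuned via SFT.
# The checkpoint id below is a placeholder; substitute your own fine-tuned model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/text2sql-13b-finetuned"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

schema = "CREATE TABLE orders (id INT, customer TEXT, amount DECIMAL, created_at DATE);"
question = "What is the total order amount per customer in 2023?"
prompt = f"-- Schema:\n{schema}\n-- Question: {question}\n-- SQL:\n"

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens (the SQL continuation).
sql = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(sql)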

Install

Supported installation methods: Docker, Linux, macOS, Windows.

Usage Tutorial

Features

At present, we have introduced several key features to showcase our current capabilities:

Image

🌐 AutoDL Image

Language Switching

In the .env configuration file, modify the LANGUAGE parameter to switch languages. The default is English (zh for Chinese, en for English; other languages will be added later).
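
For example, to switch the interface to Chinese, set the parameter in .env as follows (zh and en are the values documented above):

# .env
LANGUAGE=zh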

Contribution

Contributors Wall

Licence

The MIT License (MIT)

Citation

If you want to understand the overall architecture of GPT-DB, please cite the GPT-DB paper (arXiv:2312.17449) and the demonstration paper (arXiv:2404.10209).

If you want to learn about using GPT-DB for agent development, please cite the ROMAS paper (arXiv:2412.13520).

@article{xue2023gptdb,
      title={GPT-DB: Empowering Database Interactions with Private Large Language Models}, 
      author={Siqiao Xue and Caigao Jiang and Wenhui Shi and Fangyin Cheng and Keting Chen and Hongjun Yang and Zhiping Zhang and Jianshan He and Hongyang Zhang and Ganglin Wei and Wang Zhao and Fan Zhou and Danrui Qi and Hong Yi and Shaodong Liu and Faqiang Chen},
      year={2023},
      journal={arXiv preprint arXiv:2312.17449},
      url={https://arxiv.org/abs/2312.17449}
}
@misc{huang2024romasrolebasedmultiagentdatabase,
      title={ROMAS: A Role-Based Multi-Agent System for Database monitoring and Planning}, 
      author={Yi Huang and Fangyin Cheng and Fan Zhou and Jiahui Li and Jian Gong and Hongjun Yang and Zhidong Fan and Caigao Jiang and Siqiao Xue and Faqiang Chen},
      year={2024},
      eprint={2412.13520},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2412.13520}, 
}
@inproceedings{xue2024demonstration,
      title={Demonstration of GPT-DB: Next Generation Data Interaction System Empowered by Large Language Models}, 
      author={Siqiao Xue and Danrui Qi and Caigao Jiang and Wenhui Shi and Fangyin Cheng and Keting Chen and Hongjun Yang and Zhiping Zhang and Jianshan He and Hongyang Zhang and Ganglin Wei and Wang Zhao and Fan Zhou and Hong Yi and Shaodong Liu and Hongjun Yang and Faqiang Chen},
      year={2024},
      booktitle={Proceedings of the VLDB Endowment},
      url={https://arxiv.org/abs/2404.10209}
}

Contact Information

We are working on building a community; if you have any ideas for building the community, feel free to contact us.

Star History Chart