oobabooga
On the leaderboard:
| Rank | Repository | Stars |
|---|---|---|
| 457 | oobabooga/text-generation-webui | 46,402 |
Top repositories by stars
- oobabooga/text-generation-webui (on leaderboard): The definitive Web UI for local AI, with powerful features and easy setup. (Python, 46,047 stars)
- oobabooga/one-click-installers: Simplified installers for oobabooga/text-generation-webui. (Python, 564 stars)
- oobabooga/GPTQ-for-LLaMa: 4-bit quantization of LLaMA using GPTQ. (Python, 131 stars)
- oobabooga/flash-attention: Fast and memory-efficient exact attention, with Windows wheels. (Python, 36 stars)
- oobabooga/llama-cpp-python-cuBLAS-wheels: Wheels for llama-cpp-python compiled with cuBLAS support. (HTML, 27 stars)
- oobabooga/stable-diffusion-ui: Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. (JavaScript, 25 stars)
- oobabooga/llm-tools: Various scripts for working with local LLMs. (Python, 16 stars)
- (unnamed repository) (Jupyter Notebook, 13 stars)
- oobabooga/SillyTavern: LLM Frontend for Power Users. (JavaScript, 12 stars)
- oobabooga/EasyLM: Large language models (LLMs) made easy: a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax. (Python, 11 stars)
- oobabooga/stable-diffusion-automatic: Stable Diffusion web UI. (Python, 11 stars)
- oobabooga/llama-cpp-binaries: llama.cpp server in a Python wheel. (Python, 10 stars)
- oobabooga/roop: One-click deepfake (face swap). (Python, 10 stars)
- oobabooga/transformers: 🤗 Transformers: state-of-the-art machine learning for PyTorch, TensorFlow, and JAX. (Python, 9 stars)
- oobabooga/exllamav2: A fast inference library for running LLMs locally on modern consumer-class GPUs. (Python, 8 stars)
- oobabooga/FlexGen: Running large language models on a single GPU for throughput-oriented scenarios. (Python, 8 stars)
- oobabooga/llama-cpp-python-basic: Python bindings for llama.cpp. (Python, 7 stars)
- (unnamed repository) (Python, 7 stars)
- oobabooga/character-editor: Create, edit, and convert AI character files for CharacterAI, Pygmalion, Text Generation, KoboldAI, and TavernAI. (HTML, 7 stars)
- (unnamed repository) (Python, 6 stars)
- oobabooga/chatbot-ui: An open-source ChatGPT UI. (TypeScript, 6 stars)
- oobabooga/simple_memory: An extension for Oobabooga that adds a simple memory function for chat. (Python, 6 stars)
- oobabooga/gradio: Create UIs for your machine learning model in Python in 3 minutes. (Python, 6 stars)
- oobabooga/bitsandbytes-windows-webui: Windows compile of bitsandbytes for use in text-generation-webui. (HTML, 5 stars)
- oobabooga/whisper: Robust speech recognition via large-scale weak supervision. (Python, 5 stars)
- (unnamed repository) (Python, 5 stars)
- oobabooga/stanford_alpaca: Code and documentation to train Stanford's Alpaca models and generate the data. (Python, 5 stars)
- oobabooga/alpaca-lora: Instruct-tuning LLaMA on consumer hardware. (Jupyter Notebook, 5 stars)
- oobabooga/ChatRWKV: ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and open source. (Python, 5 stars)
- oobabooga/stable-diffusion-webui: Stable Diffusion web UI. (Python, 5 stars)
- oobabooga/AutoGPTQ: An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. (Python, 4 stars)
- (unnamed repository) (Python, 4 stars)
- oobabooga/peft: 🤗 PEFT: state-of-the-art parameter-efficient fine-tuning. (Python, 4 stars)
- oobabooga/oobabooga-one-click-bandaid: A simple batch file that makes the oobabooga one-click installer compatible with LLaMA 4-bit models and able to run on CUDA. (Batchfile, 4 stars)
- oobabooga/llama: Inference code for LLaMA models. (Python, 4 stars)
- oobabooga/pytorch-to-safetensor-converter: A simple converter that converts PyTorch bin files to safetensors, intended for LLM conversion. (Python, 4 stars)
- oobabooga/AutoAWQ: AutoAWQ implements the AWQ algorithm for 4-bit quantization, with a 2x speedup during inference. (Python, 3 stars)
- oobabooga/GPTQ-for-LLaMa-CUDA: A combination of oobabooga's fork and the main CUDA branch of GPTQ-for-LLaMa in a package format. (Python, 3 stars)
- oobabooga/llama.cpp: Port of Facebook's LLaMA model in C/C++. (C++, 3 stars)
- oobabooga/gptq: Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". (Python, 3 stars)
- oobabooga/exllamav3: An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs. (Python, 2 stars)
- oobabooga/BlockMerge_Gradient: Merge Transformers language models by use of gradient parameters. (Python, 2 stars)
- oobabooga/GPTQ-for-LLaMa-Wheels: Precompiled wheels for GPTQ-for-LLaMa. (2 stars)
- oobabooga/llamacpp-python: Python bindings for llama.cpp. (C++, 2 stars)
- oobabooga/mtj-softtuner: Create soft prompts for fairseq 13B dense, GPT-J-6B, and GPT-Neo-2.7B for free in a Google Colab TPU instance. (Python, 2 stars)
- oobabooga/g-diffuser-bot: Discord bot for diffusers (Stable Diffusion). (Python, 2 stars)
- (unnamed repository) (Jupyter Notebook, 2 stars)
- oobabooga/bitsandbytes: 8-bit CUDA functions for PyTorch. (Python, 1 star)
- oobabooga/pygments-css: CSS files created from Pygments' built-in styles. (CSS, 0 stars)