weekly

GitHub All Languages Trending

The latest build: 2024-06-14. Source of data: GitHubTrendingRSS.

Translate the video from one language to another and add dubbing.


English Readme / Discord / pyvideotrans

Speech recognition: faster-whisper, openai-whisper, GoogleSpeech, zh_recogn.

Translation engines include Google, ChatGPT, AzureAI, Gemini, DeepL, DeepLX, and OTT.

Dubbing/TTS engines: Microsoft Edge TTS, Google TTS, Azure AI TTS, OpenAI TTS, Elevenlabs TTS, a custom TTS API, GPT-SoVITS, clone-voice, and ChatTTS-ui.

Preserve background music during dubbing (via UVR5 vocal separation).

Transcribe audio or video into srt subtitles.

Translate srt subtitle files.

Generate dubbed audio from srt subtitle files (see the sketch below).

Download videos from YouTube.
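As a rough illustration of the dubbing step, here is a minimal, hedged sketch using the edge-tts package (one of this project's dependencies, listed under the acknowledgements below) to synthesize a single subtitle line; the voice name is only an example:

    # Minimal sketch: synthesize one line of dubbed speech with edge-tts.
    # The voice name is an example; pick any voice edge-tts supports.
    import asyncio
    import edge_tts

    async def dub_line(text: str, out_path: str) -> None:
        tts = edge_tts.Communicate(text, voice="en-US-AriaNeural")
        await tts.save(out_path)  # writes an mp3 file

    asyncio.run(dub_line("Hello, world.", "line0001.mp3"))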


https://github.com/jianchang512/pyvideotrans/assets/3378335/3811217a-26c8-4084-ba24-7a95d2e13d58

Prepackaged version (Win10/Win11 only); on MacOS/Linux, deploy from source.

Prepackaged version (built with pyinstaller)

  1. Download the sp.exe archive from https://github.com/jianchang512/pyvideotrans/releases

  2. Extract it to a path that contains no spaces or non-ASCII characters

  3. Double-click sp.exe to start the program

MacOS (deploy from source)

  1. Install Homebrew if it is not already installed:

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

    eval $(brew --config)

    Then install the required packages:

    brew install libsndfile
    brew install ffmpeg
    brew install git
    brew install python@3.10

    Put Python 3.10 on your PATH and reload your shell configuration:

    export PATH="/usr/local/opt/python@3.10/bin:$PATH"
    source ~/.bash_profile (or source ~/.zshrc, depending on your shell)
  2. git clone https://github.com/jianchang512/pyvideotrans

  3. cd pyvideotrans

  4. python -m venv venv

  5. Run source ./venv/bin/activate and confirm the prompt now starts with (venv); run all subsequent commands inside (venv)

  6. Run pip install -r requirements.txt --no-deps. If it fails, run the following two commands to switch pip to the Aliyun mirror:

    pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
    pip config set install.trusted-host mirrors.aliyun.com

    Then retry:

    pip install -r requirements.txt --ignore-installed --no-deps

  7. python sp.py
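To confirm the install succeeded, an optional sanity check (not part of the original steps) is to import the key dependencies from inside the venv:

    # Optional sanity check: the main dependencies from requirements.txt
    # should import cleanly inside the activated venv.
    import PySide6
    import pydub
    import faster_whisper
    print("core dependencies imported OK")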

A more detailed MacOS deployment guide is available in the project documentation.

Linux (deploy from source)

  1. CentOS/RHEL: install Python 3.10

    sudo yum update
    sudo yum groupinstall "Development Tools"
    sudo yum install openssl-devel bzip2-devel libffi-devel
    cd /tmp
    wget https://www.python.org/ftp/python/3.10.4/Python-3.10.4.tgz
    tar xzf Python-3.10.4.tgz
    cd Python-3.10.4
    ./configure --enable-optimizations
    sudo make && sudo make install
    sudo alternatives --install /usr/bin/python3 python3 /usr/local/bin/python3.10
    sudo yum install -y ffmpeg

  2. Ubuntu/Debian: install Python 3.10

    apt update && apt upgrade -y
    apt install software-properties-common -y
    add-apt-repository ppa:deadsnakes/ppa
    apt update
    sudo apt-get install libxcb-cursor0
    apt install python3.10
    curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10
    pip -V  # should print something like: pip 23.2.1 from /usr/local/lib/python3.10/site-packages/pip (python 3.10)
    sudo update-alternatives --install /usr/bin/python python /usr/local/bin/python3.10
    sudo update-alternatives --config python
    apt-get install ffmpeg

Verify the version: python3 -V should print 3.10.4.
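Equivalently, you can assert the interpreter version from inside Python (an optional convenience check, not from the original instructions):

    # Optional check: fail fast when the interpreter is not Python 3.10.x.
    import sys
    assert sys.version_info[:2] == (3, 10), f"need Python 3.10, got {sys.version}"
    print(sys.version)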

  1. git clone https://github.com/jianchang512/pyvideotrans

  2. cd pyvideotrans

  3. python -m venv venv

  4. Run source ./venv/bin/activate and confirm the prompt now starts with (venv); run all subsequent commands inside (venv)

  5. Run pip install -r requirements.txt --no-deps. If it fails, run the following two commands to switch pip to the Aliyun mirror:

    pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
    pip config set install.trusted-host mirrors.aliyun.com

    Then retry:

    pip install -r requirements.txt --ignore-installed --no-deps

  6. To enable CUDA acceleration, run:

    pip uninstall -y torch torchaudio

    pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118

    pip install nvidia-cublas-cu11 nvidia-cudnn-cu11

  7. CUDA acceleration on Linux requires an NVIDIA GPU and a properly configured CUDA 11.8+ environment; search for "Linux CUDA installation" if you need setup instructions.
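After installing the CUDA build of torch, a quick optional check (not part of the original steps) confirms the GPU is visible:

    # Optional check: verify the CUDA-enabled torch build can see the GPU.
    import torch
    print("torch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))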

  8. python sp.py

Windows 10/11 (deploy from source)

  1. Open https://www.python.org/downloads/ and download Python 3.10 for Windows. During installation keep clicking Next, making sure "Add to PATH" is checked.

    Open a cmd window and run python -V; it should print 3.10.4. If it does not, the installation failed or "Add to PATH" was not checked; reinstall.

  2. Download and install Git from https://github.com/git-for-windows/git/releases/download/v2.45.0.windows.1/Git-2.45.0-64-bit.exe

  3. Pick a directory whose path contains no spaces or non-ASCII characters, type cmd in the Explorer address bar, and press Enter to open a terminal there

  4. git clone https://github.com/jianchang512/pyvideotrans

  5. cd pyvideotrans

  6. python -m venv venv

  7. Run .\venv\scripts\activate and confirm the prompt now starts with (venv); run all subsequent commands inside (venv)

  8. Run pip install -r requirements.txt --no-deps. If it fails, run the following two commands to switch pip to the Aliyun mirror:

    pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
    pip config set install.trusted-host mirrors.aliyun.com

    Then retry:

    pip install -r requirements.txt --ignore-installed --no-deps

  9. To enable CUDA acceleration, run:

    pip uninstall -y torch torchaudio

    pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118

  10. CUDA acceleration on Windows requires an NVIDIA GPU and a CUDA 11.8+ environment; see the CUDA installation documentation.

  11. Download ffmpeg.zip and extract it into the ffmpeg directory under the source root, so that it contains ffmpeg.exe, ffprobe.exe, and ytwin32.exe.
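A small optional check that the executables ended up in the right place (the expected layout is an assumption based on the step above):

    # Optional check: confirm the extracted executables are in the ffmpeg
    # directory under the source root (assumed expected location).
    from pathlib import Path
    for exe in ("ffmpeg.exe", "ffprobe.exe", "ytwin32.exe"):
        p = Path("ffmpeg") / exe
        print(p, "OK" if p.exists() else "MISSING")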

  12. python sp.py

Notes:

  1. ctranslate2 4.x requires CUDA 12.x. If your CUDA version is below 12.x and you cannot upgrade it, downgrade ctranslate2:

    pip uninstall -y ctranslate2
    pip install ctranslate2==3.24.0

  2. If you hit an "xx module not found" error, open requirements.txt, find the xx module, and delete the == version pin after it (a small helper for this follows below).
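For note 2, a throwaway helper along these lines can strip the pin (the module name "xx" is the note's own placeholder; substitute the real one):

    # Throwaway helper for note 2: drop the "==version" pin for a module.
    # "xx" is the placeholder name from the note; substitute the real module.
    from pathlib import Path

    module = "xx"
    lines = Path("requirements.txt").read_text().splitlines()
    fixed = [module if l.startswith(module + "==") else l for l in lines]
    Path("requirements.txt").write_text("\n".join(fixed) + "\n")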

Usage guide: https://pyvideotrans.com/guide.html

Model downloads: https://pyvideotrans.com/model.html

Additional documentation: https://pyvideotrans.com/02.html

[Screenshots: interface on Mac; Gemini API settings]

Other projects by the same author:

OTT: a local offline text translation tool.

A version of api.py adapted for GPT-SoVITS is also provided.

This project relies mainly on the following open-source projects:

  1. ffmpeg
  2. PySide6
  3. edge-tts
  4. faster-whisper
  5. openai-whisper
  6. pydub

Self-hosted game stream host for Moonlight.


Overview

LizardByte has the full documentation hosted on `Read the Docs <https://sunshinestream.readthedocs.io/>`__.

About

Sunshine is a self-hosted game stream host for Moonlight. It offers low-latency, cloud-gaming server capabilities with support for AMD, Intel, and Nvidia GPUs for hardware encoding; software encoding is also available. You can connect to Sunshine from any Moonlight client on a variety of devices. A web UI is provided to allow configuration and client pairing from your favorite web browser; pair from the local server or any mobile device.

System Requirements

.. warning:: This table is a work in progress. Do not purchase hardware based on this.

Minimum Requirements

.. csv-table::
   :widths: 15, 60

   "GPU", "AMD: VCE 1.0 or higher, see: `obs-amd hardware support <https://github.com/obsproject/obs-amd-encoder/wiki/Hardware-Support>`_"
   "", "Intel: VAAPI-compatible, see: `VAAPI hardware support <https://www.intel.com/content/www/us/en/developer/articles/technical/linuxmedia-vaapi.html>`_"
   "", "Nvidia: NVENC enabled cards, see: `nvenc support matrix <https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new>`_"
   "CPU", "AMD: Ryzen 3 or higher"
   "", "Intel: Core i3 or higher"
   "RAM", "4GB or more"
   "OS", "Windows: 10+ (Windows Server does not support virtual gamepads)"
   "", "macOS: 12+"
   "", "Linux/Debian: 11 (bullseye)"
   "", "Linux/Fedora: 39+"
   "", "Linux/Ubuntu: 22.04+ (jammy)"
   "Network", "Host: 5GHz, 802.11ac"
   "", "Client: 5GHz, 802.11ac"

4k Suggestions

.. csv-table::
   :widths: 15, 60

   "GPU", "AMD: Video Coding Engine 3.1 or higher"
   "", "Intel: HD Graphics 510 or higher"
   "", "Nvidia: GeForce GTX 1080 or higher"
   "CPU", "AMD: Ryzen 5 or higher"
   "", "Intel: Core i5 or higher"
   "Network", "Host: CAT5e ethernet or better"
   "", "Client: CAT5e ethernet or better"

HDR Suggestions

.. csv-table::
   :widths: 15, 60

   "GPU", "AMD: Video Coding Engine 3.4 or higher"
   "", "Intel: UHD Graphics 730 or higher"
   "", "Nvidia: Pascal-based GPU (GTX 10-series) or higher"
   "CPU", "AMD: todo"
   "", "Intel: todo"
   "Network", "Host: CAT5e ethernet or better"
   "", "Client: CAT5e ethernet or better"

Integrations

.. image:: https://img.shields.io/github/actions/workflow/status/lizardbyte/sunshine/CI.yml.svg?branch=master&label=CI%20build&logo=github&style=for-the-badge
   :alt: GitHub Workflow Status (CI)
   :target: https://github.com/LizardByte/Sunshine/actions/workflows/CI.yml?query=branch%3Amaster

.. image:: https://img.shields.io/github/actions/workflow/status/lizardbyte/sunshine/localize.yml.svg?branch=master&label=localize%20build&logo=github&style=for-the-badge
   :alt: GitHub Workflow Status (localize)
   :target: https://github.com/LizardByte/Sunshine/actions/workflows/localize.yml?query=branch%3Amaster

.. image:: https://img.shields.io/readthedocs/sunshinestream.svg?label=Docs&style=for-the-badge&logo=readthedocs
   :alt: Read the Docs
   :target: http://sunshinestream.readthedocs.io/

.. image:: https://img.shields.io/codecov/c/gh/LizardByte/Sunshine?token=SMGXQ5NVMJ&style=for-the-badge&logo=codecov&label=codecov
   :alt: Codecov
   :target: https://codecov.io/gh/LizardByte/Sunshine

Support

Our support methods are listed in our `LizardByte Docs <https://lizardbyte.readthedocs.io/en/latest/about/support.html>`__.

Downloads

.. image:: https://img.shields.io/github/downloads/lizardbyte/sunshine/total.svg?style=for-the-badge&logo=github
   :alt: GitHub Releases
   :target: https://github.com/LizardByte/Sunshine/releases/latest

.. image:: https://img.shields.io/docker/pulls/lizardbyte/sunshine.svg?style=for-the-badge&logo=docker
   :alt: Docker
   :target: https://hub.docker.com/r/lizardbyte/sunshine

.. image:: https://img.shields.io/badge/dynamic/json.svg?color=orange&label=Winget&style=for-the-badge&prefix=v&query=pageProps.app.latestVersion&url=https%3A%2F%2Fwinstall.app%2F_next%2Fdata%2FixSYALJOWdJEOGpVihkFS%2Fapps%2FLizardByte.Sunshine.json&logo=microsoft
   :alt: Winget Version
   :target: https://github.com/microsoft/winget-pkgs/tree/master/manifests/l/LizardByte/Sunshine

Stats

.. image:: https://img.shields.io/github/stars/lizardbyte/sunshine.svg?logo=github&style=for-the-badge
   :alt: GitHub stars
   :target: https://github.com/LizardByte/Sunshine

Pretrain, finetune, deploy 20+ LLMs on your own data. Uses state-of-the-art techniques: flash attention, FSDP, 4-bit, LoRA, and more.


LitGPT

Pretrain, finetune, evaluate, and deploy 20+ LLMs on your own data

Uses the latest state-of-the-art techniques:

flash attention | fp4/8/16/32 | LoRA, QLoRA, Adapter | FSDP | 1-1000+ GPUs/TPUs | 20+ LLMs


Lightning AI | Models | Quick start | Inference | Finetune | Pretrain | Deploy | Features | Training recipes (YAML)

[Diagram: LitGPT steps]

Finetune, pretrain and deploy LLMs Lightning fast

LitGPT is a command-line tool designed to easily finetune, pretrain, evaluate, and deploy 20+ LLMs on your own data. It features highly-optimized training recipes for the world's most powerful open-source large language models (LLMs).

We reimplemented all model architectures and training recipes from scratch for 4 reasons:

  1. Remove all abstraction layers and keep single-file implementations.
  2. Guarantee Apache 2.0 compliance to enable enterprise use without limits.
  3. Optimize each model's architectural details to maximize performance, reduce costs, and speed up training.
  4. Provide highly-optimized recipe configs that we have tested at enterprise scale.

 

Choose from 20+ LLMs

LitGPT has custom, from-scratch implementations of 20+ LLMs without layers of abstraction:

| Model | Model size | Author | Reference |
|---|---|---|---|
| Llama 3 | 8B, 70B | Meta AI | Meta AI 2024 |
| Llama 2 | 7B, 13B, 70B | Meta AI | Touvron et al. 2023 |
| Code Llama | 7B, 13B, 34B, 70B | Meta AI | Rozière et al. 2023 |
| Mixtral MoE | 8x7B | Mistral AI | Mistral AI 2023 |
| Mistral | 7B | Mistral AI | Mistral AI 2023 |
| CodeGemma | 7B | Google | Google Team, Google Deepmind |
| ... | ... | ... | ... |
See full list of 20+ LLMs

 

All models

| Model | Model size | Author | Reference |
|---|---|---|---|
| CodeGemma | 7B | Google | Google Team, Google Deepmind |
| Code Llama | 7B, 13B, 34B, 70B | Meta AI | Rozière et al. 2023 |
| Danube2 | 1.8B | H2O.ai | H2O.ai |
| Dolly | 3B, 7B, 12B | Databricks | Conover et al. 2023 |
| Falcon | 7B, 40B, 180B | TII UAE | TII 2023 |
| FreeWilly2 (Stable Beluga 2) | 70B | Stability AI | Stability AI 2023 |
| Function Calling Llama 2 | 7B | Trelis | Trelis et al. 2023 |
| Gemma | 2B, 7B | Google | Google Team, Google Deepmind |
| Llama 2 | 7B, 13B, 70B | Meta AI | Touvron et al. 2023 |
| Llama 3 | 8B, 70B | Meta AI | Meta AI 2024 |
| LongChat | 7B, 13B | LMSYS | LongChat Team 2023 |
| MicroLlama | 300M | Ken Wang | MicroLlama repo |
| Mixtral MoE | 8x7B | Mistral AI | Mistral AI 2023 |
| Mistral | 7B | Mistral AI | Mistral AI 2023 |
| Nous-Hermes | 7B, 13B, 70B | NousResearch | Org page |
| OpenLLaMA | 3B, 7B, 13B | OpenLM Research | Geng & Liu 2023 |
| Phi | 1.3B, 2.7B | Microsoft Research | Li et al. 2023 |
| Platypus | 7B, 13B, 70B | Lee et al. | Lee, Hunter, and Ruiz 2023 |
| Pythia | {14,31,70,160,410}M, {1,1.4,2.8,6.9,12}B | EleutherAI | Biderman et al. 2023 |
| RedPajama-INCITE | 3B, 7B | Together | Together 2023 |
| StableCode | 3B | Stability AI | Stability AI 2023 |
| StableLM | 3B, 7B | Stability AI | Stability AI 2023 |
| StableLM Zephyr | 3B | Stability AI | Stability AI 2023 |
| TinyLlama | 1.1B | Zhang et al. | Zhang et al. 2023 |
| Vicuna | 7B, 13B, 33B | LMSYS | Li et al. 2023 |

Tip: You can list all available models by running the litgpt download list command.

 

Install LitGPT

Install LitGPT with all dependencies (including CLI, quantization, tokenizers for all models, etc.):

pip install 'litgpt[all]'
Advanced install options

 

Install from source:

git clone https://github.com/Lightning-AI/litgpt
cd litgpt
pip install -e '.[all]'

 

Quick start

After installing LitGPT, select the model and the action you want to take on it (finetune, pretrain, evaluate, deploy, etc.):

# litgpt [action] [model]
litgpt download meta-llama/Meta-Llama-3-8B-Instruct
litgpt chat meta-llama/Meta-Llama-3-8B-Instruct
litgpt finetune meta-llama/Meta-Llama-3-8B-Instruct
litgpt pretrain meta-llama/Meta-Llama-3-8B-Instruct
litgpt serve meta-llama/Meta-Llama-3-8B-Instruct

 

Use an LLM for inference

Use an LLM for inference to test its chatting capabilities, run evaluations, or extract embeddings. Here's an example showing how to use the Phi-2 LLM.

Open In Studio

 

# 1) List all available models in litgpt
litgpt download list

# 2) Download a pretrained model
litgpt download microsoft/phi-2

# 3) Chat with the model
litgpt chat microsoft/phi-2

>> Prompt: What do Llamas eat?

The download of certain models requires an additional access token. You can read more about this in the download documentation. For more information on the different inference options, refer to the inference tutorial.
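Recent LitGPT releases also expose a small Python API alongside the CLI; if your installed version includes it, the chat step above looks roughly like this (a hedged sketch; check the inference docs for the exact interface of your release):

    # Hedged sketch of LitGPT's Python API (recent releases only).
    from litgpt import LLM

    llm = LLM.load("microsoft/phi-2")  # downloads the checkpoint if needed
    print(llm.generate("What do Llamas eat?", max_new_tokens=100))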

 

Finetune an LLM

Finetune a model to specialize it on your own custom dataset:

Open In Studio

 

# 1) Download a pretrained model
litgpt download microsoft/phi-2

# 2) Finetune the model
curl -L https://huggingface.co/datasets/ksaw008/finance_alpaca/resolve/main/finance_alpaca.json -o my_custom_dataset.json

litgpt finetune microsoft/phi-2 \
  --data JSON \
  --data.json_path my_custom_dataset.json \
  --data.val_split_fraction 0.1 \
  --out_dir out/custom-model

# 3) Chat with the model
litgpt chat out/custom-model/final
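The file passed via --data JSON holds instruction-style records; a hedged sketch of writing such a dataset yourself (the instruction/input/output field names follow the Alpaca-style convention the JSON loader expects):

    # Hedged sketch: write a tiny Alpaca-style dataset for `--data JSON`.
    import json

    records = [
        {"instruction": "Fix the typos.", "input": "Exampel input",
         "output": "Example input"},
        {"instruction": "Translate to French.", "input": "Hello",
         "output": "Bonjour"},
    ]
    with open("my_custom_dataset.json", "w") as f:
        json.dump(records, f, indent=2)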

 

Pretrain an LLM

Train an LLM from scratch on your own data via pretraining:

Open In Studio

 

mkdir -p custom_texts
curl https://www.gutenberg.org/cache/epub/24440/pg24440.txt --output custom_texts/book1.txt
curl https://www.gutenberg.org/cache/epub/26393/pg26393.txt --output custom_texts/book2.txt

# 1) Download a tokenizer
litgpt download EleutherAI/pythia-160m \
  --tokenizer_only True

# 2) Pretrain the model
litgpt pretrain EleutherAI/pythia-160m \
  --tokenizer_dir EleutherAI/pythia-160m \
  --data TextFiles \
  --data.train_data_path "custom_texts/" \
  --train.max_tokens 10_000_000 \
  --out_dir out/custom-model

# 3) Chat with the model
litgpt chat out/custom-model/final

 

Continue pretraining an LLM

This is another way of finetuning that specializes an already pretrained model by training on custom data:

Open In Studio

 

mkdir -p custom_texts
curl https://www.gutenberg.org/cache/epub/24440/pg24440.txt --output custom_texts/book1.txt
curl https://www.gutenberg.org/cache/epub/26393/pg26393.txt --output custom_texts/book2.txt

# 1) Download a pretrained model
litgpt download EleutherAI/pythia-160m

# 2) Continue pretraining the model
litgpt pretrain EleutherAI/pythia-160m \
  --tokenizer_dir EleutherAI/pythia-160m \
  --initial_checkpoint_dir EleutherAI/pythia-160m \
  --data TextFiles \
  --data.train_data_path "custom_texts/" \
  --train.max_tokens 10_000_000 \
  --out_dir out/custom-model

# 3) Chat with the model
litgpt chat out/custom-model/final

 

Deploy an LLM

Once you're ready to deploy a finetuned LLM, run this command:

Open In Studio

 

# locate the checkpoint to your finetuned or pretrained model and call the `serve` command:
litgpt serve microsoft/phi-2

# Alternative: if you haven't finetuned, download any checkpoint to deploy it:
litgpt download microsoft/phi-2
litgpt serve microsoft/phi-2

Test the server in a separate terminal and integrate the model API into your AI product:

# 3) Use the server (in a separate Python session)
import requests, json

response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"prompt": "Fix typos in the following sentence: Exampel input"}
)
print(response.json()["output"])

 

[!NOTE] Read the full docs.

 


State-of-the-art features

 State-of-the-art optimizations: Flash Attention v2, multi-GPU support via fully-sharded data parallelism, optional CPU offloading, and TPU and XLA support.

 Pretrain, finetune, and deploy

 Reduce compute requirements with low-precision settings: FP16, BF16, and FP16/FP32 mixed.

 Lower memory requirements with quantization: 4-bit floats, 8-bit integers, and double quantization.

 Configuration files for great out-of-the-box performance.

 Parameter-efficient finetuning: LoRA, QLoRA, Adapter, and Adapter v2.

 Exporting to other popular model weight formats.

 Many popular datasets for pretraining and finetuning, and support for custom datasets.

 Readable and easy-to-modify code to experiment with the latest research ideas.

 


Training recipes

LitGPT comes with validated recipes (YAML configs) to train models under different conditions. We've generated these recipes based on the parameters we found to perform the best for different training conditions.

Browse all training recipes here.

Example

litgpt finetune \
  --config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml

What is a config

Configs let you customize training for all granular parameters like:

# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-llama2-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

...
Example: LoRA finetuning config

 

# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-llama2-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.05
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4
    download_dir: data/alpaca2k

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 4

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights (type: Optional[bool], default: null)
  tie_embeddings:

  # (type: float, default: 0.0003)
  learning_rate: 0.0002

  # (type: float, default: 0.02)
  weight_decay: 0.0

  # (type: float, default: 0.9)
  beta1: 0.9

  # (type: float, default: 0.95)
  beta2: 0.95

  # (type: Optional[float], default: null)
  max_norm:

  # (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337

Override config params via CLI

Override any parameter in the CLI:

litgpt finetune \
  --config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml \
  --lora_r 4
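Because recipes are plain YAML, you can also fetch and inspect one programmatically before launching a run (a hedged sketch, assuming the requests and pyyaml packages are installed):

    # Hedged sketch: download a recipe and inspect hyperparameters before a run.
    # Assumes the `requests` and `pyyaml` packages are installed.
    import requests
    import yaml

    url = ("https://raw.githubusercontent.com/Lightning-AI/litgpt/main/"
           "config_hub/finetune/llama-2-7b/lora.yaml")
    cfg = yaml.safe_load(requests.get(url, timeout=30).text)
    print("lora_r:", cfg["lora_r"])
    print("learning_rate:", cfg["train"]["learning_rate"])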

 

Community

Get involved!

We appreciate your feedback and contributions. If you have feature requests, questions, or want to contribute code or config files, please don't hesitate to use the GitHub Issue tracker.

We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment.

 

[!TIP] Unsure about contributing? Check out our How to Contribute to LitGPT guide.

If you have general questions about building with LitGPT, please join our Discord.

 

Tutorials, how-to guides, and docs

[!NOTE] We recommend starting with the Zero to LitGPT: Getting Started with Pretraining, Finetuning, and Using LLMs guide if you are new to LitGPT.

Tutorials and in-depth feature documentation can be found in the project docs.

 

XLA

Lightning AI has partnered with Google to add first-class support for Cloud TPUs in Lightning's frameworks and LitGPT, helping democratize AI for millions of developers and researchers worldwide.

Using TPUs with Lightning is as straightforward as changing one line of code.

We provide scripts fully optimized for TPUs in the XLA directory.

 

Acknowledgements

This implementation builds on Lit-LLaMA and nanoGPT, and it's powered by Lightning Fabric.

 

Community showcase

Check out the projects below that use and build on LitGPT. If you have a project you'd like to add to this section, please don't hesitate to open a pull request.

 

NeurIPS 2023 Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day

The LitGPT repository was the official starter kit for the NeurIPS 2023 LLM Efficiency Challenge, which is a competition focused on finetuning an existing non-instruction tuned LLM for 24 hours on a single GPU.

 

TinyLlama: An Open-Source Small Language Model

LitGPT powered the TinyLlama project and TinyLlama: An Open-Source Small Language Model research paper.

 

MicroLlama: MicroLlama-300M

MicroLlama is a 300M Llama model pretrained on 50B tokens, powered by TinyLlama and LitGPT.

 

Pre-training Small Base LMs with Fewer Tokens

The research paper "Pre-training Small Base LMs with Fewer Tokens", which utilizes LitGPT, develops smaller base language models by inheriting a few transformer blocks from larger models and training on a tiny fraction of the data used by the larger models. It demonstrates that these smaller models can perform comparably to larger models despite using significantly less training data and resources.

 

Citation

If you use LitGPT in your research, please cite the following work:

@misc{litgpt-2023,
  author       = {Lightning AI},
  title        = {LitGPT},
  howpublished = {\url{https://github.com/Lightning-AI/litgpt}},
  year         = {2023},
}

 

License

LitGPT is released under the Apache 2.0 license.