
OpenAI `gpt-4o-mini` Model (default) · Node.js >= 16.5.0

Let the AI generate your git commit message subject (short description)


Have A Try

You can try it in any of your projects without saving your token locally.
Get your API key here: https://platform.openai.com/account/api-keys

```sh
CZ_OPENAI_API_KEY="sk-xxxxx" npx czg ai
```

```sh
CZ_OPENAI_API_KEY="sk-xxxxx" bunx czg ai
```

Setup OpenAI token

  1. Go to https://platform.openai.com/account/api-keys
    Log in and create your API secret key, which usually starts with sk-
  2. Run npx czg --api-key=<API secret key> to save your token locally
```sh
npx czg --api-key=sk-xxxxx
```

```sh
bunx czg --api-key=sk-xxxxx
```

```sh
czg --api-key=sk-xxxxx
```
Setup GitHub Models
  1. Join the GitHub Models waitlist
  2. Get a GitHub personal access token
  3. Choose the model you want to use from the Models Marketplace and note its model name (click the Get started button to view the details)
  4. Run the command to configure
    ```sh
    npx czg --api-key="ghp_xxxxxx" --api-endpoint="https://models.inference.ai.azure.com" --api-model="gpt-4o-mini"
    ```
Setup Ollama
  1. Install Ollama and start the service
  2. Choose and pull the model
    ```sh
    # Using the gemma2 model as an example
    ollama pull gemma2
    # Confirm the model was pulled successfully
    ollama ls
    ```
  3. Run the command to configure
    ```sh
    npx czg --api-key=" " --api-endpoint="http://localhost:11434/v1" --api-model="gemma2"
    ```
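
Optionally, you can check that Ollama's OpenAI-compatible endpoint responds before wiring it into czg. This is only a sanity check, not a required step; it assumes Ollama is running on its default port 11434 and that the gemma2 model has been pulled:

```sh
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma2",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```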

As global usage

```sh
npm install -g czg
```

```sh
brew install czg
```
```sh
# Set up your token: `czg --api-key=sk-xxxxx`
# Run the following command in any of your projects after setting up your OpenAI token
czg ai

# Return multiple subjects and choose the most suitable one
czg ai -N=5
```

As a dev dependency usage

```sh
npm install -D czg
```

```sh
yarn add -D czg
```

```sh
pnpm install -D czg
```
  1. Add a cz script in package.json
  2. Try running npm cz ai | yarn cz ai | pnpm cz ai after setting up your token
```json
{
  "scripts": {
    "cz": "czg"
  }
}
```

npx usage

  • Run the following command in any of your projects after setting up your OpenAI token
```sh
npx czg ai
```

```sh
bunx czg ai
```

Return multiple subjects and choose the most suitable one:

```sh
npx czg ai -N=5
```

```sh
bunx czg ai -N=5
```

Commitizen CLI + cz-git usage

If you are currently using Commitizen CLI with the cz-git adapter:

There are three ways to configure the OpenAI API Key:

  1. Run czg to configure it: npx czg --api-key=sk-xxxxx
  2. Pass it as an environment variable and start: CZ_OPENAI_API_KEY="sk-xxxxx" czai=1 cz
  3. Configure it as an environment variable in your rc file: add export CZ_OPENAI_API_KEY="sk-xxxxx" to your .zshrc or .bashrc.
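
For example, the rc-file approach might look like this in practice (a sketch assuming zsh; adjust the file name for your shell):

```sh
# Persist the key for future shells
echo 'export CZ_OPENAI_API_KEY="sk-xxxxx"' >> ~/.zshrc
source ~/.zshrc

# Start Commitizen with AI mode enabled
czai=1 cz
```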

There are two ways to turn on AI mode:

  1. Pass czai=1 as an environment variable and start: czai=1 cz
  2. Enable AI mode in the configuration file: useAI: true
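
For the configuration-file approach, a minimal sketch, assuming your cz-git options live in a JSON-based commitlint configuration file (e.g. a .commitlintrc; see Configure Template for the exact layout your setup uses):

```json
{
  "prompt": {
    "useAI": true
  }
}
```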

Configure

  • If you have set useAI to true and want to switch back to the normal (non-AI) prompt mode for the current session:
    • czg CLI: czg --no-ai
    • Commitizen CLI + cz-git: no_czai=1 cz
  • If you want to customize the prompt sent to OpenAI (e.g. to support i18n), use the aiQuestionCB option
  • For the AI-related configuration options, see: Options - AI Related
  • For project-level or global configuration file support, see: Configure Template

How it works

  • Run git diff to collect the code changes, combine them with the prompt task, and send them to the OpenAI /chat/completions API, which returns the AI-generated subjects (see the sketch below).
  • 💡 Inspired by aicommits, with part of its code adapted
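
A rough sketch of that flow in plain shell. This is illustrative only, not czg's actual prompt or implementation; it assumes jq is installed and CZ_OPENAI_API_KEY is set:

```sh
# Collect the diff
DIFF=$(git diff)

# Combine the diff with a task prompt and send it to the chat completions endpoint
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $CZ_OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$(jq -n --arg diff "$DIFF" '{
    model: "gpt-4o-mini",
    messages: [
      {role: "system", content: "Generate a concise git commit subject for the following diff."},
      {role: "user", content: $diff}
    ]
  }')"
```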

I just try my best to make things work well. Could you give it a star? ⭐