Introduction
Chat GipiTTY (Chat Get Information, Print Information TTY) is a command-line client primarily intended for the official OpenAI Chat Completions API. It lets you chat with language models from a terminal and even pipe output into it. While optimized for OpenAI's ChatGPT (with GPT-4 as the default model), it also works with other providers that expose OpenAI-compatible endpoints.
What is Chat GipiTTY?
Chat GipiTTY is designed to make AI assistance seamlessly available in your terminal workflow. Whether you're debugging code, processing text, analyzing images, or generating speech, cgip brings powerful AI capabilities directly to your command line.
Key Features
- Universal AI Access: Works with OpenAI, local models via Ollama, Google Gemini, Mistral AI, Anthropic Claude, and any OpenAI-compatible provider
- Intelligent Piping: Pipe command output directly to AI models for instant analysis
- Multi-Modal Support: Text, image analysis, and text-to-speech capabilities
- Session Management: Maintain conversation context across terminal sessions
- Web Search: Get up-to-date information from the internet
- Agentic Workflows: Let the AI execute shell commands to accomplish tasks
- Flexible Configuration: Extensive customization options and provider support
Quick Examples
Debugging Output
Debug a Rust compilation error by piping the build output directly to ChatGPT:
cargo build 2>&1 | cgip "give me a short summary of the kind of error this is"
This results in something like:
❯ cargo build 2>&1 | cgip 'give me a short summary of the kind of error this is'
The error you're encountering is a **lifetime error** in Rust, specifically an issue with **borrowed values not living long enough**.
Prototyping New Command Line Programs
You can create useful command line utilities by combining cgip with shell aliases:
Language Translation Utility
# Set up the alias
alias translate='cgip --system-prompt "You are a translator, you translate text to Spanish"'
# Use it
echo "Hello, world!" | translate
echo "Good morning" | translate
Code Review Assistant
# Set up the alias
alias review='cgip --system-prompt "You are a senior developer" "review this code for bugs and improvements"'
# Use it
git diff | review
cat src/main.py | review
Supported Providers
Chat GipiTTY works with any service that implements the OpenAI Chat Completions API standard:
- OpenAI (ChatGPT, GPT-4, GPT-3.5, etc.)
- Local models via Ollama
- Google Gemini (via OpenAI-compatible endpoints)
- Mistral AI (via OpenAI-compatible endpoints)
- Anthropic Claude (via OpenAI-compatible endpoints)
- Any other provider implementing the OpenAI Chat Completions API standard
Custom OPENAI_BASE_URL values can point to these or other OpenAI-compatible endpoints, though compatibility cannot be guaranteed for all providers.
Philosophy
Chat GipiTTY is built around the Unix philosophy of doing one thing well and composing with other tools. It seamlessly integrates AI capabilities into your existing terminal workflow without forcing you to change how you work.
Getting Started
Welcome to Chat GipiTTY! This guide will help you get up and running quickly with cgip, from installation to your first AI-powered terminal interactions.
What You'll Learn
In this section, you'll learn how to:
- Install Chat GipiTTY on your system
- Set up your API credentials
- Configure custom API endpoints (optional)
- Run your first commands
- Understand the basic workflow
Prerequisites
Before you begin, make sure you have:
- A POSIX-compliant system (Linux, macOS, or Windows with WSL)
- An OpenAI API key (or credentials for another supported provider)
- Rust and Cargo installed (recommended installation method)
Quick Start
If you're eager to get started, here's the fastest path:
1. Install via Cargo (recommended):
cargo install cgip
2. Set your API key:
export OPENAI_API_KEY=your_key_here
3. Test it out:
echo "Hello, AI!" | cgip "What can you tell me about this greeting?"
Installation Methods
Chat GipiTTY offers several installation options to suit different preferences and environments. The recommended method is via Cargo for the latest version and best compatibility.
Recommended: Cargo Installation
The most reliable way to install and keep Chat GipiTTY updated.
Alternative Methods
Other installation options including manual builds and package managers.
Next Steps
Once you have Chat GipiTTY installed:
- Set up your environment - Configure API keys and optional settings
- Learn basic usage - Master the fundamental commands and patterns
- Explore features - Discover advanced capabilities like sessions and web search
Getting Help
If you run into issues during installation or setup:
- Check the Troubleshooting section
- Run cgip --help for command-line assistance
- Visit the project repository for community support
Ready to begin? Let's start with Installation.
Installation
Chat GipiTTY is designed to run on POSIX-compliant systems. This page covers all available installation methods; the recommended approach is installation via Cargo.
Recommended: Install from crates.io with Cargo
The recommended way to install Chat GipiTTY is via Cargo. This ensures you always get the latest version directly from crates.io, with minimal dependencies and maximum compatibility.
If you do not already have Cargo (the Rust package manager) installed, please visit rustup.rs for instructions on installing Rust and Cargo.
cargo install cgip
Upgrading
To upgrade to the latest release, you can use the built-in upgrade command:
cgip upgrade
Alternatively, you can reinstall via Cargo:
cargo install cgip --force
Alternative Installation Methods
Other installation methods are available for convenience, but may not always provide the latest version:
Manual Installation
If you prefer to build from source:
1. Clone the repository:
git clone https://github.com/divanvisagie/chat-gipitty.git
cd chat-gipitty
2. Install using make:
sudo make install
Development Setup
For development purposes, you can run Chat GipiTTY directly from the source:
git clone https://github.com/divanvisagie/chat-gipitty.git
cd chat-gipitty
cargo run -- --help
Platform-Specific Notes
Ubuntu/Debian
On Ubuntu and Debian systems, some additional packages may be required if you plan to build the deb package:
sudo apt-get install build-essential dh-make debhelper devscripts
macOS
No additional setup required beyond having Cargo installed.
Windows
Chat GipiTTY is designed for POSIX systems. On Windows, use WSL (Windows Subsystem for Linux) for the best experience.
Verification
After installation, verify that Chat GipiTTY is working correctly:
cgip --version
You should see version information displayed. If you get a "command not found" error, make sure Cargo's bin directory is in your PATH:
export PATH="$HOME/.cargo/bin:$PATH"
Add this line to your shell profile (.bashrc, .zshrc, etc.) to make it permanent.
Next Steps
Once installed, you'll need to set up your environment with API credentials and any custom configuration.
Setup
After installing Chat GipiTTY, you'll need to configure it with your API credentials and any custom settings. This page covers the essential setup steps to get you started.
API Key Configuration
Chat GipiTTY requires an API key to communicate with AI services. The setup process depends on which provider you're using.
OpenAI API Key
For OpenAI services (recommended), set up your API key as an environment variable:
export OPENAI_API_KEY=your_key_here
To make this permanent, add the export statement to your shell profile:
# For bash users
echo 'export OPENAI_API_KEY=your_key_here' >> ~/.bashrc
source ~/.bashrc
# For zsh users
echo 'export OPENAI_API_KEY=your_key_here' >> ~/.zshrc
source ~/.zshrc
Getting an OpenAI API Key
- Visit OpenAI's website
- Sign up for an account or log in
- Navigate to the API section
- Generate a new API key
- Copy the key and use it in the export command above
Security Note: Keep your API key secure and never commit it to version control. The environment variable approach keeps your key out of your code and configuration files.
Custom API Endpoints
Chat GipiTTY supports any OpenAI-compatible API provider. You can specify a custom API endpoint by setting the OPENAI_BASE_URL environment variable.
Default Behavior
If not set, Chat GipiTTY uses the default OpenAI endpoint:
https://api.openai.com
Provider Examples
Local Ollama Instance
export OPENAI_BASE_URL=http://localhost:11434/v1
Google Gemini (via OpenAI-compatible proxy)
export OPENAI_BASE_URL=https://generativelanguage.googleapis.com/v1beta
Mistral AI
export OPENAI_BASE_URL=https://api.mistral.ai/v1
Other OpenAI-compatible Services
export OPENAI_BASE_URL=https://your-provider.com/v1
Custom Endpoint Patterns
# If your provider uses a different endpoint pattern
export OPENAI_BASE_URL=https://custom-api.com/v2/chat/completions
URL Construction Logic
Chat GipiTTY intelligently constructs the API endpoint:
- If your base URL already contains /chat/completions, it is used as-is
- If your base URL ends with /v1 (or a similar version pattern), /chat/completions is appended
- Otherwise, /v1/chat/completions is appended (the standard OpenAI pattern)
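The rules above can be sketched as a small shell function. This is an illustration of the documented behavior only, not cgip's actual implementation, and the function name is made up:

```shell
# Illustrative sketch of the endpoint-construction rules
# (not cgip's real code; behavior inferred from the documented rules)
build_endpoint() {
    base="${1%/}"  # drop any trailing slash
    case "$base" in
        */chat/completions) printf '%s\n' "$base" ;;                   # already complete: use as-is
        */v1|*/v1beta)      printf '%s/chat/completions\n' "$base" ;;  # version suffix: append path
        *)                  printf '%s/v1/chat/completions\n' "$base" ;;  # default OpenAI pattern
    esac
}

build_endpoint "https://api.mistral.ai/v1"                   # -> https://api.mistral.ai/v1/chat/completions
build_endpoint "https://api.openai.com"                      # -> https://api.openai.com/v1/chat/completions
build_endpoint "https://custom-api.com/v2/chat/completions"  # -> unchanged
```

Running the function against the provider examples above shows why both plain base URLs and fully-specified endpoints work.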
Basic Configuration
Default Model
Set your preferred default model:
cgip config --set model=gpt-4o
View Current Configuration
Check your current settings:
cgip config --get model
Available Configuration Options
Common configuration options include:
- model: Default AI model to use
- system_prompt: Default system prompt for conversations
- max_tokens: Maximum response length
- temperature: Response creativity (0.0 to 2.0)
Session Management Setup
To enable session management across terminal sessions, set up a session name:
Unique Session Per Terminal
export CGIP_SESSION_NAME=$(uuidgen)
Daily Sessions (shared across terminals)
export CGIP_SESSION_NAME=$(date -I)
Custom Session Management
# Custom session based on your needs
export CGIP_SESSION_NAME="my-project-session"
Add your chosen session configuration to your shell profile for persistence.
Verification
Test your setup with a simple command:
echo "Hello, world!" | cgip "What can you tell me about this text?"
If everything is configured correctly, you should receive a response from the AI model.
Common Issues
"API key not found" error: Make sure you've exported the OPENAI_API_KEY
environment variable in your current shell session.
"Model not found" error: Check that your configured model is available with your API provider.
Network errors: Verify your OPENAI_BASE_URL
is correct and accessible.
Next Steps
Now that Chat GipiTTY is set up, you're ready to:
- Learn basic usage patterns
- Explore core features like sessions and web search
- Try different subcommands for specialized tasks
Advanced Configuration
For more detailed configuration options, see:
- Environment Variables - Complete list of configuration options
- Custom API Endpoints - Detailed provider setup guides
- Configuration - Advanced configuration management
Basic Usage
This page covers the fundamental usage patterns of Chat GipiTTY. Once you understand these core concepts, you'll be able to effectively integrate AI assistance into your terminal workflow.
Core Concepts
Chat GipiTTY follows a simple input priority system:
- stdin - Input piped from other commands
- command arguments - Text provided as arguments
- files - Content read from files with the -f flag
This priority system allows for flexible composition with other Unix tools.
Basic Command Structure
The general syntax for Chat GipiTTY is:
cgip [OPTIONS] [QUERY] [COMMAND]
Simplest Usage
Ask a direct question:
cgip "What is the capital of France?"
Piping Input
Pipe output from other commands for AI analysis:
ls -la | cgip "What can you tell me about these files?"
ps aux | cgip "Are there any processes that look suspicious?"
Using Files as Context
Include file content in your query:
cgip "explain this code" -f src/main.rs
Combining Inputs
You can combine multiple input sources:
# Pipe + argument + file
cat error.log | cgip "analyze this error" -f config.yaml
Common Usage Patterns
Debugging and Development
Debug compilation errors:
cargo build 2>&1 | cgip "what's wrong with this code?"
Analyze git logs:
git log --oneline -10 | cgip "summarize these recent changes"
Review code:
cgip "review this function for potential issues" -f utils.py
System Administration
Analyze system resources:
df -h | cgip "are there any disk space concerns?"
Review logs:
tail -n 50 /var/log/syslog | cgip "any critical issues in these logs?"
Network diagnostics:
netstat -tuln | cgip "explain what services are running"
Text Processing
Convert formats:
cgip "convert this JSON to YAML" -f data.json
Summarize content:
cgip "provide a brief summary" -f long-document.md
Generate documentation:
cgip "create documentation for this function" -f my-function.py
Data Analysis
Analyze CSV data:
head -20 data.csv | cgip "what patterns do you see in this data?"
Process command output:
find . -name "*.py" | wc -l | cgip "what does this number tell us about the project?"
Working with Different Input Types
Standard Input (stdin)
When you pipe data to cgip, it becomes the primary context:
echo "Hello World" | cgip "translate this to French"
Command Arguments
Direct text queries:
cgip "write a Python function to calculate fibonacci numbers"
File Input
Reference file contents:
cgip "optimize this SQL query" -f slow-query.sql
Combining All Three
# Error from compilation + question + config file context
cargo test 2>&1 | cgip "why is this test failing?" -f Cargo.toml
Output Modes
Standard Output
By default, cgip outputs the AI response to stdout:
cgip "hello" > response.txt
Show Context
See what context is being sent to the AI:
cgip "test" --show-context
Markdown Formatting
Display context in a human-readable table:
cgip "test" --markdown --show-context
Model Selection
Using Different Models
Specify a model for your query:
cgip --model gpt-4o "complex reasoning task"
List Available Models
See what models are available:
cgip --list-models
Set Default Model
Configure your preferred model:
cgip config --set model=gpt-4o
System Prompts
Custom System Prompt
Provide context about how the AI should respond:
cgip --system-prompt "You are an expert Python developer" "review this code" -f script.py
Set Default System Prompt
Configure a default system prompt:
cgip config --set system_prompt="Always provide concise, technical answers"
Progress and Debugging
Show Progress
Display a progress indicator (useful for long operations):
cgip --show-progress "analyze this large dataset" -f big-data.csv
Context Inspection
When things aren't working as expected, examine the context being sent:
echo "test input" | cgip "query" --show-context --markdown
Error Handling
Stderr Redirection
Since cgip reads from stdin, redirect stderr to stdout for error analysis:
cargo build 2>&1 | cgip "what's the error?"
Network Issues
If you encounter API errors, check your configuration:
cgip config --get model
echo $OPENAI_API_KEY | cut -c1-10 # Check if key is set (shows first 10 chars)
Best Practices
Be Specific
Instead of:
cgip "fix this"
Try:
cgip "identify and suggest fixes for any Python syntax errors" -f script.py
Use Context Effectively
Provide relevant context for better results:
# Good: includes error and relevant file
python script.py 2>&1 | cgip "debug this error" -f script.py
# Less effective: only the error
python script.py 2>&1 | cgip "fix this"
Combine with Unix Tools
Leverage the Unix philosophy:
# Filter and analyze
grep "ERROR" app.log | tail -20 | cgip "what's causing these errors?"
# Process and summarize
find . -name "*.js" -exec wc -l {} + | cgip "analyze the code distribution"
Next Steps
Now that you understand basic usage, explore:
- Core Features - Sessions, web search, and advanced features
- Subcommands - Specialized commands for images, TTS, and more
- Examples - Real-world usage scenarios and workflows
Core Features
Chat Gipitty is a command-line tool that leverages OpenAI's models to address and respond to user queries. It provides several core features that make it powerful and flexible for various use cases.
Context Compilation
Chat Gipitty compiles context for queries by prioritizing input in a specific order:
- stdin - Input piped from other commands
- Command-line arguments - Direct text provided as arguments
- Files - Content from files specified with the -f flag
This ordering allows for flexible composition of prompts from multiple sources.
Model Support
While optimized for OpenAI's ChatGPT (with GPT-4 as the default model), Chat Gipitty works with multiple providers through OpenAI-compatible endpoints:
- OpenAI (ChatGPT, GPT-4, GPT-3.5, etc.)
- Local models via Ollama
- Google Gemini (via OpenAI-compatible endpoints)
- Mistral AI (via OpenAI-compatible endpoints)
- Anthropic Claude (via OpenAI-compatible endpoints)
- Any other provider implementing the OpenAI Chat Completions API standard
Session Management
Chat Gipitty supports continuous chat sessions that persist across multiple interactions. Sessions are managed through the CGIP_SESSION_NAME environment variable, giving users control over session uniqueness and persistence.
Web Search Integration
The /search command prefix enables web search functionality:
- For GPT models: Automatically switches to gpt-4o-search-preview for optimal search results
- For non-GPT models: Adds web search capabilities while maintaining your configured model
Multimodal Capabilities
Image Analysis
- Analyze images using vision-capable models (GPT-4o)
- Extract text from images
- Describe visual content
- Supports JPEG, PNG, GIF, and WebP formats
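As a usage sketch, an image query might look like the following. The subcommand name and flag here are assumptions, not confirmed syntax; check cgip --help on your version:

```shell
# Hypothetical invocation — the subcommand and flags may differ in your cgip version
cgip image -f screenshot.png "what error message is shown in this screenshot?"
```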
Text-to-Speech
- Convert text to high-quality audio using OpenAI's TTS models
- Multiple voice options (alloy, echo, fable, onyx, nova, shimmer)
- Various audio formats (MP3, OPUS, AAC, FLAC)
- Customizable speed and instructions
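A hedged sketch of a text-to-speech invocation follows. The voice names come from the list above, but the subcommand name and flags are assumptions; confirm them with cgip --help:

```shell
# Hypothetical invocation — confirm the exact flags with `cgip tts --help`
echo "Build finished successfully" | cgip tts --voice nova --output done.mp3
```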
Embedding Generation
- Generate embedding vectors for text using OpenAI's embedding models
- Support for different embedding models
- Output to file or stdout
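A minimal sketch of generating an embedding might look like this. The subcommand name is an assumption; confirm it with cgip --help:

```shell
# Hypothetical invocation — confirm the subcommand and flags with `cgip --help`
cgip embedding "the quick brown fox" > vector.txt
```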
Agentic Capabilities
The agent subcommand provides an agentic workflow that lets the model:
- Execute shell commands in a specified directory
- Receive command output as feedback
- Iterate through multiple commands to complete complex tasks
- Maintain safety by restricting operations to a chosen directory
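The agent subcommand is named in the text above, but the argument order shown here is a sketch; confirm the exact syntax with cgip agent --help:

```shell
# Hypothetical invocation — the model executes shell commands inside ./my-project only
cgip agent ./my-project "run the test suite and summarize any failures"
```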
Flexible Input/Output
- Piping support: Seamlessly pipe output from other commands
- File input: Read content from files with the -f flag
- Combined input: Mix stdin, arguments, and file content
- Progress indicators: Optional progress display
- Context viewing: Inspect the full context being sent to the model
- Markdown formatting: Human-readable output formatting
Piping and Context
One of Chat Gipitty's most powerful features is its ability to seamlessly integrate with shell pipelines and compile context from multiple sources. This makes it incredibly useful for debugging, analysis, and processing command output.
Context Priority Order
Chat Gipitty compiles context for queries by prioritizing input in this specific order:
- stdin (highest priority)
- Command-line arguments
- Files (specified with the -f flag, lowest priority)
This ordering allows you to build complex prompts by combining different input sources.
Piping Examples
Basic Piping
Pipe command output directly to Chat Gipitty:
# Debug compilation errors
cargo build 2>&1 | cgip "explain this error"
# Analyze log files
tail -f /var/log/app.log | cgip "summarize the errors"
# Process directory listings
ls -la | cgip "what can you tell me about these files?"
Converting stderr to stdout
Many commands output errors to stderr. Use 2>&1 to redirect stderr to stdout so it can be piped:
# Capture both stdout and stderr
cargo build 2>&1 | cgip "give me a short summary of the kind of error this is"
# Debug failed tests
cargo test 2>&1 | cgip "what tests are failing and why?"
Complex Pipeline Examples
# Analyze system processes
ps aux | head -20 | cgip "which processes are using the most resources?"
# Git log analysis
git log --oneline -10 | cgip "summarize the recent changes"
# Network analysis
netstat -an | cgip "are there any suspicious network connections?"
# File system analysis
du -sh * | cgip "which directories are taking up the most space?"
Combining Input Sources
stdin + Arguments
# Pipe input and add additional context
ls -la | cgip "analyze these files and suggest cleanup actions"
# Process command output with specific instructions
docker ps | cgip "which containers might have issues?"
stdin + Files
# Combine piped input with file content
cat error.log | cgip -f config.yaml "analyze this error in context of the config"
Arguments + Files
# Combine direct text with file content
cgip "convert this to python" -f src/main.rs
# Add context to file analysis
cgip "explain this code and suggest improvements" -f script.sh
Context Viewing
Use the --show-context or -c flag to see exactly what context is being sent:
ls | cgip -c "what files are here?"
This will show you the full context including:
- The piped input (ls output)
- Your query
- Any session history
- System prompts
Advanced Context Options
Markdown Output
Use -m or --markdown to format the context as a human-readable table:
ps aux | cgip -m "analyze these processes"
No Session Context
Use -n or --no-session to exclude session history from the context:
sensitive_command | cgip -n "analyze this output"
Progress Indicators
Use -p or --show-progress to see progress (note: this might interfere with stdout):
large_command | cgip -p "process this data"
Best Practices
1. Error Handling
Always use 2>&1 when you want to capture error output:
# Good: Captures both success and error output
command 2>&1 | cgip "analyze the result"
# Limited: Only captures success output
command | cgip "analyze the result"
2. Data Size Considerations
Be mindful of large outputs that might exceed token limits:
# Good: Limit output size
head -100 large_file.log | cgip "analyze these log entries"
# Potentially problematic: Entire large file
cat huge_file.log | cgip "analyze this"
3. Structured Output
Some commands have structured output that works well with Chat Gipitty:
# JSON output
kubectl get pods -o json | cgip "which pods are not running?"
# CSV data
cat data.csv | cgip "find anomalies in this data"
# YAML configuration
cat config.yaml | cgip "check this configuration for issues"
4. Real-time Processing
For real-time log analysis:
# Monitor logs in real-time
tail -f /var/log/app.log | cgip "alert me to any errors"
# Watch system resources
watch -n 5 'ps aux --sort=-%cpu | head -10' | cgip "monitor for resource issues"
Integration with Development Workflow
Code Analysis
# Check code quality
eslint src/ | cgip "what are the main code quality issues?"
# Analyze test failures
npm test 2>&1 | cgip "why are these tests failing?"
System Administration
# Check system health
systemctl status | cgip "are there any service issues?"
# Analyze disk usage
df -h | cgip "do I have any disk space issues?"
Data Processing
# Process CSV data
cat sales_data.csv | cgip "calculate the total revenue by region"
# Analyze API responses
curl -s https://api.example.com/data | cgip "extract key insights from this API response"
Session Management
Chat Gipitty provides powerful session management capabilities that allow for continuous conversations across multiple interactions. Sessions persist context between commands, enabling more natural and contextual conversations.
How Sessions Work
Sessions store the conversation history (both user messages and assistant responses) and reuse this context in subsequent interactions. This allows the model to:
- Remember previous questions and answers
- Build upon earlier context
- Maintain conversation flow across commands
- Provide more coherent and contextually aware responses
Enabling Sessions
Sessions are controlled by the CGIP_SESSION_NAME environment variable. Without this variable, Chat Gipitty operates in stateless mode.
# Enable sessions with a unique session ID
export CGIP_SESSION_NAME=$(uuidgen)
# Or use any custom session name
export CGIP_SESSION_NAME="my-coding-session"
Session Naming Strategies
The uniqueness and persistence of your session depend on how you set the CGIP_SESSION_NAME variable:
Terminal-Specific Sessions
Each new terminal gets its own session:
# Add to ~/.bashrc or ~/.zshrc
export CGIP_SESSION_NAME=$(uuidgen)
Daily Sessions
Same session for the entire day:
export CGIP_SESSION_NAME=$(date -I) # 2024-01-15
Weekly Sessions
Sessions that persist for a week:
export CGIP_SESSION_NAME=$(date +%Y-W%U) # 2024-W03
Project-Based Sessions
Different sessions for different projects:
# Use current directory name
export CGIP_SESSION_NAME="project-$(basename $PWD)"
# Use git repository name if in a git repo
export CGIP_SESSION_NAME="git-$(git rev-parse --show-toplevel | xargs basename 2>/dev/null || echo 'no-git')"
Custom Session Names
Manually managed sessions:
export CGIP_SESSION_NAME="debugging-api-issue"
export CGIP_SESSION_NAME="code-review-session"
export CGIP_SESSION_NAME="learning-rust"
Session Workflow Examples
Continuous Development Session
# First interaction
cgip "I'm working on a Rust web API project"
# Later, without needing to re-explain context
cgip "How should I structure the authentication middleware?"
# Even later
cgip "Can you help me write tests for the authentication we discussed?"
Debugging Session
# Start debugging session
cargo build 2>&1 | cgip "This build is failing, what's wrong?"
# Continue debugging with context
cgip "How can I fix the lifetime issues you mentioned?"
# Test the fix
cargo build 2>&1 | cgip "Is this error related to our previous discussion?"
Learning Session
# Initial question
cgip "I'm learning about async programming in Rust"
# Follow-up questions maintain context
cgip "Can you give me a practical example?"
# Build on previous examples
cgip "How would I modify that example to handle errors?"
Session Management Commands
The session subcommand provides tools for managing your current session:
View Session Context
cgip session --view
Shows all messages in the current session, allowing you to see what context will be included in future queries.
Clear Session
cgip session --clear
Removes all stored context from the current session, starting fresh while keeping the same session name.
Session Best Practices
1. Choose Appropriate Session Scope
- Terminal sessions (uuidgen): Best for isolated work sessions
- Daily sessions (date -I): Good for ongoing projects
- Project sessions: Best for long-term project work
2. Clear Sessions When Needed
Clear sessions when:
- Context becomes too long and irrelevant
- Switching to a completely different topic
- The model becomes confused by accumulated context
- Working with sensitive information
cgip session --clear
3. Use Session-Aware Queries
Take advantage of session context:
# Instead of repeating context
cgip "In the React project I mentioned earlier, how should I..."
# You can simply say
cgip "How should I implement user authentication in this project?"
4. Bypass Sessions When Needed
Use the --no-session flag for one-off queries:
# Quick lookup that doesn't need session context
cgip -n "What's the syntax for Python list comprehensions?"
# Sensitive query that shouldn't be stored
sensitive_command | cgip -n "analyze this output"
Session Storage
Sessions are stored locally on your system in the Chat Gipitty configuration directory. The exact location depends on your operating system:
- Linux: ~/.config/cgip/sessions/
- macOS: ~/Library/Application Support/cgip/sessions/
- Windows: %APPDATA%\cgip\sessions\
Each session is stored as a separate file named after your CGIP_SESSION_NAME.
Privacy and Security
Session Privacy
- Sessions are stored locally and never sent to external servers
- Only the session content (messages) is sent to the API, not metadata
- Session files are readable only by your user account
Managing Sensitive Sessions
For sensitive work:
# Use temporary session names
export CGIP_SESSION_NAME="temp-$(date +%s)"
# Clear after sensitive work
cgip session --clear
# Or use no-session mode
cgip -n "sensitive query"
Advanced Session Patterns
Conditional Session Management
# Different session strategies based on directory
if [[ "$PWD" == *"work"* ]]; then
export CGIP_SESSION_NAME="work-$(date -I)"
else
export CGIP_SESSION_NAME="personal-$(date -I)"
fi
Session Inheritance
# Base session for project
export CGIP_BASE_SESSION="project-myapp"
# Feature-specific sessions that inherit context
export CGIP_SESSION_NAME="${CGIP_BASE_SESSION}-auth"
export CGIP_SESSION_NAME="${CGIP_BASE_SESSION}-frontend"
Multi-User Environments
# Include username in session names
export CGIP_SESSION_NAME="$(whoami)-$(date -I)"
# Or use more specific naming
export CGIP_SESSION_NAME="$(whoami)-$(basename $PWD)-$(date +%Y%m%d)"
Troubleshooting Sessions
Session Not Working
If sessions aren't working:
1. Check that CGIP_SESSION_NAME is set:
echo $CGIP_SESSION_NAME
2. Verify session storage permissions:
ls -la ~/.config/cgip/sessions/  # Linux
ls -la ~/Library/Application\ Support/cgip/sessions/  # macOS
3. Clear and restart the session:
cgip session --clear
Session Too Long
If sessions become too long and unwieldy:
- Clear the session: cgip session --clear
- Use more specific session names for different topics
- Use --no-session for unrelated queries
Context Confusion
If the model seems confused by session context:
- View the session: cgip session --view
- Clear irrelevant context: cgip session --clear
- Start a new session with a different name
Web Search
Chat Gipitty supports web search functionality through the /search command prefix or the --search flag. When you start your message with /search or pass the --search option, the application enables web search to provide you with up-to-date information from the internet.
How Web Search Works
The web search feature adapts based on your configured model:
- For GPT models (models starting with "gpt"): The application automatically switches to the gpt-4o-search-preview model and enables web search options for optimal search results.
- For non-GPT models (like Claude, Llama, or other custom models): The application keeps your configured model and adds web search options to the request.
Basic Usage
To use web search, prefix your query with /search or use the --search flag:
# Using the /search prefix
cgip "/search What are the latest developments in AI?"
# Using the --search flag
cgip --search "What are the latest developments in AI?"
The /search prefix is automatically removed from your message before it is sent to the model. Using --search applies the same behavior without needing the prefix.
Usage Examples
Current Events
# Search for recent news
cgip --search "What are the latest developments in renewable energy?"
# Get current market information
cgip --search "What is the current price of Bitcoin?"
# Find recent technology updates
cgip --search "What are the new features in the latest Python release?"
Technical Information
# Search for current best practices
cgip --search "What are the current best practices for React performance optimization?"
# Find up-to-date documentation
cgip --search "How to configure Docker containers for production in 2024?"
# Get current software versions
echo "What is the current stable version of Rust?" | cgip --search
Research and Analysis
# Market research
cgip --search "What are the current trends in mobile app development?"
# Academic research
cgip --search "What are the latest findings on climate change mitigation?"
# Competitive analysis
cgip --search "What are the main competitors to OpenAI in the AI space?"
Combining Web Search with Other Features
Web Search with File Input
You can combine web search with file analysis:
# Search for context about your code
cgip --search "How can I optimize this code for performance?" -f my_script.py
# Get current information about technologies in your project
cgip --search "What are the latest security best practices for this framework?" -f package.json
Web Search with Piped Input
Web search works with piped input as well:
# Search for solutions to error messages
command_that_fails 2>&1 | cgip "/search How to fix this error?"
# Get current information about command output
ps aux | cgip "/search What do these system processes indicate about performance?"
Web Search in Sessions
Web search results become part of your session context:
# First query with search
cgip "/search What are the current JavaScript frameworks for 2024?"
# Follow-up question using search results
cgip "Which of these would be best for a small team project?"
Model Behavior
GPT Models
When using GPT models with web search:
- Automatically switches to `gpt-4o-search-preview`
- Provides real-time web search results
- Cites sources when possible
- Combines web information with the model's knowledge
Non-GPT Models
When using other models (Claude, Llama, etc.):
- Keeps your configured model
- Adds web search capabilities to the request
- May have varying levels of web search integration depending on the provider
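The model-selection rule above can be sketched as a small shell function. This is a hedged illustration of the described behavior; the real logic lives in the cgip source:

```shell
# Assumed rule: GPT models swap to the search-preview model,
# everything else is kept as configured
pick_search_model() {
  case "$1" in
    gpt-*) echo "gpt-4o-search-preview" ;;
    *)     echo "$1" ;;
  esac
}

pick_search_model "gpt-4o"   # → gpt-4o-search-preview
pick_search_model "llama3"   # → llama3
```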
Best Practices
1. Be Specific
More specific search queries yield better results:
# Good: Specific query
cgip "/search React 18 performance optimization techniques 2024"
# Less effective: Vague query
cgip "/search React performance"
2. Use Current Context
Include temporal context when relevant:
# Good: Includes timeframe
cgip "/search Current cybersecurity threats in 2024"
# Good: Includes version
cgip "/search Python 3.12 new features and changes"
3. Combine with Local Context
Use web search to enhance local analysis:
# Analyze local file with current best practices
cgip "/search Current Node.js security best practices" -f package.json
4. Follow-up Questions
Use session context to build on search results:
# Initial search
cgip "/search Latest trends in machine learning deployment"
# Follow-up without search (uses previous context)
cgip "Which of these trends would be most relevant for a startup?"
Limitations and Considerations
Rate Limits
- Web search may be subject to additional rate limits
- Consider the cost implications of web search requests
Accuracy
- Always verify important information from multiple sources
- Web search results reflect current information but may not always be accurate
Privacy
- Web search queries may be logged by the API provider
- Be mindful of sensitive information in search queries
Model Compatibility
- Web search effectiveness varies by model and provider
- Some custom endpoints may not support web search features
Troubleshooting
Search Not Working
If web search doesn't seem to be working:
1. Check your model configuration:
cgip config --get model
2. Verify that your API endpoint supports web search:
cgip config --get base_url
3. Try with a GPT model explicitly:
cgip -M gpt-4o "/search test query"
Limited Results
If you're getting limited search results:
- Try rephrasing your query
- Be more specific about what you're looking for
- Check if your API provider has web search enabled
Subcommands
Chat GipiTTY provides several specialized subcommands for different types of AI-powered tasks. Each subcommand is optimized for specific use cases, from image analysis to text-to-speech generation.
Overview of Available Subcommands
Subcommand | Purpose | Key Features |
---|---|---|
view | Context inspection | View context without API calls |
config | Configuration management | Get/set configuration values |
session | Session management | View and clear conversation history |
image | Image analysis | Analyze images with vision models |
tts | Text-to-speech | Convert text to high-quality audio |
embedding | Text embeddings | Generate vector representations |
agent | Autonomous execution | Let AI execute shell commands |
upgrade | Software updates | Upgrade to latest version |
Subcommand Categories
Core Utilities
- view: Debug and inspect context before sending to AI
- config: Manage your Chat GipiTTY configuration
- session: Control conversation context and history
AI Capabilities
- image: Multi-modal image understanding and analysis
- tts: High-quality voice synthesis
- embedding: Vector generation for semantic search
- agent: Autonomous task execution with tool calling
Maintenance
- upgrade: Keep Chat GipiTTY up to date
Common Usage Patterns
Inspection and Debugging
Use `view` to understand what context will be sent to the AI:
echo "test data" | cgip view
Configuration Management
Set up your preferred defaults:
cgip config --set model=gpt-4o
cgip config --set system_prompt="Be concise and technical"
Multi-Modal Tasks
Analyze images alongside text:
cgip image --file screenshot.png "What errors do you see in this code?"
Voice Output
Convert AI responses to speech:
cgip "explain quantum computing" | cgip tts --voice nova --output explanation.mp3
Autonomous Workflows
Let the AI execute commands to complete tasks:
cgip agent ~/project "run tests and fix any failing ones"
Subcommand Help
Each subcommand has its own help system:
cgip <subcommand> --help
For example:
cgip image --help
cgip tts --help
cgip agent --help
Combining Subcommands
While subcommands are typically used independently, you can combine them in powerful ways:
Image Analysis to Speech
cgip image --file chart.png "describe this data" | cgip tts --voice alloy
Code Analysis to Documentation
cgip "document this function" -f code.py > docs.md
cgip tts --voice fable < docs.md
Session Management with Analysis
# Start a session, analyze, then view history
cgip "analyze this error" -f error.log
cgip session --view
Next Steps
Explore each subcommand in detail:
- Start with view to understand context inspection
- Learn config for personalized setup
- Try image for visual AI capabilities
- Experiment with tts for voice synthesis
- Discover agent for autonomous workflows
Each subcommand page includes detailed examples, options, and best practices for that specific functionality.
View Command
The `view` command allows you to inspect the context that would be sent to the AI model without actually making an API call. This is invaluable for debugging, understanding how your inputs are processed, and optimizing your queries.
Overview
The `view` command renders the complete context that Chat GipiTTY would send to the AI, including:
- Input from stdin
- Command line arguments
- File contents (when using `-f`)
- Session history (if enabled)
- System prompts
Basic Usage
cgip view
Command Help
Render the context without running a query against the model
Usage: cgip view
Options:
-h, --help Print help
-V, --version Print version
Examples
Basic Context Inspection
View what gets sent when combining different inputs:
echo "Here is some data" | cgip view "What can you tell me about this?"
File Content Preview
See how file content is included in context:
cgip view "Analyze this code" -f src/main.rs
Complex Context
Inspect complex multi-source contexts:
grep "ERROR" app.log | cgip view "Analyze these errors" -f config.yaml
Session Context
View current session context (when sessions are enabled):
export CGIP_SESSION_NAME=$(date -I)
cgip "Hello, I'm working on a project"
cgip view "What were we discussing?"
Use Cases
Debugging Input Processing
When your AI responses aren't what you expect, use `view` to understand what context is actually being sent:
# Check if your pipe is working correctly
ps aux | head -10 | cgip view "analyze these processes"
Optimizing Context Size
Large contexts can be expensive and slow. Use `view` to see how much context you're sending:
# Check context size before sending
cat large-file.txt | cgip view "summarize this"
Understanding File Processing
See exactly how files are being read and included:
cgip view "explain this configuration" -f docker-compose.yml
Session Context Review
Before important queries, review what conversation history will be included:
cgip view "based on our previous discussion, what should I do next?"
Output Formatting
The `view` command outputs the context in a structured format showing:
- System Messages: Any system prompts or instructions
- User Messages: Your input, including stdin, arguments, and files
- Assistant Messages: Previous AI responses (if session is active)
- Context Metadata: Information about the context structure
Example Output
=== CONTEXT VIEW ===
System Message:
You are a helpful AI assistant.
User Message:
[stdin]: Here is some error output
[argument]: What caused this error?
[file: config.yaml]:
database:
host: localhost
port: 5432
Assistant Message (from session):
Based on the previous error, it seems like a connection issue...
=== END CONTEXT ===
Working with Other Options
The `view` command respects the same options as regular queries:
Custom System Prompt
cgip view --system-prompt "You are a senior developer" "review this code" -f app.py
Different Models
cgip view --model gpt-4o "complex analysis task"
Show Context Options
Combine with context display options:
cgip view --show-context --markdown "test query"
Best Practices
Before Expensive Queries
Always use `view` before sending large or complex contexts:
# Preview first
find . -name "*.py" -exec cat {} \; | cgip view "analyze all Python files"
# If context looks good, run the actual query
find . -name "*.py" -exec cat {} \; | cgip "analyze all Python files"
Debugging Unexpected Results
When AI responses don't match expectations:
# See what context was actually sent
cat data.csv | cgip view "analyze this data"
Context Size Management
Monitor context size for cost and performance:
# Check context before sending
tail -n 1000 huge-log.txt | cgip view "find critical errors"
Session State Verification
Verify session state before important queries:
cgip view "continue our previous analysis"
Troubleshooting
Empty Context
If `view` shows no context:
- Check that your input methods (stdin, files) are working
- Verify file paths are correct
- Make sure you're not in a new session when expecting history
Unexpected Context
If context includes unexpected content:
- Check for active sessions that might include previous conversation
- Verify file contents are what you expect
- Look for hidden characters in piped input
Missing Context
If expected context is missing:
- Ensure stdin is properly piped (use `2>&1` for stderr)
- Verify file permissions and paths
- Check that session is properly configured
Next Steps
- Learn about session management to control conversation history
- Explore configuration options to customize default behavior
- Try basic usage patterns with the confidence of knowing your context
Config Command
The `config` command allows you to set and retrieve default configuration values for Chat GipiTTY. This eliminates the need to specify common options repeatedly and provides a centralized way to manage your preferences.
Overview
The configuration system uses a TOML file located in your system's config directory under `cgip/config.toml`. The config command provides a convenient interface to manage these settings without manually editing the file.
Basic Usage
# Set a configuration value
cgip config --set key=value
# Get a configuration value
cgip config --get key
Command Help
Set or get default configuration values with your config.toml
Usage: cgip config [OPTIONS]
Options:
-s, --set <SET> Set a configuration value. Use the format key=value. `cgip config --set model=gpt-4-turbo`
-g, --get <GET> Get your current configuration value. `cgip config --get model`
-h, --help Print help
-V, --version Print version
Configuration Options
Model Settings
Set your preferred default model:
cgip config --set model=gpt-4o
Get your current model:
cgip config --get model
System Prompt
Configure a default system prompt:
cgip config --set system_prompt="You are a helpful coding assistant. Provide concise, accurate answers."
API Settings
Configure API-related settings:
# Set custom base URL (alternatively use OPENAI_BASE_URL env var)
cgip config --set base_url=http://localhost:11434/v1
# Set default max tokens
cgip config --set max_tokens=2000
Response Settings
Control response behavior:
# Set creativity level (0.0 to 2.0)
cgip config --set temperature=0.7
# Set response diversity
cgip config --set top_p=0.9
Common Configuration Examples
For Development Work
cgip config --set model=gpt-4o
cgip config --set system_prompt="You are an expert software engineer. Provide technical, detailed responses with code examples when relevant."
cgip config --set temperature=0.3
For Creative Tasks
cgip config --set model=gpt-4
cgip config --set system_prompt="You are a creative writing assistant. Be imaginative and expressive."
cgip config --set temperature=1.2
For Local AI Models
cgip config --set base_url=http://localhost:11434/v1
cgip config --set model=llama3
cgip config --set system_prompt="You are a helpful assistant running locally."
Viewing All Configuration
To see all your current configuration settings, you can manually inspect the config file:
# On Linux/macOS
cat ~/.config/cgip/config.toml
# On macOS (alternative location)
cat ~/Library/Application\ Support/cgip/config.toml
Configuration File Format
The configuration file uses TOML format:
model = "gpt-4o"
system_prompt = "You are a helpful assistant."
temperature = 0.7
max_tokens = 1500
base_url = "https://api.openai.com"
Environment Variables vs Configuration
Configuration precedence (highest to lowest):
1. Command-line options (e.g., `--model gpt-4`)
2. Environment variables (e.g., `OPENAI_BASE_URL`)
3. Configuration file (set via `cgip config`)
4. Built-in defaults
This means you can override configuration file settings with environment variables or command-line options when needed.
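The precedence chain can be illustrated with shell parameter expansion. The variable names here are purely illustrative, not cgip internals:

```shell
# First non-empty value wins: CLI flag > env var > config file > built-in default
model_from_cli=""            # would come from --model / -M
model_from_env=""            # would come from an environment variable
model_from_config="gpt-4o"   # value stored in config.toml
default_model="gpt-4"        # built-in default (GPT-4, per the introduction)

model="${model_from_cli:-${model_from_env:-${model_from_config:-$default_model}}}"
echo "$model"   # → gpt-4o: the config value wins because no flag or env var is set
```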
Common Configuration Patterns
Project-Specific Configuration
For different projects, you might want different settings:
# Web development project
cgip config --set system_prompt="You are a web development expert. Focus on modern JavaScript, React, and Node.js."
# Data science project
cgip config --set system_prompt="You are a data science expert. Focus on Python, pandas, and machine learning."
Provider-Specific Settings
When switching between different AI providers:
# OpenAI setup
cgip config --set base_url=https://api.openai.com
cgip config --set model=gpt-4o
# Local Ollama setup
cgip config --set base_url=http://localhost:11434/v1
cgip config --set model=llama3
Resetting Configuration
To reset a configuration value to its default:
# This will remove the setting from your config file
cgip config --set model=
Or manually edit the config file to remove unwanted settings.
Troubleshooting
Configuration Not Applied
If your configuration changes aren't taking effect:
- Check the config file location and permissions
- Verify the TOML syntax is correct
- Remember that command-line options override config file settings
- Check for environment variables that might override config
Invalid Configuration Values
If you set an invalid value:
# Invalid model name
cgip config --set model=invalid-model
You'll get an error when trying to use Chat GipiTTY. Use `cgip config --get model` to verify your settings.
Config File Location
If you're unsure where your config file is located:
# The config command will create the file if it doesn't exist
cgip config --set temp_key=temp_value
cgip config --set temp_key= # Remove the temporary key
Check your system's standard config directory:
- Linux: `~/.config/cgip/config.toml`
- macOS: `~/Library/Application Support/cgip/config.toml` or `~/.config/cgip/config.toml`
Best Practices
Start with Essentials
Set up the most important settings first:
cgip config --set model=gpt-4o
cgip config --set system_prompt="Be concise and technical"
Use Descriptive System Prompts
Create system prompts that clearly define the AI's role:
cgip config --set system_prompt="You are a senior software engineer specializing in Python and web development. Provide practical, production-ready solutions."
Test Your Configuration
After setting up configuration, test it:
cgip "Hello, test my configuration"
Back Up Your Configuration
Since configuration improves your workflow, consider backing up your config file:
cp ~/.config/cgip/config.toml ~/backups/cgip-config-backup.toml
Next Steps
- Learn about environment variables for temporary overrides
- Explore advanced usage patterns with your configured defaults
- Try different models to find what works best for your use cases
Session Command
The `session` subcommand provides tools for managing your current conversation session. It allows you to view, clear, and manage the context that Chat Gipitty maintains between interactions.
Overview
Sessions in Chat Gipitty store conversation history to enable continuous, contextual conversations. The session command gives you control over this stored context.
Usage
cgip session [OPTIONS]
Options
--view, -v
View the current session context, showing all stored messages.
cgip session --view
This displays:
- All user messages from the session
- All assistant responses
- The total number of messages
- Session metadata (name, creation time, etc.)
--clear, -c
Clear all stored context from the current session.
cgip session --clear
This removes all conversation history while keeping the session name active for future interactions.
Examples
Viewing Session Content
# Check what's in your current session
cgip session --view
Example output:
Session: my-coding-session (5 messages)
Created: 2024-01-15 10:30:00
[User] I'm working on a Rust web API
[Assistant] That's great! Rust is excellent for web APIs due to its performance and safety...
[User] How should I handle authentication?
[Assistant] For authentication in a Rust web API, you have several good options...
[User] Can you show me an example with JWT?
[Assistant] Here's a practical example of JWT authentication in Rust...
Clearing Session History
# Clear current session to start fresh
cgip session --clear
Example output:
Session 'my-coding-session' cleared successfully.
Checking Session Status
# View current session (will show empty if no context)
cgip session --view
If no session is active:
No active session. Set CGIP_SESSION_NAME to enable sessions.
Session Workflow
Typical Session Management Workflow
1. Start a session by setting the session name:
export CGIP_SESSION_NAME="debugging-session"
2. Have conversations with Chat Gipitty:
cgip "I'm having issues with my React component"
cgip "The error says 'Cannot read property of undefined'"
3. Check session context when needed:
cgip session --view
4. Clear the session when the context becomes irrelevant:
cgip session --clear
When to Use Session Commands
View Session (`--view`)
Use when you want to:
- See what context will be included in your next query
- Debug why the model's responses seem off-topic
- Review the conversation history
- Check if sensitive information is stored
- Understand how much context has accumulated
Clear Session (`--clear`)
Use when you need to:
- Start a completely new topic
- Remove irrelevant or confusing context
- Clear sensitive information from the session
- Reset after the context has become too long
- Fix issues where the model seems confused
Integration with Other Commands
Session-Aware Queries
All regular Chat Gipitty commands use session context:
# First query establishes context
cgip "I'm learning Rust ownership concepts"
# Later queries build on this context
cgip "Can you give me an example of borrowing?"
cgip "What about mutable references?"
# Check what context is being used
cgip session --view
Bypassing Sessions
Use `--no-session` to ignore session context:
# This won't add to or use session context
cgip --no-session "What's the weather like?"
# Session context remains unchanged
cgip session --view # Still shows previous Rust discussion
Session Information Display
When viewing sessions, you'll see:
Message Count
The total number of messages (user + assistant) stored.
Session Name
The current `CGIP_SESSION_NAME` value.
Creation Time
When the session was first created.
Message History
All user questions and assistant responses in chronological order.
Context Size
Indication of how much context is being stored (useful for token management).
Best Practices
Regular Session Maintenance
# Check session size periodically
cgip session --view | head -1
# Clear when context becomes too large or irrelevant
cgip session --clear
Topic Transitions
# Before switching topics, check current context
cgip session --view
# Clear if the new topic is unrelated
cgip session --clear
# Then start the new topic
cgip "Now I want to learn about Docker containers"
Debugging with Session Commands
# If responses seem off-topic, check session context
cgip session --view
# Clear and retry if context is confusing
cgip session --clear
cgip "Let me rephrase my question..."
Error Handling
No Session Active
cgip session --view
# Output: No active session. Set CGIP_SESSION_NAME to enable sessions.
Solution:
export CGIP_SESSION_NAME="my-session"
cgip session --view
Empty Session
cgip session --view
# Output: Session 'my-session' is empty.
This is normal for new sessions or after clearing.
Permission Issues
If you get permission errors, check session storage directory:
# Linux
ls -la ~/.config/cgip/sessions/
# macOS
ls -la ~/Library/Application\ Support/cgip/sessions/
Advanced Usage
Scripting with Session Commands
#!/bin/bash
# Script to manage coding sessions
# Check if session has too many messages
MESSAGE_COUNT=$(cgip session --view | grep -o '[0-9]* messages' | cut -d' ' -f1)
if [ "$MESSAGE_COUNT" -gt 20 ]; then
echo "Session has $MESSAGE_COUNT messages, clearing..."
cgip session --clear
fi
Session Inspection
# Get session info for debugging
cgip session --view > session_dump.txt
# Analyze session content
grep "User\]" session_dump.txt | wc -l # Count user messages
grep "Assistant\]" session_dump.txt | wc -l # Count assistant messages
Conditional Session Management
# Clear session if working directory changes
CURRENT_PROJECT=$(basename $PWD)
if [[ "$LAST_PROJECT" != "$CURRENT_PROJECT" ]]; then
cgip session --clear
export LAST_PROJECT="$CURRENT_PROJECT"
fi
Image Command
The `image` command allows you to analyze images using OpenAI's vision-capable models (like GPT-4o). This feature enables you to ask questions about images, extract text from images, describe visual content, and more.
Overview
The image subcommand automatically ensures you're using a vision-capable model and will switch to `gpt-4o` if your current model doesn't support vision. It supports multiple image formats and provides flexible prompting options.
Basic Usage
cgip image --file photo.jpg "What do you see in this image?"
Command Help
Analyze an image using vision models. Use --file to specify the image path
Usage: cgip image [OPTIONS] --file <FILE> [PROMPT]
Arguments:
[PROMPT] Additional prompt text to include with the image analysis
Options:
-f, --file <FILE> Path to the image file to analyze
-m, --max-tokens <MAX_TOKENS> Maximum number of tokens in the response [default: 300]
-h, --help Print help
-V, --version Print version
Supported Image Formats
The image command supports the following formats:
- JPEG (.jpg, .jpeg)
- PNG (.png)
- GIF (.gif)
- WebP (.webp)
Basic Examples
Simple Image Description
cgip image --file photo.jpg
# Uses default prompt: "What is in this image?"
Custom Analysis Prompt
cgip image --file screenshot.png "Extract all text from this image"
Detailed Analysis
cgip image --file diagram.jpg "Explain this diagram in detail" --max-tokens 500
Common Use Cases
Text Extraction (OCR)
Extract text from screenshots, documents, or signs:
cgip image --file receipt.jpg "Extract all text from this receipt and format it as a list"
cgip image --file whiteboard.png "What does the text on this whiteboard say?"
Document Analysis
Analyze documents, receipts, forms, and other text-heavy images:
cgip image --file receipt.jpg "What items are on this receipt and what's the total?"
cgip image --file form.png "Help me fill out this form - what information is required?"
Code Analysis
Analyze screenshots of code or development environments:
cgip image --file code_screenshot.png "What does this code do? Are there any potential issues?"
cgip image --file error_screen.png "What error is shown here and how can I fix it?"
Visual Content Description
Describe photos, artwork, charts, and other visual content:
cgip image --file chart.png "Describe the data shown in this chart"
cgip image --file artwork.jpg "Describe the style and composition of this artwork"
Technical Diagrams
Analyze technical diagrams, flowcharts, and system architectures:
cgip image --file architecture.png "Explain this system architecture diagram"
cgip image --file flowchart.jpg "Walk me through this process flowchart"
Advanced Usage
Combining with Other Commands
Process image analysis results with other tools:
# Save analysis to file
cgip image --file diagram.png "Explain this" --max-tokens 1000 > analysis.txt
# Convert analysis to speech
cgip image --file chart.jpg "Describe this data" | cgip tts --voice nova
Multiple Images Workflow
While the command handles one image at a time, you can process multiple images:
# Process multiple screenshots
for img in screenshots/*.png; do
echo "=== Analyzing $img ==="
cgip image --file "$img" "What error or issue is shown here?"
done
Context-Aware Analysis
Combine image analysis with additional context:
# Analyze error with additional context
cgip image --file error.png "Given that this is a React application, what's causing this error?"
Response Length Control
Default Token Limit
The default max-tokens is 300, suitable for brief descriptions:
cgip image --file photo.jpg "Briefly describe this image"
Extended Analysis
For detailed analysis, increase the token limit:
cgip image --file complex_diagram.png "Provide a comprehensive analysis" --max-tokens 1000
Concise Responses
For very brief responses, use focused prompts:
cgip image --file text.png "Extract only the main heading" --max-tokens 50
Model Selection
Automatic Model Selection
The image command automatically uses vision-capable models:
- If your configured model supports vision, it uses that model
- If your configured model doesn't support vision, it switches to
gpt-4o
Supported Vision Models
Currently supported vision-capable models include:
- gpt-4o
- gpt-4-vision-preview
- gpt-4-turbo
- gpt-4
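The fallback rule can be sketched as follows. This is an assumption based on the description above, not the actual cgip source:

```shell
# If the configured model is in the vision list, keep it; otherwise fall back to gpt-4o
ensure_vision_model() {
  case "$1" in
    gpt-4o|gpt-4-vision-preview|gpt-4-turbo|gpt-4) echo "$1" ;;
    *) echo "gpt-4o" ;;
  esac
}

ensure_vision_model "gpt-4-turbo"   # → gpt-4-turbo (already vision-capable)
ensure_vision_model "llama3"        # → gpt-4o (fallback)
```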
Checking Your Model
Verify your current model configuration:
cgip config --get model
Set a vision-capable model as default:
cgip config --set model=gpt-4o
Image Processing Details
Automatic Processing
Images are automatically:
- Read from the file system
- Encoded as base64
- Sent to the API with appropriate MIME type based on file extension
- Combined with your text prompt for analysis
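The encoding step can be reproduced with standard tools. This is a rough sketch of how such a payload is typically prepared, not the actual cgip code; deriving the MIME type from the extension is an assumption:

```shell
# Stand-in file so the sketch is self-contained; a real run would use an actual image
printf 'hello' > /tmp/demo.jpg
mime="image/jpeg"                            # assumed: derived from the .jpg extension
b64=$(base64 < /tmp/demo.jpg | tr -d '\n')   # base64-encode the raw bytes
data_url="data:${mime};base64,${b64}"        # data URL embedded in the API request
echo "$data_url"   # → data:image/jpeg;base64,aGVsbG8=
```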
File Size Considerations
- Large images may take longer to process
- Very large images might hit API limits
- Consider resizing extremely large images for better performance
Quality Recommendations
- Higher resolution images generally produce better analysis
- Ensure text in images is clearly readable
- Good lighting and contrast improve results
Error Handling
Common Issues and Solutions
File not found:
cgip image --file nonexistent.jpg "analyze this"
# Error: No such file or directory
Check the file path and ensure the file exists.
Unsupported format:
cgip image --file document.pdf "analyze this"
# May not work - PDF is not a supported image format
Convert PDFs to images first or use supported formats.
Network errors: Check your API key and network connection.
Model not available: Ensure you have access to vision-capable models in your API account.
Best Practices
Prompt Engineering
Be specific about what you want:
# Good
cgip image --file chart.png "What are the main trends shown in this sales data chart?"
# Less effective
cgip image --file chart.png "What is this?"
Ask for structured output when needed:
cgip image --file receipt.png "Extract the items and prices as a formatted list"
Image Quality
- Use clear, well-lit images
- Ensure text is readable
- Avoid blurry or low-contrast images
Token Management
- Use appropriate max-tokens for your needs
- Brief descriptions: 100-300 tokens
- Detailed analysis: 500-1000 tokens
- Comprehensive reports: 1000+ tokens
Cost Considerations
Vision requests consume more tokens than text-only requests:
- Monitor your usage
- Use appropriate token limits
- Consider image size and complexity
Troubleshooting
Image Not Loading
Check file permissions and path:
ls -la path/to/image.jpg
Poor Analysis Quality
- Try a more specific prompt
- Ensure image quality is good
- Increase max-tokens for more detailed responses
Model Errors
If you get model-related errors:
# Check current model
cgip config --get model
# Set to a known vision model
cgip config --set model=gpt-4o
Integration Examples
Development Workflow
# Analyze UI mockups
cgip image --file mockup.png "What UI elements are shown and how should they be implemented?"
# Debug visual issues
cgip image --file bug_screenshot.png "What's wrong with this UI layout?"
Documentation
# Generate image descriptions for documentation
cgip image --file feature_screenshot.png "Write a caption for this screenshot" --max-tokens 100
Data Analysis
# Analyze charts and graphs
cgip image --file sales_chart.png "What insights can you derive from this sales data?"
Next Steps
- Learn about TTS command to convert image analysis to speech
- Explore agent command for automated workflows involving images
- Try combining with other features for powerful multi-modal workflows
TTS Command
The `tts` command allows you to convert text to speech using OpenAI's Text-to-Speech models. This feature enables you to generate high-quality audio from text input with various voice options and customization settings.
Overview
The TTS subcommand reads text from command arguments, stdin, or both. When both are provided, it combines the stdin text with the argument text (stdin first, then argument), making it easy to pipe content from other commands and add additional text.
Basic Usage
cgip tts "Hello, this is a test of text-to-speech functionality"
Command Help
Convert text to speech using OpenAI's TTS models
Usage: cgip tts [OPTIONS] [TEXT]
Arguments:
[TEXT] Text to convert to speech. If not provided, reads from stdin
Options:
-m, --model <MODEL> TTS model to use [default: tts-1]
-v, --voice <VOICE> Voice to use for speech synthesis [default: alloy]
-o, --output <OUTPUT> Output file path for the audio [default: speech.mp3]
-i, --instructions <INSTRUCTIONS> Instructions for the voice (how to speak)
-f, --format <FORMAT> Audio format (mp3, opus, aac, flac) [default: mp3]
-s, --speed <SPEED> Speed of speech (0.25 to 4.0) [default: 1.0]
-h, --help Print help
-V, --version Print version
Available Options
Models
- tts-1: Standard quality, faster generation
- tts-1-hd: High definition quality, slower generation
Voices
- alloy: Neutral, balanced voice
- echo: Male voice with clear pronunciation
- fable: British accent, expressive
- onyx: Deep, authoritative voice
- nova: Young, energetic voice
- shimmer: Soft, gentle voice
Audio Formats
- MP3 (.mp3): Most compatible, good compression
- OPUS (.opus): Excellent compression, modern codec
- AAC (.aac): Good quality and compression, widely supported
- FLAC (.flac): Lossless compression, larger file size
Speed Settings
- 0.25: Very slow, good for learning or accessibility
- 0.5: Slow, clear pronunciation
- 1.0: Normal speed (default)
- 1.5: Slightly faster, efficient listening
- 2.0: Fast, good for familiar content
- 4.0: Very fast, maximum speed
Basic Examples
Simple Text to Speech
cgip tts "Welcome to our application!"
Custom Voice and Output
cgip tts --voice nova --output welcome.mp3 "Welcome to our application!"
High-Definition Model
cgip tts --model tts-1-hd --speed 0.8 --voice shimmer "This is spoken slowly and clearly"
Input Handling
The TTS command handles text input in the following priority:
- Text argument only: Uses the provided text argument
- Stdin only: Uses text from stdin (when no text argument is provided)
- Both stdin and text argument: Combines stdin text with argument text (stdin first, then argument)
- Neither: Shows error message
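That priority can be sketched as a small shell function. The exact separator cgip inserts between the two pieces is an assumption here:

```shell
# Hypothetical sketch of the TTS input-combination rule described above
combine_tts_input() {
  stdin_text="$1"; arg_text="$2"
  if [ -n "$stdin_text" ] && [ -n "$arg_text" ]; then
    echo "$stdin_text $arg_text"      # both: stdin first, then the argument
  elif [ -n "$stdin_text" ]; then
    echo "$stdin_text"                # stdin only
  elif [ -n "$arg_text" ]; then
    echo "$arg_text"                  # argument only
  else
    echo "error: no input provided" >&2
    return 1
  fi
}

combine_tts_input "text from stdin" "text from the argument"
# → text from stdin text from the argument
```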
Reading from Stdin
echo "This text comes from stdin" | cgip tts --voice echo --output stdin_speech.mp3
Combining Stdin and Argument
# Combines both inputs
hostname | cgip tts "and that's all she wrote" --output combined.mp3
# Result: your machine's hostname followed by "and that's all she wrote"
Reading from Files
cat announcement.txt | cgip tts --voice fable --output announcement.mp3
Voice Characteristics
alloy
- Neutral and versatile
- Works well for most content types
- Balanced tone and pace
echo
- Clear male voice
- Good for professional content
- Authoritative but approachable
fable
- British accent
- Expressive and engaging
- Good for storytelling
onyx
- Deep male voice
- Authoritative and confident
- Professional presentations
nova
- Young and energetic
- Good for casual content
- Upbeat and friendly
shimmer
- Soft and gentle
- Good for calming content
- Soothing and peaceful
Audio Format Details
MP3
- Most widely supported format
- Good balance of quality and file size
- Compatible with virtually all devices
OPUS
- Modern, efficient codec
- Excellent compression
- Smaller file sizes than MP3
- Good for web applications
AAC
- High quality audio
- Good compression
- Widely supported by modern devices
- Apple ecosystem preferred format
FLAC
- Lossless compression
- Highest audio quality
- Larger file sizes
- Good for archival purposes
Advanced Usage
Custom Instructions
cgip tts --instructions "Speak in a cheerful and enthusiastic tone" "Today is a wonderful day!"
Speed Adjustments
# Slow for learning
cgip tts --speed 0.5 --voice onyx "Technical explanation spoken slowly"
# Fast for quick review
cgip tts --speed 1.5 --voice nova "Quick summary of key points"
# Very fast for familiar content
cgip tts --speed 2.0 "Content you want to review quickly"
Different Formats for Different Uses
# High quality for archival
cgip tts --format flac --output archive.flac "Important announcement"
# Compressed for web
cgip tts --format opus --output web.opus "Website audio content"
# Compatible for sharing
cgip tts --format mp3 --output share.mp3 "Content to share widely"
Common Workflows
Documentation to Audio
# Convert README to audio
cat README.md | cgip tts --voice fable --output readme_audio.mp3
Code Comments to Audio
# Extract and vocalize code comments
grep -n "#" script.py | cgip tts --voice echo --output code_comments.mp3
Email or Message Reading
# Read important messages aloud
cat important_email.txt | cgip tts --voice alloy --speed 0.8
Multilingual Content
# TTS works with multiple languages
cgip tts --voice nova "Bonjour, comment allez-vous?"
Batch Processing
# Process multiple text files
for file in *.txt; do
cgip tts --voice alloy --output "${file%.txt}.mp3" < "$file"
done
Integration with Other Commands
Combining with Chat GipiTTY
# Generate text and convert to speech
cgip "Write a short story about a robot" | cgip tts --voice fable --output story.mp3
# Analyze code and speak the explanation
cgip "explain this function" -f script.py | cgip tts --voice echo
Pipeline Processing
# Process data and create audio summary
cat data.csv | cgip "summarize this data" | cgip tts --voice nova --output summary.mp3
System Integration
# Create audio notifications
echo "Build completed successfully" | cgip tts --voice alloy --output notification.mp3
play notification.mp3
Error Handling
Common Issues
No text provided:
Error: No text provided via argument or stdin
Solution: Provide text via argument or pipe it through stdin.
Invalid speed value:
Error: Speed must be between 0.25 and 4.0
Solution: Use a speed value within the valid range.
Invalid audio format:
Error: Unsupported format. Use: mp3, opus, aac, flac
Solution: Use one of the supported audio formats.
API errors: Check your API key and network connection.
Best Practices
Choose Appropriate Voice
- alloy: General purpose, neutral content
- echo: Professional, technical content
- fable: Engaging, storytelling content
- onyx: Authoritative, formal content
- nova: Casual, energetic content
- shimmer: Calming, gentle content
Optimize Speed
- Use slower speeds (0.5-0.8) for complex or technical content
- Use normal speed (1.0) for general content
- Use faster speeds (1.2-2.0) for familiar or review content
Select Appropriate Format
- MP3: General use, sharing, compatibility
- OPUS: Web applications, smaller files
- AAC: Apple devices, streaming
- FLAC: Archival, highest quality
Use Instructions Effectively
# For technical content
cgip tts --instructions "Speak clearly and pause at technical terms"
# For emotional content
cgip tts --instructions "Speak with appropriate emotion and emphasis"
# For educational content
cgip tts --instructions "Speak slowly and emphasize key points"
Cost Considerations
- TTS is charged per character of input text
- HD models (tts-1-hd) cost more than standard models (tts-1)
- Monitor usage to manage costs
- Consider caching frequently used audio files
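Because billing is per character, you can sanity-check the cost of a batch before synthesizing it. The rate below is a placeholder, not an actual OpenAI price; look up the current per-character pricing for tts-1 and tts-1-hd before relying on the numbers.

```python
# Rough TTS cost estimate. RATE_PER_1K_CHARS is a placeholder value,
# not a real price -- substitute the current rate from the pricing page.
RATE_PER_1K_CHARS = 0.015  # placeholder: dollars per 1,000 characters

def estimate_tts_cost(text: str, rate_per_1k: float = RATE_PER_1K_CHARS) -> float:
    """Return an estimated cost in dollars for synthesizing `text`."""
    return len(text) / 1000 * rate_per_1k

script = "Welcome to our application!" * 100  # 2,700 characters
print(f"{len(script)} chars, estimated ${estimate_tts_cost(script):.4f}")
```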
Troubleshooting
Audio Quality Issues
- Try the HD model:
--model tts-1-hd
- Adjust speed for clarity:
--speed 0.8
- Use appropriate voice for content type
File Output Issues
- Check write permissions in output directory
- Ensure output path is valid
- Verify disk space availability
Network Issues
- Check internet connection
- Verify API key is set correctly
- Check API rate limits
Environment Requirements
Make sure you have:
- OPENAI_API_KEY environment variable set
- Access to OpenAI's TTS models in your account
- Sufficient API credits (TTS requests are charged per character)
Custom API Endpoints
You can use custom API endpoints:
export OPENAI_BASE_URL=https://your-custom-api.com
cgip tts "Hello world"
Next Steps
- Learn about embedding command for text analysis
- Explore agent command for automated workflows
- Try combining with image analysis for multi-modal content
Embedding Command
The embedding subcommand generates embedding vectors for text using OpenAI's embedding models. Embeddings are dense numerical representations of text that capture semantic meaning and can be used for similarity search, clustering, and other machine learning tasks.
Overview
Embeddings convert text into high-dimensional vectors that represent the semantic meaning of the text. Similar texts will have similar embeddings, making them useful for:
- Semantic search and similarity matching
- Text clustering and classification
- Recommendation systems
- Information retrieval
- Machine learning feature vectors
Usage
cgip embedding [OPTIONS] [TEXT]
Arguments
[TEXT] - Text to generate embeddings for. If not provided, reads from stdin
Options
-m, --model <MODEL> - Embedding model to use (default: text-embedding-3-small)
-o, --output <OUTPUT> - Output file path. If not set, prints to stdout
Basic Examples
Generate Embeddings for Text
# Basic usage with text argument
cgip embedding "Hello, world!"
Output:
-0.0123456789, 0.0234567890, -0.0345678901, 0.0456789012, ...
Read from stdin
# Pipe text to embedding command
echo "This is example text" | cgip embedding
Save to File
# Save embedding vector to file
cgip embedding "Important text" --output embedding.txt
# Read from stdin and save to file
echo "Text from stdin" | cgip embedding --output vector.txt
Model Options
Available Models
text-embedding-3-small (default)
- Dimensions: 1536
- Cost: Lower cost option
- Performance: Good for most use cases
- Speed: Faster processing
cgip embedding "sample text" --model text-embedding-3-small
text-embedding-3-large
- Dimensions: 3072
- Cost: Higher cost
- Performance: Best accuracy and quality
- Speed: Slower processing
cgip embedding "sample text" --model text-embedding-3-large
text-embedding-ada-002 (legacy)
- Dimensions: 1536
- Status: Legacy model, still supported
- Note: Consider upgrading to newer models
cgip embedding "sample text" --model text-embedding-ada-002
Advanced Examples
Batch Processing
# Process multiple texts from a file
cat texts.txt | while IFS= read -r line; do
echo "$line" | cgip embedding --output "embeddings/$(echo "$line" | tr ' ' '_').txt"
done
Compare Text Similarity
# Generate embeddings for comparison
cgip embedding "The cat sat on the mat" --output cat_text.txt
cgip embedding "A feline rested on the rug" --output feline_text.txt
# Use external tools to calculate cosine similarity
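Since cgip writes each embedding as a single line of comma-separated floats, the "external tool" can be a few lines of pure Python with no dependencies. This is a sketch; `parse_vec` assumes the output format shown in the Output Formats section below.

```python
import math

def parse_vec(text: str) -> list[float]:
    """Parse cgip's embedding output: comma-separated floats."""
    return [float(x) for x in text.split(",") if x.strip()]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product over the product of magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# In practice: a = parse_vec(open("cat_text.txt").read()), etc.
a = parse_vec("0.1, 0.2, 0.3")
b = parse_vec("0.1, 0.2, 0.3")
print(round(cosine_similarity(a, b), 6))  # identical vectors -> 1.0
```

Values near 1.0 mean the two texts are semantically similar; values near 0.0 mean they are unrelated.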
Document Processing
# Process documentation files
for file in docs/*.md; do
filename=$(basename "$file" .md)
cgip embedding --output "embeddings/${filename}.vec" -f "$file"
done
Search Index Creation
# Create embeddings for search index
find . -name "*.txt" -exec sh -c '
filename=$(basename "$1" .txt)
cgip embedding --output "search_index/${filename}.vec" -f "$1"
' _ {} \;
Input Handling
The embedding command handles input in the following priority:
- Text argument only: Uses the provided text argument
- Stdin only: Uses text from stdin (when no text argument is provided)
- Both stdin and text argument: Combines stdin text with argument text
- Neither: Shows error message
Examples of Input Combinations
# Text argument only
cgip embedding "Hello world"
# Stdin only
echo "Hello world" | cgip embedding
# Both (combines stdin + argument)
echo "Hello" | cgip embedding "world"
# Results in embedding for: "Hello world"
Output Formats
Standard Output (default)
Comma-separated floating point numbers:
-0.012345, 0.023456, -0.034567, 0.045678, ...
File Output
Same format but written to specified file:
cgip embedding "text" --output vector.txt
cat vector.txt
# -0.012345, 0.023456, -0.034567, 0.045678, ...
Integration Examples
With Python
# Process embedding output in Python
import subprocess
import numpy as np
def get_embedding(text):
result = subprocess.run(
['cgip', 'embedding', text],
capture_output=True,
text=True
)
return np.array([float(x.strip()) for x in result.stdout.split(',')])
embedding = get_embedding("Hello world")
print(f"Embedding shape: {embedding.shape}")
With Shell Scripts
#!/bin/bash
# Generate embeddings for all text files
for file in *.txt; do
echo "Processing $file..."
basename=$(basename "$file" .txt)
cgip embedding --output "${basename}.vec" -f "$file"
echo "Saved embedding to ${basename}.vec"
done
With JSON Processing
# Create JSON structure with embeddings
jq -n --arg text "Hello world" --argjson embedding "$(cgip embedding "Hello world" | jq -R 'split(",") | map(tonumber)')" '{
text: $text,
embedding: $embedding,
timestamp: now
}'
Use Cases
Semantic Search
# Index documents
for doc in documents/*.txt; do
cgip embedding -f "$doc" --output "index/$(basename "$doc").vec"
done
# Search for similar documents (requires similarity calculation)
cgip embedding "search query" --output query.vec
Content Recommendation
# Generate embeddings for user preferences
cgip embedding "user likes science fiction and technology" --output user_profile.vec
# Generate embeddings for content items
cgip embedding "article about artificial intelligence" --output content_item.vec
Text Classification
# Generate embeddings for training data
while IFS=',' read -r label text; do
echo "$text" | cgip embedding --output "training/${label}_$(date +%s).vec"
done < training_data.csv
Error Handling
No Input Provided
cgip embedding
# Error: No text provided. Please provide text as an argument or via stdin.
Invalid Model
cgip embedding "text" --model invalid-model
# Error: Model 'invalid-model' not found
Output File Issues
cgip embedding "text" --output /root/readonly.txt
# Error: Cannot write to output file: Permission denied
Best Practices
1. Choose the Right Model
- Use text-embedding-3-small for most applications
- Use text-embedding-3-large when accuracy is critical
- Consider cost vs. performance trade-offs
2. Preprocessing Text
# Clean and normalize text before embedding
echo "Text with extra spaces" | tr -s ' ' | cgip embedding
3. Batch Processing
# Process multiple texts efficiently
cat input_texts.txt | cgip embedding --output batch_embeddings.txt
4. Error Handling in Scripts
#!/bin/bash
if ! cgip embedding "test" > /dev/null 2>&1; then
echo "Error: Embedding service not available"
exit 1
fi
Performance Considerations
Token Limits
- Each model has input token limits
- Very long texts may be truncated
- Consider splitting long documents
API Rate Limits
- Be mindful of API rate limits for batch processing
- Add delays between requests if needed
- Consider caching results
Storage
- Embedding vectors can be large (1536-3072 dimensions)
- Consider compression for large-scale storage
- Use appropriate data types (float32 vs float64)
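The float32 vs. float64 choice roughly halves storage per vector. The standard-library array module makes the difference concrete for a 1536-dimension vector (the size of text-embedding-3-small output):

```python
from array import array

DIM = 1536  # text-embedding-3-small dimensionality

vec = [0.0] * DIM
as_f32 = array("f", vec)  # 4 bytes per element
as_f64 = array("d", vec)  # 8 bytes per element

print(as_f32.itemsize * len(as_f32))  # 6144 bytes
print(as_f64.itemsize * len(as_f64))  # 12288 bytes
```

At 3072 dimensions (text-embedding-3-large) the per-vector cost doubles again, which adds up quickly across a large corpus.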
Troubleshooting
Common Issues
API Key Issues:
# Verify API key is set
echo $OPENAI_API_KEY
# Test with simple embedding
cgip embedding "test"
Model Not Available:
# List available models
cgip --list-models
# Use default model
cgip embedding "text" # Uses text-embedding-3-small
Output File Problems:
# Check directory permissions
ls -la output_directory/
# Use full path
cgip embedding "text" --output /full/path/to/output.txt
Agent Command
The agent subcommand provides an agentic workflow that lets the model run shell commands on your behalf. It offers a powerful way to delegate complex tasks that require multiple command executions and iterative problem-solving.
Overview
The agent subcommand gives the AI model access to a special execute tool that allows it to run shell commands in a specified directory. The model can use command output as feedback to make decisions about what to do next, creating an autonomous workflow for task completion.
Usage
cgip agent [OPTIONS] <DIRECTORY> <INSTRUCTION>
Arguments
<DIRECTORY> - Directory the agent is allowed to operate in (required)
<INSTRUCTION> - Natural language instruction describing the goal (required)
Options
--input <FILES> - Comma-separated list of files whose contents should be added to the context
--max-actions <N> - Maximum number of commands the agent will execute before stopping (default: 10)
How It Works
When invoked, the agent:
- Receives your instruction and any provided file contents
- Gets access to an execute tool that can run shell commands
- Runs commands iteratively, using output to inform next steps
- Continues until the task is complete or max actions reached
- Provides a summary of all commands executed
The model has access to this tool definition:
{
"type": "function",
"function": {
"name": "execute",
"description": "Run a shell command",
"parameters": {
"type": "object",
"properties": {
"command": {"type": "string"}
},
"required": ["command"]
}
}
}
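To make the loop concrete, here is a minimal sketch of how a client could service an execute tool call: parse the JSON arguments the model supplies, run the command, and return the combined output as feedback for the next turn. This is illustrative only, not cgip's actual implementation, and it performs none of the directory sandboxing described below.

```python
import json
import subprocess

def handle_execute(tool_call_arguments: str) -> str:
    """Run the shell command from an `execute` tool call and return its output.

    `tool_call_arguments` is the JSON string the model supplies for the tool,
    e.g. '{"command": "ls -la"}'.
    """
    args = json.loads(tool_call_arguments)
    result = subprocess.run(
        args["command"], shell=True, capture_output=True, text=True
    )
    # Return both streams so the model can react to errors as well.
    return result.stdout + result.stderr

# Simulated tool call from the model:
output = handle_execute('{"command": "echo hello from the agent"}')
print(output)
```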
Basic Examples
Simple Directory Listing
cgip agent . "list the current directory"
The agent will execute ls and return the results.
Project Analysis
cgip agent /path/to/project "analyze the project structure and identify the main components"
The agent might:
- Run find . -name "*.py" | head -10 to find Python files
- Run cat requirements.txt to check dependencies
- Run ls -la to see the overall structure
- Provide a comprehensive analysis based on findings
Build and Test
cgip agent . "build the project and run tests, report any issues"
The agent could:
- Detect the project type (e.g., cargo build for Rust)
- Run the build command
- Execute tests if build succeeds
- Analyze any errors and suggest fixes
Advanced Examples
Code Quality Analysis
cgip agent src/ "analyze code quality and suggest improvements" --input "package.json,README.md"
With file input providing context, the agent might:
- Run linting tools appropriate for the language
- Check for security vulnerabilities
- Analyze code complexity
- Review documentation coverage
Environment Setup
cgip agent . "set up development environment for this project" --max-actions 15
The agent could:
- Detect project requirements
- Install dependencies
- Set up configuration files
- Run initial setup commands
- Verify everything works
Debugging Assistance
cgip agent . "investigate why tests are failing and fix the issues"
The agent might:
- Run the test suite to see failures
- Examine failing test output
- Look at relevant source files
- Make necessary fixes
- Re-run tests to verify fixes
Options in Detail
--input <FILES>
Provide additional context by including file contents:
# Single file
cgip agent . "optimize this code" --input "src/main.py"
# Multiple files
cgip agent . "review these components" --input "src/user.py,src/auth.py,tests/test_auth.py"
# Configuration files for context
cgip agent . "deploy this application" --input "docker-compose.yml,package.json"
--max-actions <N>
Control how many commands the agent can execute:
# Quick tasks
cgip agent . "check project status" --max-actions 3
# Complex tasks
cgip agent . "refactor the codebase" --max-actions 20
# Default is 10 actions
cgip agent . "analyze and improve performance"
Safety Features
Directory Restriction
The agent can only operate within the specified directory:
# Agent limited to current directory
cgip agent . "clean up temporary files"
# Agent limited to specific subdirectory
cgip agent src/ "refactor the source code"
This prevents the agent from:
- Accessing files outside the specified directory
- Making system-wide changes
- Affecting other projects
Action Limits
The --max-actions limit prevents runaway execution:
- Stops infinite loops
- Limits resource usage
- Ensures predictable completion time
Command Visibility
All executed commands are:
- Shown in real-time as they run
- Included in the final summary
- Available for review and audit
Best Practices
1. Start with Clear Instructions
# Good: Specific and actionable
cgip agent . "find all Python files with TODO comments and create a task list"
# Less effective: Vague goal
cgip agent . "improve the project"
2. Use Appropriate Directory Scope
# Focused scope for specific tasks
cgip agent src/ "refactor utility functions"
# Broader scope for project-wide tasks
cgip agent . "set up CI/CD pipeline"
3. Provide Relevant Context
# Include relevant files for context
cgip agent . "update dependencies" --input "package.json,requirements.txt"
4. Set Appropriate Action Limits
# Simple tasks: low limit
cgip agent . "check syntax errors" --max-actions 5
# Complex tasks: higher limit
cgip agent . "migrate database schema" --max-actions 15
Common Use Cases
Development Tasks
# Code generation
cgip agent . "create unit tests for all functions in src/utils.py"
# Refactoring
cgip agent src/ "extract common code into shared utilities"
# Documentation
cgip agent . "generate API documentation from code comments"
Project Management
# Dependency management
cgip agent . "audit and update all dependencies to latest versions"
# Project setup
cgip agent . "initialize project with standard configuration files"
# Cleanup
cgip agent . "remove unused files and clean up directory structure"
Analysis and Reporting
# Code analysis
cgip agent . "analyze code complexity and generate report"
# Security audit
cgip agent . "scan for potential security vulnerabilities"
# Performance analysis
cgip agent . "profile the application and identify bottlenecks"
Output and Summary
After completion, the agent provides:
Real-time Output
Commands and their output are shown as they execute:
Executing: ls -la
total 24
drwxr-xr-x 5 user user 4096 Jan 15 10:30 .
drwxr-xr-x 3 user user 4096 Jan 15 10:25 ..
-rw-r--r-- 1 user user 123 Jan 15 10:30 main.py
...
Executing: python -m pytest
===== test session starts =====
...
Final Summary
A summary of all executed commands:
Agent completed successfully. Commands executed:
1. ls -la
2. python -m pytest
3. cat test_results.txt
Task completed: All tests are passing and the project structure looks good.
Troubleshooting
Permission Issues
If the agent can't execute commands:
# Check directory permissions
ls -la /path/to/directory
# Ensure you have execute permissions
chmod +x /path/to/directory
Command Not Found
If commands fail:
# Check if required tools are installed
which python
which npm
which cargo
# Install missing dependencies
Action Limit Reached
If the agent stops due to action limits:
# Increase the limit for complex tasks
cgip agent . "complex task" --max-actions 20
# Or break down into smaller tasks
cgip agent . "first part of complex task" --max-actions 10
Integration Examples
With CI/CD
# In a CI script
cgip agent . "run all tests and generate coverage report" --max-actions 5
With Development Workflow
# Pre-commit hook
cgip agent . "check code quality and fix simple issues" --max-actions 8
With Documentation
# Documentation generation
cgip agent . "update README with current project status" --input "package.json,src/main.py"
Upgrade Command
The upgrade subcommand provides a convenient way to upgrade Chat Gipitty to the latest release directly from the command line. This ensures you always have access to the newest features and bug fixes.
Overview
The upgrade command automatically:
- Checks for the latest version available
- Downloads and installs the new version
- Preserves your existing configuration
- Provides feedback on the upgrade process
Usage
cgip upgrade
The upgrade command takes no additional arguments or options.
How It Works
When you run cgip upgrade, the following process occurs:
- Version Check: Compares your current version with the latest available release
- Download: Downloads the appropriate binary for your system
- Installation: Replaces the current binary with the new version
- Verification: Confirms the upgrade was successful
Examples
Basic Upgrade
cgip upgrade
Example output:
Current version: 0.4.5
Latest version: 0.5.0
Downloading cgip v0.5.0...
✓ Download complete
✓ Installation successful
Chat Gipitty has been upgraded to v0.5.0
Already Up to Date
cgip upgrade
When already on the latest version:
Current version: 0.5.0
You are already running the latest version of cgip.
Installation Methods
The upgrade command works differently depending on how Chat Gipitty was originally installed:
Cargo Installation (Recommended)
If installed via cargo install cgip, the upgrade command will:
- Use cargo install --force cgip to reinstall with the latest version
- Maintain all cargo-specific configurations
- Work seamlessly with the Rust toolchain
Manual Installation
For manual installations, the upgrade command will:
- Download the appropriate binary for your platform
- Replace the existing binary in place
- Preserve file permissions
Platform Support
The upgrade command supports automatic upgrades on:
- Linux (x86_64, ARM64)
- macOS (Intel, Apple Silicon)
- Windows (x86_64)
Platform detection is automatic based on your system.
Prerequisites
Permissions
The upgrade command needs appropriate permissions to:
- Write to the installation directory
- Replace the existing binary
- Create temporary files for download
Network Access
- Internet connection for downloading releases
- Access to GitHub releases API
- Access to release asset downloads
System Requirements
- Same system requirements as the new version
- Sufficient disk space for download and installation
Configuration Preservation
The upgrade process preserves:
- Configuration files: Your config.toml settings remain unchanged
- Session data: Existing sessions are maintained
- Environment variables: All environment settings continue to work
- Shell integration: Existing shell aliases and functions remain functional
Safety Features
Backup
The upgrade process includes safety measures:
- Creates backup of current binary before replacement
- Can rollback if upgrade fails
- Validates new binary before finalizing upgrade
Verification
After upgrade:
- Verifies new binary is functional
- Confirms version number matches expected version
- Tests basic functionality
Troubleshooting
Permission Denied
If you encounter permission errors:
# Linux/macOS: Use sudo if needed
sudo cgip upgrade
# Or change ownership of installation directory
sudo chown -R $(whoami) /usr/local/bin/cgip
Network Issues
For network-related problems:
# Check internet connectivity
ping github.com
# Check if GitHub is accessible
curl -I https://api.github.com/repos/divanvisagie/chat-gipitty/releases/latest
Download Failures
If downloads fail:
- Check available disk space
- Verify write permissions to temp directory
- Try again later (temporary GitHub issues)
Corrupted Download
If the downloaded binary appears corrupted:
- The upgrade will automatically retry
- Check your internet connection stability
- Clear temporary files and try again
Manual Upgrade Alternative
If the automatic upgrade fails, you can upgrade manually:
Via Cargo (Recommended)
cargo install --force cgip
Via GitHub Releases
- Visit the releases page
- Download the appropriate binary for your system
- Replace the existing binary
- Make the new binary executable (Linux/macOS)
Via Package Managers
Some package managers may have their own update mechanisms:
# Homebrew (if available)
brew upgrade cgip
# APT (if available)
sudo apt update && sudo apt upgrade cgip
Upgrade Frequency
Checking for Updates
You can check if updates are available without upgrading:
# Check current version
cgip --version
# Visit releases page or use upgrade command (it will show if updates are available)
cgip upgrade
Recommended Schedule
- Patch releases: Upgrade promptly for bug fixes
- Minor releases: Upgrade when convenient for new features
- Major releases: Review changelog and upgrade when ready
Release Channels
Chat Gipitty follows semantic versioning:
- Patch releases (0.5.1): Bug fixes, security updates
- Minor releases (0.6.0): New features, improvements
- Major releases (1.0.0): Breaking changes, major rewrites
The upgrade command always targets the latest stable release.
Rollback
If you need to rollback after an upgrade:
Automatic Rollback
The upgrade process includes automatic rollback on failure.
Manual Rollback
If you need to manually rollback:
# Reinstall specific version via cargo
cargo install --force cgip --version 0.4.5
# Or download specific version from releases
# and replace the binary manually
Integration with CI/CD
For automated environments:
# Check if upgrade is needed in scripts
CURRENT_VERSION=$(cgip --version | cut -d' ' -f2)
LATEST_VERSION=$(curl -s https://api.github.com/repos/divanvisagie/chat-gipitty/releases/latest | jq -r .tag_name | sed 's/^v//')  # strip a leading "v" if the release tags use one
if [ "$CURRENT_VERSION" != "$LATEST_VERSION" ]; then
echo "Upgrade available: $CURRENT_VERSION -> $LATEST_VERSION"
cgip upgrade
fi
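String inequality only tells you the versions differ, not which one is newer. If your automation needs an actual ordering, compare the version components numerically. This sketch assumes plain MAJOR.MINOR.PATCH tags with an optional leading "v":

```python
def parse_version(tag: str) -> tuple[int, ...]:
    """Turn '0.5.0' or 'v0.5.0' into a comparable tuple of ints."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

def is_newer(latest: str, current: str) -> bool:
    """True if `latest` is strictly newer than `current`."""
    return parse_version(latest) > parse_version(current)

print(is_newer("v0.5.0", "0.4.5"))  # True
print(is_newer("0.4.5", "0.4.5"))   # False
print(is_newer("0.4.10", "0.4.9"))  # True -- numeric, not lexicographic
```

Note that a lexicographic string comparison would get the last case wrong ("0.4.10" < "0.4.9" as strings), which is why the components are compared as integers.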
Best Practices
Before Upgrading
- Read the changelog to understand what's changing
- Backup important sessions if doing major upgrades
- Test in non-production environments first
After Upgrading
- Verify functionality with a simple test query
- Check that your configuration still works
- Review new features in the documentation
Staying Updated
- Subscribe to releases on GitHub for notifications
- Check periodically with cgip upgrade
- Join community channels for upgrade announcements
Security Considerations
The upgrade process:
- Downloads only from official GitHub releases
- Verifies checksums when available
- Uses secure HTTPS connections
- Preserves existing file permissions
Configuration
Chat Gipitty can be configured through multiple methods to customize its behavior for your needs. This page covers all the configuration options available and how to set them.
Configuration Methods
Chat Gipitty supports configuration through:
- Configuration file (config.toml)
- Environment variables
- Command-line options
Configuration is applied in order of precedence:
- Command-line options (highest priority)
- Environment variables
- Configuration file (lowest priority)
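The precedence rule amounts to a first-match lookup across the three layers. This sketch is purely illustrative of the order described above, not cgip's internals:

```python
def resolve(key, cli: dict, env: dict, config: dict, default=None):
    """Return the first value found: CLI flag, then environment, then config file."""
    for layer in (cli, env, config):
        if key in layer:
            return layer[key]
    return default

config_file = {"model": "gpt-4", "base_url": "https://api.openai.com"}
env_vars = {"base_url": "http://localhost:11434/v1"}
cli_flags = {"model": "gpt-3.5-turbo"}

print(resolve("model", cli_flags, env_vars, config_file))     # gpt-3.5-turbo (CLI wins)
print(resolve("base_url", cli_flags, env_vars, config_file))  # http://localhost:11434/v1 (env wins)
```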
Configuration File
Location
The configuration file is located in your system's configuration directory:
- Linux: ~/.config/cgip/config.toml
- macOS: ~/Library/Application Support/cgip/config.toml
- Windows: %APPDATA%\cgip\config.toml
File Format
The configuration file uses TOML format:
# Chat Gipitty Configuration File
# Default model to use
model = "gpt-4"
# Custom API base URL (optional)
base_url = "https://api.openai.com"
# Default system prompt (optional)
system_prompt = "You are a helpful assistant."
# Show progress indicator by default
show_progress = false
# Default to markdown output
markdown = false
# Session configuration
session_name = ""
# File input settings
default_file_extensions = [".txt", ".md", ".rs", ".py", ".js"]
Available Options
Core Settings
model (string)
- Default model to use for queries
- Default: "gpt-4"
- Example: "gpt-3.5-turbo", "gpt-4-turbo"
base_url (string)
- Custom API base URL for alternative providers
- Default: "https://api.openai.com"
- Example: "http://localhost:11434/v1" for Ollama
system_prompt (string)
- Default system prompt for all queries
- Default: None
- Example: "You are a coding assistant"
Display Settings
show_progress (boolean)
- Show progress indicator by default
- Default: false
- Equivalent to -p, --show-progress
markdown (boolean)
- Format output in markdown by default
- Default: false
- Equivalent to -m, --markdown
show_context (boolean)
- Show context by default
- Default: false
- Equivalent to -c, --show-context
Session Settings
session_name (string)
- Default session name
- Default: Empty (no sessions)
- Can be overridden by CGIP_SESSION_NAME environment variable
no_session (boolean)
- Disable sessions by default
- Default: false
- Equivalent to -n, --no-session
Using the Config Command
The config subcommand allows you to view and modify configuration values:
View Configuration
# View all configuration
cgip config
# View specific setting
cgip config --get model
cgip config --get base_url
Set Configuration
# Set model
cgip config --set model=gpt-4-turbo
# Set custom API endpoint
cgip config --set base_url=http://localhost:11434/v1
# Set system prompt
cgip config --set system_prompt="You are a helpful coding assistant"
# Enable progress indicator by default
cgip config --set show_progress=true
Examples
# Configure for OpenAI GPT-4
cgip config --set model=gpt-4
cgip config --set base_url=https://api.openai.com
# Configure for local Ollama
cgip config --set model=llama2
cgip config --set base_url=http://localhost:11434/v1
# Configure for daily sessions
cgip config --set session_name=$(date -I)
# Set a default system prompt
cgip config --set system_prompt="You are an expert programmer. Provide concise, accurate answers."
Environment Variables
Environment variables override configuration file settings:
Required Variables
OPENAI_API_KEY
- Your API key for the configured provider
- Required for most functionality
Optional Variables
OPENAI_BASE_URL
- Custom API endpoint
- Overrides base_url in config file
CGIP_SESSION_NAME
- Session name for session management
- Overrides session_name in config file
Examples
# Basic OpenAI setup
export OPENAI_API_KEY="sk-your-key-here"
# Custom provider setup
export OPENAI_API_KEY="your-provider-key"
export OPENAI_BASE_URL="https://api.provider.com/v1"
# Session management
export CGIP_SESSION_NAME=$(date -I)
Command-Line Options
Command-line options have the highest precedence and override both environment variables and configuration file settings.
Model Selection
# Override configured model
cgip -M gpt-3.5-turbo "query"
# List available models
cgip -l
Output Control
# Show progress (overrides config)
cgip -p "long running query"
# Show context (overrides config)
cgip -c "query with context"
# Markdown output (overrides config)
cgip -m "formatted query"
Session Control
# Disable session for this query
cgip -n "isolated query"
Configuration Examples
Developer Setup
Configuration for software development:
# config.toml
model = "gpt-4"
system_prompt = "You are an expert software developer. Provide concise, practical solutions."
show_progress = true
show_context = false
# Environment
export OPENAI_API_KEY="your-key"
export CGIP_SESSION_NAME="dev-$(date +%Y%m%d)"
Data Analysis Setup
Configuration for data analysis tasks:
# config.toml
model = "gpt-4-turbo"
system_prompt = "You are a data analyst. Focus on insights and actionable recommendations."
markdown = true
show_progress = true
Local AI Setup
Configuration for local AI models via Ollama:
# config.toml
model = "llama2"
base_url = "http://localhost:11434/v1"
show_progress = true
# Environment (Ollama doesn't need real API key)
export OPENAI_API_KEY="ollama"
Multi-Provider Setup
Using shell functions for different providers:
# In ~/.bashrc or ~/.zshrc
# OpenAI function
openai() {
OPENAI_API_KEY="$OPENAI_KEY" \
OPENAI_BASE_URL="https://api.openai.com" \
cgip -M gpt-4 "$@"
}
# Local model function (avoid the name `local`, which is a shell builtin)
local_ai() {
OPENAI_API_KEY="ollama" \
OPENAI_BASE_URL="http://localhost:11434/v1" \
cgip -M llama2 "$@"
}
# Claude function (via proxy)
claude() {
OPENAI_API_KEY="$CLAUDE_KEY" \
OPENAI_BASE_URL="https://claude-proxy.com/v1" \
cgip -M claude-3 "$@"
}
Configuration Validation
Check Current Configuration
# View all current settings
cgip config
# Test configuration
cgip "test query"
# Verify model availability
cgip -l
Common Configuration Issues
Invalid Model:
cgip config --set model=gpt-4
cgip -l # Verify model is available
Wrong API Endpoint:
cgip config --set base_url=https://api.openai.com
# Test with a simple query
cgip "hello"
Missing API Key:
export OPENAI_API_KEY="your-key"
# Or check if it's set
echo $OPENAI_API_KEY
Advanced Configuration
Project-Specific Configuration
Create project-specific configuration with shell functions:
# In project directory
project_cgip() {
local config_file="$(pwd)/.cgip.toml"
if [[ -f "$config_file" ]]; then
CGIP_CONFIG_FILE="$config_file" cgip "$@"
else
cgip "$@"
fi
}
Conditional Configuration
Configure based on environment:
# In ~/.bashrc or ~/.zshrc
if [[ "$ENVIRONMENT" == "development" ]]; then
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"
else
export OPENAI_API_KEY="$PRODUCTION_OPENAI_KEY"
fi
Configuration Templates
Save common configurations as templates:
# Save current config as template
cp ~/.config/cgip/config.toml ~/.config/cgip/templates/dev.toml
# Load template
cp ~/.config/cgip/templates/dev.toml ~/.config/cgip/config.toml
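The two cp commands above can be wrapped in a small switcher function. This is a sketch, not a cgip feature: the function name and the templates directory layout are our own conventions.

```shell
# Switch the active config to a saved template (hypothetical helper;
# the templates/ directory convention matches the cp example above).
cgip_template() {
  local dir="${XDG_CONFIG_HOME:-$HOME/.config}/cgip"
  if [ -f "$dir/templates/$1.toml" ]; then
    cp "$dir/templates/$1.toml" "$dir/config.toml"
    echo "Loaded template: $1"
  else
    echo "No such template: $1" >&2
    return 1
  fi
}

# Usage: cgip_template dev
```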
Troubleshooting Configuration
Debug Configuration
# Check where config file should be
cgip config --get model 2>&1 | grep -i "config"
# Verify environment variables
env | grep -E "(OPENAI|CGIP)"
# Test with explicit options
cgip -M gpt-3.5-turbo "test"
Reset Configuration
# Remove config file to reset to defaults
rm ~/.config/cgip/config.toml
# Or reset specific settings
cgip config --set model=gpt-4
cgip config --set base_url=https://api.openai.com
Configuration Precedence Testing
# Command-line options override environment variables, which override the config file
# 1. Set a default in the config file
cgip config --set model=gpt-3.5-turbo
# 2. Override it for a single query from the command line
cgip -M gpt-4-turbo "test"
# Environment variables such as OPENAI_BASE_URL override base_url from the config file the same way
Security Considerations
API Key Security
- Never commit API keys to version control
- Use environment variables for sensitive data
- Consider using secret management systems
- Rotate keys regularly
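One way to follow these guidelines is to fetch the key from a secret manager at shell startup instead of hard-coding it. This sketch assumes the `pass` password manager with an entry named openai/api-key; substitute your own tool and entry name.

```
# In ~/.bashrc or ~/.zshrc (assumes `pass` is installed and configured)
export OPENAI_API_KEY="$(pass show openai/api-key)"
```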
Configuration File Permissions
# Secure config file
chmod 600 ~/.config/cgip/config.toml
# Verify permissions
ls -la ~/.config/cgip/config.toml
Environment Variable Security
# Use env files that are gitignored
echo "OPENAI_API_KEY=your-key" > .env
echo ".env" >> .gitignore
source .env
Environment Variables
Chat Gipitty uses several environment variables for configuration. These variables allow you to customize the behavior without modifying configuration files or passing command-line arguments repeatedly.
Required Variables
OPENAI_API_KEY
Required for most functionality
Your OpenAI API key or compatible API key for your chosen provider.
export OPENAI_API_KEY="sk-your-api-key-here"
For other providers:
# Google Gemini
export OPENAI_API_KEY="your-gemini-api-key"
# Anthropic Claude
export OPENAI_API_KEY="your-claude-api-key"
# Mistral AI
export OPENAI_API_KEY="your-mistral-api-key"
Optional Variables
OPENAI_BASE_URL
Default: https://api.openai.com
Specify a custom API endpoint for OpenAI-compatible providers.
# Local Ollama instance
export OPENAI_BASE_URL="http://localhost:11434/v1"
# Google Gemini (via OpenAI-compatible proxy)
export OPENAI_BASE_URL="https://generativelanguage.googleapis.com/v1beta"
# Mistral AI
export OPENAI_BASE_URL="https://api.mistral.ai/v1"
# Anthropic Claude (via proxy)
export OPENAI_BASE_URL="https://api.anthropic.com/v1"
# Custom provider
export OPENAI_BASE_URL="https://your-provider.com/v1"
URL Construction Logic
Chat Gipitty intelligently constructs the API endpoint:
- If your base URL already contains /chat/completions, it is used as-is
- If your base URL ends with /v1 (or a similar version pattern), /chat/completions is appended
- Otherwise, /v1/chat/completions is appended (the standard OpenAI pattern)
Examples:
# These all work correctly:
export OPENAI_BASE_URL="https://api.example.com/v1"
# Results in: https://api.example.com/v1/chat/completions
export OPENAI_BASE_URL="https://api.example.com/v2/chat/completions"
# Results in: https://api.example.com/v2/chat/completions (used as-is)
export OPENAI_BASE_URL="https://api.example.com"
# Results in: https://api.example.com/v1/chat/completions
CGIP_SESSION_NAME
Default: No session management
Controls session management. The scope of a session — per terminal, per day, per project — depends on how this value is generated.
# New session for each terminal (using uuid)
export CGIP_SESSION_NAME=$(uuid)
# Daily sessions (same session for entire day)
export CGIP_SESSION_NAME=$(date -I)
# Weekly sessions
export CGIP_SESSION_NAME=$(date +%Y-W%U)
# Project-specific sessions
export CGIP_SESSION_NAME="project-$(basename $PWD)"
# Manual session naming
export CGIP_SESSION_NAME="my-coding-session"
Session Examples
Terminal-specific sessions:
# Add to ~/.bashrc or ~/.zshrc
export CGIP_SESSION_NAME=$(uuid)
Date-based sessions:
# Daily sessions
export CGIP_SESSION_NAME=$(date -I) # 2024-01-15
# Weekly sessions
export CGIP_SESSION_NAME=$(date +%Y-W%U) # 2024-W03
# Monthly sessions
export CGIP_SESSION_NAME=$(date +%Y-%m) # 2024-01
Project-based sessions:
# Use current directory name
export CGIP_SESSION_NAME="project-$(basename $PWD)"
# Use git repository name
export CGIP_SESSION_NAME="git-$(git rev-parse --show-toplevel | xargs basename)"
Configuration in Shell Profiles
Bash (~/.bashrc)
# Basic configuration
export OPENAI_API_KEY="your-api-key-here"
export CGIP_SESSION_NAME=$(uuid)
# Custom provider configuration
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama" # Ollama doesn't require a real key
Zsh (~/.zshrc)
# Basic configuration
export OPENAI_API_KEY="your-api-key-here"
export CGIP_SESSION_NAME=$(uuid)
# Project-aware sessions
export CGIP_SESSION_NAME="$(basename $PWD)-$(date -I)"
Fish (~/.config/fish/config.fish)
# Basic configuration
set -gx OPENAI_API_KEY "your-api-key-here"
set -gx CGIP_SESSION_NAME (uuid)
# Daily sessions
set -gx CGIP_SESSION_NAME (date -I)
Provider-Specific Configurations
OpenAI (Default)
export OPENAI_API_KEY="sk-your-openai-key"
# OPENAI_BASE_URL uses default: https://api.openai.com
Ollama (Local)
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama" # Placeholder, not used by Ollama
Google Gemini
export OPENAI_BASE_URL="https://generativelanguage.googleapis.com/v1beta"
export OPENAI_API_KEY="your-gemini-api-key"
Mistral AI
export OPENAI_BASE_URL="https://api.mistral.ai/v1"
export OPENAI_API_KEY="your-mistral-api-key"
Anthropic Claude (via proxy)
export OPENAI_BASE_URL="https://api.anthropic.com/v1"
export OPENAI_API_KEY="your-claude-api-key"
Advanced Configuration
Multiple API Keys
For different projects or providers:
# Project-specific configuration
if [[ "$PWD" == *"work-project"* ]]; then
export OPENAI_API_KEY="$WORK_OPENAI_KEY"
export OPENAI_BASE_URL="https://work-api.company.com/v1"
else
export OPENAI_API_KEY="$PERSONAL_OPENAI_KEY"
# Use default OpenAI endpoint
fi
Dynamic Session Names
# Function to generate smart session names
cgip_session() {
if git rev-parse --git-dir > /dev/null 2>&1; then
# In a git repository
echo "git-$(git rev-parse --show-toplevel | xargs basename)-$(date +%Y%m%d)"
else
# Not in git, use directory and date
echo "$(basename $PWD)-$(date +%Y%m%d)"
fi
}
export CGIP_SESSION_NAME=$(cgip_session)
Conditional Configuration
# Use local Ollama during development, OpenAI in production
if [[ "$ENVIRONMENT" == "development" ]]; then
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"
else
export OPENAI_API_KEY="$PRODUCTION_OPENAI_KEY"
fi
Troubleshooting
Common Issues
API Key not found:
# Check if variable is set
echo $OPENAI_API_KEY
# Set temporarily for testing
OPENAI_API_KEY="your-key" cgip "test query"
Wrong endpoint:
# Check current base URL
echo $OPENAI_BASE_URL
# Test with explicit URL
OPENAI_BASE_URL="http://localhost:11434/v1" cgip "test query"
Session not working:
# Check session name
echo $CGIP_SESSION_NAME
# Test session functionality
cgip session --view
Debugging Environment
# Show all cgip-related environment variables
env | grep -E "(OPENAI|CGIP)"
# Test configuration
cgip config --get model
cgip config --get base_url
Security Considerations
API Key Security
- Never commit API keys to version control
- Use environment files that are gitignored
- Consider using secret management tools for production
Environment Files
Create a .env file (add it to .gitignore):
# .env file
OPENAI_API_KEY=your-secret-key
OPENAI_BASE_URL=https://api.openai.com
CGIP_SESSION_NAME=my-session
Load with:
# Load environment file
source .env
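When the file holds several variables, set -a exports them all in one pass — a common shell pattern, assuming simple KEY=value lines (quoted or multi-line values need a real parser):

```shell
# Demo: create a sample .env, then export its contents in one go.
cat > .env <<'EOF'
OPENAI_API_KEY=example-key
CGIP_SESSION_NAME=demo-session
EOF

set -a      # auto-export every assignment that follows
. ./.env
set +a

echo "$CGIP_SESSION_NAME"   # -> demo-session
```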
Session Privacy
- Be aware that session names might be visible in process lists
- Use generic session names for sensitive work
- Clear sessions when working with confidential information:
cgip session --clear
Custom API Endpoints
Chat GipiTTY is designed to work with any OpenAI Chat Completions API-compatible provider. You can specify a custom API endpoint by setting the OPENAI_BASE_URL environment variable. If not set, the default is https://api.openai.com.
URL Construction
Chat GipiTTY intelligently constructs the API endpoint:
- If your base URL already contains /chat/completions, it is used as-is
- If your base URL ends with /v1 (or a similar version pattern), /chat/completions is appended
- Otherwise, /v1/chat/completions is appended (the standard OpenAI pattern)
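For illustration, the rules above can be sketched as a plain shell function. This is a sketch only — cgip applies these rules internally, and the function name here is hypothetical:

```shell
# Mimics the documented endpoint-construction rules (illustrative;
# not part of cgip itself).
construct_endpoint() {
  local base="${1%/}"   # drop any trailing slash
  case "$base" in
    */chat/completions) printf '%s\n' "$base" ;;                 # used as-is
    */v[0-9]*)          printf '%s\n' "$base/chat/completions" ;;
    *)                  printf '%s\n' "$base/v1/chat/completions" ;;
  esac
}

construct_endpoint "https://api.example.com/v1"
# -> https://api.example.com/v1/chat/completions
```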
Provider Examples
Local Ollama Instance
export OPENAI_BASE_URL=http://localhost:11434/v1
Google Gemini (via OpenAI-compatible proxy)
export OPENAI_BASE_URL=https://generativelanguage.googleapis.com/v1beta
Mistral AI (via OpenAI-compatible endpoint)
export OPENAI_BASE_URL=https://api.mistral.ai/v1
Anthropic Claude (via OpenAI-compatible endpoint)
export OPENAI_BASE_URL=https://api.anthropic.com/v1
Other OpenAI-compatible Services
export OPENAI_BASE_URL=https://your-provider.com/v1
Custom Endpoint Patterns
If your provider uses a different endpoint pattern, you can specify the full URL:
export OPENAI_BASE_URL=https://custom-api.com/v2/chat/completions
Supported Providers
Chat GipiTTY works with any service that implements the OpenAI Chat Completions API standard:
- OpenAI (ChatGPT, GPT-4, GPT-3.5, etc.)
- Local models via Ollama
- Google Gemini (via OpenAI-compatible endpoints)
- Mistral AI (via OpenAI-compatible endpoints)
- Anthropic Claude (via OpenAI-compatible endpoints)
- Any other provider implementing the OpenAI Chat Completions API standard
Compatibility Notes
Custom OPENAI_BASE_URL values can point to these or other OpenAI-compatible endpoints, but such providers might not implement the complete API, so full compatibility cannot be guaranteed.
As long as your provider implements the OpenAI Chat Completions API standard, Chat GipiTTY will work with it seamlessly.
Authentication
Most providers will still require you to set an API key:
export OPENAI_API_KEY=your_provider_api_key_here
Some providers may use different authentication methods or environment variable names. Consult your provider's documentation for specific authentication requirements.
Advanced Usage
This section covers advanced usage patterns and features for power users who want to get the most out of Chat GipiTTY. These features allow for more sophisticated workflows and customization options.
Overview
Advanced usage includes:
- File Input - Advanced file handling and multiple file processing
- Model Selection - Choosing and switching between different AI models
- System Prompts - Customizing AI behavior with system-level instructions
Key Concepts
Context Management
Advanced users can precisely control how Chat GipiTTY builds context from multiple sources:
# Combine multiple inputs with priority control
cat data.csv | cgip "analyze this data" -f config.yaml --no-session
Custom Workflows
Create sophisticated pipelines that leverage Chat GipiTTY's flexibility:
# Multi-step analysis pipeline
find . -name "*.py" | xargs wc -l | sort -nr | head -10 | cgip "which files need refactoring?"
Provider-Specific Features
Take advantage of unique capabilities offered by different AI providers:
# Use specific models for specialized tasks
cgip -M gpt-4o-search-preview "/search latest Python security vulnerabilities"
cgip -M claude-3-sonnet "provide detailed code analysis" -f complex_algorithm.py
Advanced Configuration
Environment-Based Configuration
Set up different configurations for different environments:
# Development environment
export CGIP_SESSION_NAME="dev-$(date +%Y-%m-%d)"
export OPENAI_BASE_URL="http://localhost:11434/v1"
# Production analysis environment
export CGIP_SESSION_NAME="prod-analysis"
export OPENAI_BASE_URL="https://api.openai.com"
Custom Model Aliases
Create shortcuts for frequently used model configurations:
# Set up aliases in your shell
alias cgip-fast="cgip -M gpt-3.5-turbo"
alias cgip-smart="cgip -M gpt-4o"
alias cgip-local="cgip -M llama2:7b"
Performance Optimization
Token Management
For large inputs, be strategic about token usage:
# Limit context size for faster responses
head -100 large_file.log | cgip "summarize the key issues"
# Use specific prompts to get concise answers
cgip "list the top 3 issues only" -f error_log.txt
Batch Processing
Process multiple files efficiently:
# Process files in batches
for file in *.py; do
echo "=== $file ==="
cgip "review this code" -f "$file" --no-session
echo
done
Integration Patterns
Shell Functions
Create reusable shell functions for common tasks:
# Add to your .bashrc or .zshrc
analyze_logs() {
tail -n ${2:-100} "$1" | cgip "analyze these logs for issues"
}
review_code() {
cgip "review this code for bugs and improvements" -f "$1"
}
explain_command() {
man "$1" | cgip "explain this command in simple terms"
}
Git Hooks
Integrate Chat GipiTTY into your git workflow:
#!/bin/bash
# pre-commit hook
git diff --cached | cgip "review these changes for potential issues" --no-session
Troubleshooting Advanced Usage
Context Too Large
If you hit token limits:
- Use head/tail to limit input size
- Use --no-session to exclude session history
- Be more specific in your prompts
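A rough size check before sending can save a failed request. The helper below uses the common ~4-characters-per-token heuristic for English text; the name and the rule of thumb are approximations of our own, not cgip features:

```shell
# Rough token estimate: ~4 characters per token of English text.
est_tokens() {
  echo $(( $(wc -c < "$1") / 4 ))
}

printf 'hello world\n' > sample.txt   # 12 bytes
est_tokens sample.txt                 # -> 3
```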
Model Compatibility
Different models have different capabilities:
- Vision models for image analysis
- Search-enabled models for web queries
- Local models may have limited features
Performance Issues
To improve response times:
- Choose appropriate models for the task
- Limit context size
- Use local models for simple tasks
For specific details on each advanced feature, see the individual sections in this chapter.
File Input
Chat GipiTTY provides powerful file input capabilities that allow you to include file contents as context for AI analysis. This section covers advanced file handling techniques and best practices.
Basic File Input
The -f or --file flag includes file contents in your query:
cgip "explain this code" -f src/main.py
Multiple File Input
You can include multiple files by using the -f flag multiple times:
cgip "compare these implementations" -f version1.py -f version2.py
File Input with Other Context
File input works seamlessly with other input sources:
# Combine piped input, arguments, and files
git diff | cgip "review these changes" -f CONTRIBUTING.md
Input Priority Order
Chat GipiTTY processes input in this priority order:
- stdin (highest priority)
- Command arguments
- Files (lowest priority)
This means files provide background context while stdin and arguments take precedence.
Advanced File Patterns
Wildcard Expansion
Use shell wildcards to include multiple files:
# Include all Python files (shell expands the wildcard)
cgip "analyze this codebase" -f src/*.py
# Include specific file types
cgip "review configuration" -f config/*.yaml -f config/*.json
Directory Analysis
Analyze entire directories by combining with other tools:
# Analyze all files in a directory
find src/ -name "*.py" | head -5 | xargs -I {} cgip "brief analysis" -f {}
# Get file listing with analysis
ls -la src/ | cgip "what can you tell me about this project structure?" -f README.md
File Type Handling
Text Files
Most text-based files work seamlessly:
# Code files
cgip "review this code" -f app.py -f test.py
# Configuration files
cgip "check this configuration" -f config.yaml -f .env
# Documentation
cgip "summarize these docs" -f README.md -f CHANGELOG.md
Binary Files
Binary files are not supported directly, but you can analyze them indirectly:
# Analyze binary metadata
file suspicious_binary | cgip "what type of file is this?"
# Check binary dependencies
ldd my_program | cgip "analyze these dependencies"
Large Files
For large files, consider limiting content:
# First 100 lines
head -100 large_log.txt | cgip "analyze these log entries"
# Last 50 lines
tail -50 large_log.txt | cgip "what are the recent issues?"
# Specific sections
sed -n '100,200p' large_file.txt | cgip "analyze this section"
Practical Examples
Code Review Workflow
# Review changes with context
git diff HEAD~1 | cgip "review these changes" -f README.md -f package.json
# Analyze function with its tests
cgip "review this function and its tests" -f src/utils.py -f tests/test_utils.py
Configuration Analysis
# Check configuration consistency
cgip "are these configs consistent?" -f prod.yaml -f staging.yaml -f dev.yaml
# Analyze configuration with documentation
cgip "validate this configuration" -f config.yaml -f config-schema.json
Documentation Tasks
# Generate documentation from code
cgip "create API docs for these modules" -f api/*.py
# Update documentation based on changes
git diff --name-only | grep '\.py$' | head -3 | xargs -I {} cgip "update docs for changes" -f {} -f docs/api.md
Data Analysis
# Analyze data with metadata
cgip "analyze this dataset" -f data.csv -f metadata.json
# Compare data files
cgip "what changed between these datasets?" -f old_data.csv -f new_data.csv
Best Practices
File Size Considerations
- Small files (< 1KB): Include multiple files freely
- Medium files (1-10KB): Include selectively
- Large files (> 10KB): Use head/tail/grep to extract relevant sections
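These guidelines can be folded into a small helper that excerpts long files automatically. The function name and the line thresholds are arbitrary choices of our own, not cgip features:

```shell
# Emit a file whole if it is short, otherwise head + tail with a marker.
excerpt() {
  local file="$1" max="${2:-200}"
  if [ "$(wc -l < "$file")" -le "$max" ]; then
    cat "$file"
  else
    head -n "$((max / 2))" "$file"
    echo "[... truncated ...]"
    tail -n "$((max / 2))" "$file"
  fi
}

# Usage: excerpt big.log 100 | cgip "summarize the key issues"
```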
Context Management
# Include relevant context files
cgip "debug this error" -f error.log -f config.yaml -f README.md
# Avoid overwhelming context
cgip "analyze main function only" -f <(grep -A 50 "def main" script.py)
Security Considerations
- Never include files with secrets or API keys
- Be cautious with sensitive configuration files
- Use .gitignore patterns as a guide for what to exclude
Performance Tips
# Use specific file sections
cgip "analyze the error handling" -f <(grep -A 10 -B 5 "try:" app.py)
# Combine preprocessing with file input
cgip "analyze recent errors" -f <(grep ERROR app.log | tail -20)
Error Handling
File Not Found
# Check if file exists before using
[[ -f "config.yaml" ]] && cgip "analyze config" -f config.yaml || echo "Config file not found"
Permission Issues
# Use sudo if needed for system files
sudo cat /var/log/syslog | tail -50 | cgip "analyze system logs"
Large Context Warnings
If you get context too large errors:
- Reduce file count
- Use file excerpts instead of full files
- Process files individually
Integration with Workflows
CI/CD Integration
# Analyze test failures with source
npm test 2>&1 | cgip "analyze test failures" -f package.json -f src/main.js
Development Workflow
# Pre-commit analysis
git diff --cached --name-only | grep -E '\.(py|js|ts)$' | head -3 | xargs -I {} cgip "quick review" -f {}
System Administration
# Analyze system issues with config context
dmesg | tail -20 | cgip "analyze system messages" -f /etc/fstab -f /etc/systemd/system/my-service.service
File input is one of Chat GipiTTY's most powerful features, enabling you to provide rich context for AI analysis while maintaining the flexibility of command-line workflows.
Model Selection
Chat GipiTTY supports multiple AI models and providers, giving you flexibility to choose the right model for your specific task. This section covers how to select models, understand their capabilities, and optimize your usage.
Default Model
By default, Chat GipiTTY uses gpt-4o when connected to OpenAI. This provides a good balance of capability and performance for most tasks.
Selecting Models
Command Line Selection
Use the -M or --model flag to specify a model for a single query:
cgip -M gpt-3.5-turbo "Simple question that doesn't need GPT-4"
cgip -M gpt-4o "Complex reasoning task"
cgip -M claude-3-sonnet "Detailed code analysis"
Configuration
Set a default model in your configuration:
cgip config --set model=gpt-4o
List Available Models
See what models are available with your current provider:
cgip --list-models
Model Categories
OpenAI Models
For up-to-date information about OpenAI models, capabilities, and pricing, see the official OpenAI documentation.
Common models include:
- GPT-4 family - Most capable models for complex reasoning
- GPT-3.5 family - Fast and cost-effective for simple tasks
- Specialized models - Vision, search, and other specialized capabilities
Local Models (via Ollama)
When using Ollama (OPENAI_BASE_URL=http://localhost:11434/v1):
# Popular local models
cgip -M llama2:7b "Question for local Llama model"
cgip -M codellama:13b "Code-related question"
cgip -M mistral:7b "General purpose query"
cgip -M phi:2.7b "Lightweight model for simple tasks"
Other Providers
Claude (Anthropic)
For current Claude model information, see Anthropic's documentation.
cgip -M claude-3-opus "Most capable Claude model"
cgip -M claude-3-sonnet "Balanced Claude model"
cgip -M claude-3-haiku "Fast Claude model"
Gemini (Google)
For current Gemini model information, see Google's AI documentation.
cgip -M gemini-pro "Google's flagship model"
cgip -M gemini-pro-vision "Gemini with vision capabilities"
Model Selection Strategy
By Task Type
Code Analysis and Programming
# Best models for code
cgip -M gpt-4o "review this code" -f complex_algorithm.py
cgip -M claude-3-sonnet "explain this codebase" -f src/*.py
cgip -M codellama:13b "simple code question" -f script.sh # Local option
Creative Writing and Content
# Creative tasks
cgip -M gpt-4o "write a creative story about..."
cgip -M claude-3-opus "generate marketing copy for..."
Data Analysis
# Analytical tasks
cgip -M gpt-4-turbo "analyze this dataset" -f data.csv
cgip -M claude-3-sonnet "find patterns in this data" -f logs.txt
Quick Questions and Simple Tasks
# Fast, simple queries
cgip -M gpt-3.5-turbo "what's the capital of France?"
cgip -M mistral:7b "simple calculation or fact check" # Local option
Image Analysis
# Vision-capable models
cgip -M gpt-4o image --file photo.jpg "describe this image"
cgip -M gpt-4-vision-preview image --file diagram.png "explain this diagram"
Web Search
# Search automatically uses optimal model
cgip --search "latest developments in AI" # Auto-selects gpt-4o-search-preview
By Performance Requirements
Speed-Optimized
# Fastest responses
cgip -M gpt-3.5-turbo "quick question"
cgip -M phi:2.7b "simple local query" # Local, very fast
Quality-Optimized
# Best quality responses
cgip -M gpt-4o "complex reasoning task"
cgip -M claude-3-opus "detailed analysis requiring nuance"
Cost-Optimized
# Lower cost options
cgip -M gpt-3.5-turbo "cost-sensitive query"
cgip -M llama2:7b "free local processing" # No API costs
By Context Length
Large Context Requirements
# For large files or long conversations
cgip -M gpt-4-turbo "analyze entire codebase" -f src/*.py # 128k context
cgip -M claude-3-sonnet "process long document" -f book.txt # 200k context
Standard Context
# Normal usage
cgip -M gpt-4o "regular queries" # 128k context
cgip -M gpt-3.5-turbo "simple tasks" # 16k context
Advanced Model Usage
Model-Specific Features
Automatic Model Selection
Some features automatically select appropriate models:
# Web search auto-selects search-optimized model
cgip --search "current events"
# Image analysis ensures vision-capable model
cgip image --file photo.jpg "describe this"
# TTS uses speech models automatically
cgip tts "text to convert"
Provider-Specific Capabilities
# OpenAI specific
cgip -M gpt-4o "/search with web browsing"
# Claude specific (via compatible endpoint)
cgip -M claude-3-opus "long-form analysis with nuanced reasoning"
# Local model benefits
cgip -M llama2:7b "private, offline processing"
Model Switching in Sessions
# Start with fast model
cgip -M gpt-3.5-turbo "initial question"
# Switch to more capable model for follow-up
cgip -M gpt-4o "complex follow up based on previous answer"
Environment-Based Selection
Set up different defaults for different environments:
# Development environment - use local models
export OPENAI_BASE_URL=http://localhost:11434/v1
cgip config --set model=codellama:13b
# Production analysis - use best models
export OPENAI_BASE_URL=https://api.openai.com
cgip config --set model=gpt-4o
Best Practices
Model Selection Guidelines
- Start simple: Use gpt-3.5-turbo for straightforward queries
- Upgrade when needed: Switch to gpt-4o for complex reasoning
- Use specialized models: Choose vision models for images, code models for programming
- Consider context: Use high-context models for large files
- Balance cost and quality: Expensive models aren't always necessary
Performance Tips
# Cache expensive model responses in sessions
export CGIP_SESSION_NAME="analysis-session"
cgip -M gpt-4o "expensive analysis" -f large_dataset.csv
# Use cheaper models for iteration
cgip -M gpt-3.5-turbo "quick clarification about previous response"
Cost Management
# Preview context before sending to expensive models
cgip --show-context -M gpt-4o "preview what will be sent"
# Use local models for development/testing
cgip -M llama2:7b "test query structure before using paid API"
Troubleshooting Model Selection
Model Not Available
# Check available models
cgip --list-models
# Verify provider configuration
cgip config --get base_url
Feature Not Supported
# Some features require specific model families
cgip -M gpt-4o image --file photo.jpg # Vision requires vision-capable model
cgip -M gpt-4o-search-preview --search "query" # Search requires search-enabled model
Performance Issues
# Switch to faster model if response time is important
cgip -M gpt-3.5-turbo "time-sensitive query"
# Use local models to avoid network latency
cgip -M llama2:7b "local processing"
Model Resources
For detailed model comparisons, capabilities, and current pricing:
- OpenAI Models: OpenAI Platform Documentation
- Anthropic Claude: Claude Model Overview
- Google Gemini: Gemini Model Documentation
- Ollama Models: Ollama Model Library
These official sources provide the most up-to-date information about model capabilities, context lengths, pricing, and availability.
System Prompts
System prompts allow you to customize the AI's behavior by providing persistent instructions that apply to all interactions. They're like setting the "personality" or "role" for the AI model, helping it respond in a specific way that matches your needs.
What are System Prompts?
System prompts are special instructions that:
- Set the AI's role, personality, or expertise level
- Apply to all messages in a conversation
- Influence how the AI interprets and responds to queries
- Remain active throughout a session
Setting System Prompts
Command Line
Use the --system-prompt flag to set a system prompt for a single query:
cgip --system-prompt "You are a senior Rust developer" "explain this error" -f error.log
Configuration
Set a default system prompt that applies to all queries:
# Set a default system prompt
cgip config --set system_prompt="You are a helpful assistant specializing in software development"
# View current system prompt
cgip config --get system_prompt
# Clear system prompt
cgip config --unset system_prompt
Session-Specific Prompts
System prompts work great with sessions:
# Set session and system prompt for a specific project
export CGIP_SESSION_NAME="rust-project"
cgip --system-prompt "You are a Rust expert helping with systems programming" "initial question"
# Subsequent queries in the same session maintain the system prompt context
cgip "follow-up question"
Common System Prompt Patterns
Role-Based Prompts
Senior Developer
cgip --system-prompt "You are a senior software engineer with 15 years of experience. Provide detailed, production-ready advice." "review this code" -f app.py
Code Reviewer
cgip config --set system_prompt="You are a thorough code reviewer. Focus on security, performance, maintainability, and best practices."
DevOps Engineer
cgip --system-prompt "You are a DevOps engineer specializing in containerization and CI/CD pipelines." "analyze this Docker setup" -f docker-compose.yml
Expertise Level Prompts
Beginner-Friendly
cgip --system-prompt "Explain everything in simple terms suitable for a beginner. Use analogies and avoid jargon." "what is a REST API?"
Expert Level
cgip --system-prompt "Assume deep technical expertise. Provide advanced insights and implementation details." "optimize this algorithm" -f algorithm.py
Communication Style Prompts
Concise Responses
cgip config --set system_prompt="Be extremely concise. Provide only essential information."
Detailed Explanations
cgip --system-prompt "Provide comprehensive explanations with examples, reasoning, and context." "explain this concept"
Structured Output
cgip --system-prompt "Always structure responses with clear headings, bullet points, and numbered steps." "analyze this issue"
Specialized System Prompts
Security Analysis
cgip --system-prompt "You are a cybersecurity expert. Focus on identifying vulnerabilities, security best practices, and potential attack vectors." "review this authentication code" -f auth.py
Performance Optimization
cgip --system-prompt "You are a performance optimization specialist. Focus on efficiency, scalability, and resource usage." "analyze this query" -f slow-query.sql
Documentation Writing
cgip --system-prompt "You are a technical writer. Create clear, well-structured documentation with examples." "document this API" -f api.py
Testing and QA
cgip --system-prompt "You are a QA engineer. Focus on test coverage, edge cases, and quality assurance." "create test cases for this function" -f function.js
Context-Specific Prompts
Project-Specific
# For a React project
export CGIP_SESSION_NAME="react-app"
cgip --system-prompt "You are a React expert working on a modern web application using hooks, TypeScript, and Next.js."
# For a data science project
export CGIP_SESSION_NAME="data-analysis"
cgip --system-prompt "You are a data scientist specializing in Python, pandas, and machine learning."
Technology-Specific
# Rust development
cgip --system-prompt "You are a Rust systems programmer. Focus on memory safety, performance, and idiomatic Rust patterns."
# Cloud architecture
cgip --system-prompt "You are a cloud architect specializing in AWS. Consider scalability, cost optimization, and best practices."
Advanced System Prompt Techniques
Multi-Role Prompts
cgip --system-prompt "You are both a senior developer and a mentor. Provide code solutions while teaching underlying concepts." "help me understand this pattern" -f design-pattern.py
Output Format Specifications
cgip --system-prompt "Always respond in this format: 1) Problem Summary 2) Solution 3) Implementation Steps 4) Testing Approach" "debug this issue"
Constraint-Based Prompts
cgip --system-prompt "You can only suggest solutions using standard library functions. No external dependencies allowed." "implement this feature" -f requirements.txt
Industry-Specific
# Financial technology
cgip --system-prompt "You are a fintech developer. Consider regulatory compliance, security, and financial accuracy."
# Healthcare
cgip --system-prompt "You are a healthcare software developer. Prioritize HIPAA compliance, data privacy, and reliability."
Best Practices
Effective System Prompts
- Be Specific: Clear, specific instructions work better than vague ones
- Set Expertise Level: Specify the level of technical detail you want
- Define Output Format: Tell the AI how to structure responses
- Include Constraints: Mention any limitations or requirements
- Set Context: Provide relevant background information
Examples of Good vs. Poor System Prompts
Good System Prompts
# Good: Specific role and output format
cgip --system "You are a senior Python developer. Provide code solutions with explanations, focusing on readability and performance. Include error handling and type hints."
# Good: Clear constraints and context
cgip --system "You are a DevOps engineer working with Kubernetes. Suggest solutions that work in production environments and follow security best practices."
Poor System Prompts
# Poor: Too vague
cgip --system "Be helpful"
# Poor: Contradictory instructions
cgip --system "Be extremely detailed but also very concise"
Managing System Prompts
Environment-Based Configuration
# Development environment
cgip config --set system_prompt="You are a developer focused on rapid prototyping and debugging."
# Production environment
cgip config --set system_prompt="You are a senior engineer focused on reliability, security, and maintainability."
Project-Specific Setup
Create project-specific configurations:
# In project directory
echo 'export CGIP_SESSION_NAME="$(basename $(pwd))"' >> .envrc
echo 'cgip config --set system_prompt="You are a [PROJECT_TYPE] expert working on [PROJECT_NAME]."' >> .envrc
System Prompt Examples by Use Case
Code Development
# General development
cgip config --set system_prompt="You are a senior full-stack developer. Provide clean, maintainable code with proper error handling and documentation."
# Specific language
cgip --system "You are a Go expert. Focus on idiomatic Go patterns, concurrency, and performance." "optimize this function" -f handler.go
System Administration
# Linux administration
cgip config --set system_prompt="You are a Linux system administrator. Focus on security, performance, and best practices for production systems."
# Network administration
cgip --system "You are a network engineer. Consider security, performance, and monitoring when suggesting solutions."
Data Analysis
# Data science
cgip --system "You are a data scientist. Provide Python solutions using pandas, numpy, and matplotlib. Include data validation and visualization suggestions."
# Database optimization
cgip --system "You are a database administrator specializing in PostgreSQL. Focus on query optimization, indexing, and performance tuning."
Troubleshooting System Prompts
System Prompt Not Working
# Check if system prompt is set
cgip config --get system_prompt
# Verify with context display
cgip --show-context "test query"
Conflicting Instructions
If system prompts conflict with query instructions:
- System prompts take precedence
- Be specific in your query to override system behavior
- Consider updating the system prompt for better alignment
Session Confusion
If the AI seems confused about its role:
- Clear the session:
cgip session --clear
- Restart with a clearer system prompt
- Make system prompts more specific
System prompts are a powerful way to customize Chat GipiTTY's behavior for your specific workflows and requirements. Experiment with different prompts to find what works best for your use cases.
Development Workflow
This section covers development setup and workflow for Chat GipiTTY contributors and advanced users who want to build from source.
Development Setup
Prerequisites
To build Chat GipiTTY from source, you'll need:
- Rust (latest stable version)
- Git
- A POSIX-compliant system (Linux, macOS, or WSL on Windows)
Clone and Build
git clone https://github.com/divanvisagie/chat-gipitty.git
cd chat-gipitty
cargo build
Running from Source
You can run Chat GipiTTY directly from the source directory:
cargo run -- --help
Running Tests
cargo test
Ubuntu Development Setup
On Ubuntu, some additional packages are required to build the deb package:
sudo apt-get install build-essential dh-make debhelper devscripts
Development Tools
Building Documentation
The documentation is built using mdBook:
# Install mdbook if not already installed
cargo install mdbook
# Build the book
cd book
mdbook build
# Serve locally for development
mdbook serve
Creating Releases
For maintainers, see the release process documentation for preparing new releases.
Package Building
Debian Package
# Build deb package (Ubuntu/Debian)
./scripts/build_deb.sh
Homebrew Formula
# Update homebrew formula
./scripts/homebrew.sh
Project Structure
chat-gipitty/
├── src/ # Main Rust source code
│ ├── main.rs # Application entry point
│ ├── args.rs # Command line argument parsing
│ ├── chat.rs # Core chat functionality
│ ├── chatgpt/ # OpenAI API client
│ ├── sub/ # Subcommands
│ └── ...
├── book/ # Documentation source
├── docs/ # Generated documentation
├── assets/ # Logo and assets
├── scripts/ # Build and release scripts
└── test_data/ # Test files
Contributing Guidelines
- Fork the repository on GitHub
- Create a feature branch from main
- Make your changes with appropriate tests
- Run the test suite to ensure nothing breaks
- Submit a pull request with a clear description
Code Style
- Follow standard Rust formatting (cargo fmt)
- Run clippy for linting (cargo clippy)
- Add tests for new functionality
- Update documentation as needed
Testing Your Changes
Before submitting a pull request:
# Format code
cargo fmt
# Run linter
cargo clippy
# Run tests
cargo test
# Test the binary
cargo run -- --help
Debugging
Environment Variables
Useful environment variables for development:
# Enable debug logging
export RUST_LOG=debug
# Show API requests (be careful with sensitive data)
export CGIP_DEBUG_API=1
Common Development Tasks
Adding a New Subcommand
- Create a new file in src/sub/
- Implement the subcommand logic
- Add the subcommand to src/sub/mod.rs
- Update argument parsing in src/args.rs
- Add documentation in book/src/
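The steps above can be sketched as a minimal subcommand module. This is an illustrative sketch only: the names `GreetArgs` and `run`, and the dispatch shape, are assumptions for the example and do not mirror cgip's actual internals.

```rust
// Hypothetical sketch of a new subcommand module (e.g. a file under src/sub/).
// GreetArgs and run are illustrative names, not cgip's real API.

/// Arguments the subcommand would receive after parsing in src/args.rs.
pub struct GreetArgs {
    pub name: String,
}

/// Entry point that the dispatcher in src/sub/mod.rs would call.
pub fn run(args: &GreetArgs) -> String {
    format!("Hello, {}!", args.name)
}

fn main() {
    let args = GreetArgs {
        name: "world".to_string(),
    };
    println!("{}", run(&args));
}
```

Keeping the subcommand's logic in a plain function like `run` makes it easy to unit test without going through argument parsing.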
Testing API Changes
# Test with different models
cargo run -- -M gpt-3.5-turbo "test prompt"
# Test with custom endpoint
export OPENAI_BASE_URL=http://localhost:11434/v1
cargo run -- "test prompt"
Release Process
Maintainers should follow these steps for releases:
- Update the version in Cargo.toml
- Update CHANGELOG.md
- Create and push a version tag
- GitHub Actions will handle the rest
For detailed release instructions, see the project's internal documentation.
Contributing
Thank you for your interest in contributing to Chat GipiTTY! This guide covers how to contribute to the project, from reporting bugs to submitting code changes.
Ways to Contribute
Reporting Issues
- Bug Reports: Found a bug? Please report it with detailed steps to reproduce
- Feature Requests: Have an idea for a new feature? Let us know!
- Documentation: Help improve the documentation with corrections or additions
- Performance Issues: Report performance problems or suggest optimizations
Code Contributions
- Bug Fixes: Fix reported issues
- New Features: Implement requested features
- Performance Improvements: Optimize existing code
- Tests: Add or improve test coverage
- Documentation: Update code comments and documentation
Development Setup
For detailed development setup instructions, see the Development Workflow section, which covers:
- Prerequisites and dependencies
- Building from source
- Running tests
- Project structure
- Development tools and debugging
Contribution Process
1. Fork and Clone
- Fork the repository on GitHub
- Clone your fork locally
- Add the upstream repository as a remote
git clone https://github.com/your-username/chat-gipitty.git
cd chat-gipitty
git remote add upstream https://github.com/divanvisagie/chat-gipitty.git
2. Create a Branch
Create a new branch for your work:
git checkout -b feature/your-feature-name
# or
git checkout -b fix/issue-number
3. Make Changes
- Write clear, well-documented code
- Follow existing code style and conventions
- Add tests for new functionality
- Update documentation as needed
4. Test Your Changes
Before submitting, ensure your changes work correctly:
# Run the full test suite
cargo test
# Check formatting
cargo fmt --check
# Run clippy for linting
cargo clippy
# Test manually
cargo run -- "test query"
5. Commit and Push
# Stage your changes
git add .
# Commit with a clear message
git commit -m "Add feature: description of your changes"
# Push to your fork
git push origin feature/your-feature-name
6. Submit a Pull Request
- Go to the GitHub repository
- Click "New Pull Request"
- Select your branch
- Fill out the pull request template
- Submit for review
Code Style and Standards
Rust Code Style
- Follow standard Rust formatting (cargo fmt)
- Use cargo clippy to catch common issues
- Write clear, descriptive variable and function names
- Add documentation comments for public APIs
Commit Messages
Use clear, descriptive commit messages:
Add feature: web search integration
- Implement /search command prefix
- Add automatic model switching for GPT models
- Update documentation with search examples
Documentation
- Update relevant documentation for new features
- Add examples for new functionality
- Keep the README up to date
- Update the book documentation for user-facing changes
Adding New Features
New Command Line Options
- Add the option to the appropriate struct in args.rs
- Handle the option in main.rs
- Update help text and documentation
New Subcommands
- Create a new file in src/sub/
- Implement the subcommand logic
- Add to src/sub/mod.rs
- Update argument parsing in args.rs
- Add documentation in book/src/
New API Features
- Add to the chatgpt/ module
- Update request/response structs
- Add error handling
- Update tests
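As a rough illustration of the request/response step, a new capability can often be added as an optional field so existing call sites keep working unchanged. `ChatRequest` and `web_search` are hypothetical names for this sketch; cgip's real wire types in the chatgpt/ module will differ.

```rust
// Hypothetical sketch of extending a request struct in the chatgpt/ module.
// ChatRequest and web_search are illustrative names, not cgip's real types.

#[derive(Debug, Clone)]
pub struct ChatRequest {
    pub model: String,
    pub messages: Vec<String>,
    // New optional field: None preserves the old behavior, so existing
    // call sites and previously serialized requests stay compatible.
    pub web_search: Option<bool>,
}

impl ChatRequest {
    pub fn new(model: &str) -> Self {
        ChatRequest {
            model: model.to_string(),
            messages: Vec::new(),
            web_search: None,
        }
    }
}

fn main() {
    let mut req = ChatRequest::new("gpt-4");
    req.web_search = Some(true);
    println!("{:?}", req);
}
```

Defaulting new fields to `None` is what makes the change backward compatible: code that never mentions the feature compiles and behaves exactly as before.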
Testing Guidelines
Types of Tests
- Unit tests: Test individual functions and modules
- Integration tests: Test feature interactions
- Manual testing: Test real-world scenarios
Test Requirements
- Add tests for new functionality
- Test error conditions and edge cases
- Ensure tests are deterministic and isolated
- Don't depend on external services in tests
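A deterministic, isolated unit test in the style this list describes might look like the following. `parse_model_flag` is a made-up helper invented for the example, not a function in cgip's codebase.

```rust
// Illustrative deterministic unit test; parse_model_flag is a hypothetical
// helper, not part of cgip's real source.

/// Extracts the value following a "-M" flag from an argument list.
pub fn parse_model_flag(args: &[&str]) -> Option<String> {
    args.windows(2)
        .find(|pair| pair[0] == "-M")
        .map(|pair| pair[1].to_string())
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn finds_model_after_flag() {
        // No network, no filesystem: fully deterministic and isolated.
        assert_eq!(
            parse_model_flag(&["-M", "gpt-4", "hello"]),
            Some("gpt-4".to_string())
        );
    }

    #[test]
    fn returns_none_without_flag() {
        // Edge case: the flag is absent entirely.
        assert_eq!(parse_model_flag(&["hello"]), None);
    }
}

fn main() {
    println!("{:?}", parse_model_flag(&["-M", "gpt-4"]));
}
```

Because the function takes its input as a plain slice rather than reading the real process arguments or an external service, each test is repeatable and independent of the environment.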
Manual Testing Checklist
# Test basic functionality
cargo run -- "test query"
# Test with different input sources
echo "test" | cargo run -- "analyze this"
cargo run -- -f test_file.txt "analyze this file"
# Test subcommands
cargo run -- config --help
cargo run -- session --help
Review Process
What to Expect
- Initial Review: Maintainers will review your pull request
- Feedback: You may receive suggestions for improvements
- Iterations: Make requested changes and push updates
- Approval: Once approved, your changes will be merged
Review Criteria
- Code quality and style
- Test coverage
- Documentation updates
- Backward compatibility
- Performance impact
Issue Reporting
Bug Reports
When reporting bugs, please include:
- Steps to reproduce the issue
- Expected vs actual behavior
- System information (OS, Rust version)
- Relevant error messages or logs
- Minimal example if possible
Feature Requests
For feature requests, please describe:
- The problem you're trying to solve
- Your proposed solution
- Alternative approaches considered
- Potential impact on existing functionality
Community Guidelines
Code of Conduct
- Be respectful and inclusive
- Welcome newcomers and help them learn
- Focus on constructive feedback
- Maintain professionalism in all interactions
Communication
- Use GitHub issues for bug reports and feature requests
- Join discussions in pull requests
- Ask questions if anything is unclear
- Provide context and examples when reporting issues
Recognition
Contributors are recognized through:
- Git commit history
- GitHub contributors list
- Release notes for significant contributions
- Project documentation
Getting Help
If you need help contributing:
- Check the Development Workflow for technical setup
- Review existing issues and pull requests
- Ask questions in GitHub discussions
- Look at recent commits for examples
Release Process
For information about the release process, see the Development Workflow section.
Thank you for contributing to Chat GipiTTY! Every contribution, no matter how small, helps make the project better for everyone.