Claude Code

Claude Code is a command-line agentic coding tool from Anthropic, available as part of the Claude Pro or Max subscription.

Install Claude Code:

npm install -g @anthropic-ai/claude-code
 
# Optional GitHub CLI
brew install gh
 
gh auth login
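
Start Claude Code from the root of a project (your-project is a placeholder):

cd your-project
claude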

Enable sound alerts when tasks complete:

claude config set --global preferredNotifChannel terminal_bell

Symlink Claude Settings:

ln -sf ~/.lz.config/claude_code/.claude/CLAUDE.md ~/.claude/CLAUDE.md
ln -sf ~/.lz.config/claude_code/.claude/commands ~/.claude/commands
ln -sf ~/.lz.config/claude_code/.claude/settings.json ~/.claude/settings.json
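
Verify that the symlinks resolve to the dotfiles repository:

ls -la ~/.claude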

Using Memory

Quickly add memories with the # shortcut:

# Always use descriptive variable names

You can also use /memory to edit all types of memory directly.

  • Be specific: “Use 2-space indentation” is better than “Format code properly”.
  • Use structure to organise: Format each memory as a bullet point and group related memories under descriptive markdown headings.
  • Review periodically: Update memories as your project evolves to ensure Claude always uses the most up-to-date information and context.
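
For example, a CLAUDE.md fragment organised along these lines (the headings and entries are illustrative):

    ## Code style

    - Use 2-space indentation
    - Always use descriptive variable names

    ## Testing

    - Run the full test suite before committing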

Resume previous conversations

Claude Code provides two options for resuming previous conversations:

  • --continue to continue the most recent conversation automatically
  • --resume to display a conversation picker

Examples:

# Continue most recent conversation
claude --continue
 
# Continue most recent conversation with a specific prompt
claude --continue --print "Show me our progress"
 
# Show conversation picker
claude --resume
 
# Continue most recent conversation in non-interactive mode
claude --continue --print "Run the tests again"

Run parallel Claude Code sessions with Git Worktrees

Create a new worktree:

# Create a new worktree with a new branch 
git worktree add ../project-feature-a -b feature-a
 
# Or create a worktree with an existing branch
git worktree add ../project-bugfix bugfix-123

This is particularly useful for AI coding agents working on different tasks in the same codebase in parallel, as each agent may take a fair amount of time to complete its task.
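
Then run a separate Claude Code session in each worktree, one per terminal:

# Start Claude Code in the new worktree
cd ../project-feature-a
claude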

Manage your worktrees:

# List all worktrees
git worktree list
 
# Remove a worktree when done
git worktree remove ../project-feature-a
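
If a worktree directory was deleted by hand, clean up the stale metadata:

git worktree prune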

Ollama

[Ollama](https://github.com/ollama/ollama) allows you to run large language models locally.

Install Ollama via brew on macOS:

brew install --cask ollama

Run the Ollama application and follow the step-by-step setup guide to install the Ollama command-line tool.

To run a model locally:

# The models I am currently running locally:
 
ollama run phi3
ollama run llama3
ollama run gemma
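
Other useful commands for managing local models:

# Download a model without starting a chat session
ollama pull llama3

# List models installed locally
ollama list

# Delete a model you no longer need
ollama rm gemma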

Ollama’s default settings are optimised for small LLMs. Use the following settings to let Ollama unlock the potential of mid-size LLMs (Reference: Reddit):

# Larger context window than the default
export OLLAMA_CONTEXT_LENGTH=32768
# Flash attention reduces memory use as the context grows
export OLLAMA_FLASH_ATTENTION=true
# Quantise the KV cache to 4-bit (requires flash attention) to cut its memory footprint
export OLLAMA_KV_CACHE_TYPE=q4_0
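
These exports only apply to an ollama serve started from the same shell. If you use the Ollama macOS app instead, set the variables with launchctl (as described in the Ollama FAQ) and restart the app:

launchctl setenv OLLAMA_CONTEXT_LENGTH 32768
launchctl setenv OLLAMA_FLASH_ATTENTION true
launchctl setenv OLLAMA_KV_CACHE_TYPE q4_0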

llama.cpp

llama.cpp is an open-source software library that performs inference on various large language models such as Llama. It is co-developed alongside the GGML project, a general-purpose tensor library.

— Wikipedia

Install with Homebrew:

brew install llama.cpp
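
The formula installs llama-cli and llama-server. A quick smoke test, assuming you already have a GGUF model file on disk (the path below is a placeholder):

# One-off prompt from the command line
llama-cli -m ~/models/llama-3-8b.Q4_K_M.gguf -p "Hello"

# Or serve an OpenAI-compatible API on port 8080
llama-server -m ~/models/llama-3-8b.Q4_K_M.gguf --port 8080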

LM Studio is a feature-rich desktop app built on top of llama.cpp:

brew install --cask lm-studio

Google Vertex AI SDK

To run Jupyter notebooks in VS Code with google-cloud-sdk:

brew install --cask google-cloud-sdk
gcloud config set project YOUR_PROJECT_ID
 
# To install or remove components at your current SDK version [475.0.0], run:
# gcloud components install COMPONENT_ID
# gcloud components remove COMPONENT_ID
 
# To update your SDK installation to the latest version [475.0.0], run:
# $ gcloud components update
 
# Log into Google Cloud
gcloud auth application-default login
 
# Verify the configuration
gcloud config list
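
The Vertex AI SDK itself is the google-cloud-aiplatform Python package; install it into the environment your notebook kernel uses:

pip install --upgrade google-cloud-aiplatform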

If you receive an error (ModuleNotFoundError: No module named 'imp') when running any of the gcloud commands above, try:

brew upgrade google-cloud-sdk
 
# As of 13/05/2024, gcloud components update does not support python3.12+.
# We need to use python3.11 to run the command.
 
export CLOUDSDK_PYTHON=$(which python3.11)
gcloud components update
 
# The updated gcloud works with python3.12+,
# so we can safely unset the environment variable.
 
unset CLOUDSDK_PYTHON
gcloud version

If you receive a PermissionDenied: 403 error saying that your application is authenticating by using local Application Default Credentials and that the aiplatform.googleapis.com API requires a quota project (reason: "SERVICE_DISABLED"; see https://cloud.google.com/docs/authentication/adc-troubleshooting/user-creds for details), use the command below to set up the quota project.

gcloud auth application-default set-quota-project YOUR_PROJECT
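
The SERVICE_DISABLED reason can also mean the Vertex AI API is not enabled on the project; enable it with:

gcloud services enable aiplatform.googleapis.com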

OpenHands

OpenHands (formerly OpenDevin) is an AI-powered platform for software development agents. OpenHands agents can do anything a human developer can: modify code, run commands, browse the web, call APIs, and, yes, even copy code snippets from StackOverflow.

Run OpenHands locally with Docker:

docker pull docker.all-hands.dev/all-hands-ai/runtime:0.42-nikolaik
 
docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.42-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands-state:/.openhands-state \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.42
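
Once the container is running, the UI is available at http://localhost:3000.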

Run OpenHands with Ollama

The docker run command is the same as the one above; no modifications are required.

Open the UI and configure your provider:

  • Custom Model: ollama/your_model
  • Base URL: http://host.docker.internal:11434
  • API Key: Leave blank or use ollama
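
Before connecting, verify that Ollama is serving and your model is installed; the /api/tags endpoint lists available models:

curl http://localhost:11434/api/tags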