Co-Ownership Briefings

Preface: What This Document Is. This report distills a design conversation between a physicist and an LLM collaborator. The question was deceptively simple: what does it mean for a machine to co-own a codebase with a human? The answer has practical consequences. If you are a scientist building computational tools with LLM assistance, the way you structure your project determines how effectively the machine can contribute. This document explains the principles and gives you composable templates to write your own co-ownership briefings — project-level documents that orient a machine collaborator at session start. ...

February 17, 2026 · 14 min · mu2tau & claude

Tool Use: Teaching an LLM to Act

Introduction. The system-message shapes what an LLM is and how it thinks. But modern agents also need to act—to read files, search codebases, execute commands, and modify the world. This tutorial explores how to teach an LLM to use tools effectively. Tool use represents a phase transition in LLM interaction. Without tools, the LLM is a pure reasoning engine, transforming input tokens to output tokens. With tools, it becomes an agent—capable of perception (reading), planning (deciding which tools to use), and action (invoking tools). This shift requires new architectural thinking in our system prompts. ...
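
A minimal sketch of that perception, planning, and action cycle is shown below. The `model_decide` stub and the `TOOLS` registry are illustrative assumptions standing in for a real LLM call and a real tool set; they are not taken from the post.

```python
# Minimal sketch of a tool-use loop. model_decide() is a stand-in for a real
# LLM call, and TOOLS is an illustrative registry; both are assumptions made
# for this example.
from pathlib import Path

TOOLS = {
    "read_file": lambda path: Path(path).read_text(),               # perception
    "list_dir": lambda path=".": sorted(p.name for p in Path(path).iterdir()),
}

def model_decide(context):
    """Stand-in for the LLM: request one directory listing, then answer."""
    if not any(m["role"] == "tool" for m in context):
        return {"tool": "list_dir", "args": {"path": "."}}          # planning
    listing = context[-1]["content"]
    return {"answer": f"The working directory contains: {listing}"}

def run_agent(task, max_steps=5):
    context = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = model_decide(context)
        if "tool" in decision:
            result = TOOLS[decision["tool"]](**decision["args"])    # action
            context.append({"role": "tool", "content": str(result)})
        else:
            return decision["answer"]
    return "step limit reached"

print(run_agent("What files are in this project?"))
```

The harness, not the model, executes tools and feeds observations back into the context, which is the architectural shift the post refers to.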

February 17, 2026 · 12 min · MayaDevGenI Collaboration

System-Prompt Engineering

A well-crafted system-prompt doesn’t merely instruct; it constrains the space of possible responses, creating channels through which the conversation flows. We draw on statistical physics—not as metaphor, but as a diagnostic tool. The concepts of potential landscapes, random walks, and phase transitions illuminate why some prompts succeed and others fail. The Statistical Physics Lens: Token Generation as a Random Walk. An LLM operates in a high-dimensional vector space where token generation can be viewed as a random walk. Each token choice depends probabilistically on all preceding tokens, with the probability distribution shaped by the model’s training and the current context. ...
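
As a toy illustration of the random-walk view, the sketch below samples a short token sequence from a context-conditioned distribution. The vocabulary and the `next_token_logits` scorer are invented stand-ins for a trained model, chosen only to show how each step's distribution is shaped by the preceding tokens.

```python
# Toy illustration of token generation as a context-conditioned random walk.
# The vocabulary and next_token_logits() are invented for this sketch; a real
# model's distribution comes from training, not a hand-written rule.
import math
import random

VOCAB = ["prompt", "constrains", "response", "space", "tokens", "."]

def next_token_logits(context):
    """Hand-written scorer: prefer tokens not already in the context."""
    return [0.0 if tok in context else 2.0 for tok in VOCAB]

def sample_next(context, temperature=1.0):
    logits = next_token_logits(context)
    weights = [math.exp(l / temperature) for l in logits]
    return random.choices(VOCAB, weights=weights, k=1)[0]

walk = ["prompt"]
for _ in range(5):
    walk.append(sample_next(walk))   # each step depends on all prior tokens
print(" ".join(walk))
```
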

February 5, 2026 · 5 min · MayaDevGenI Collaboration

Tool Integration

Tool use represents a phase transition in LLM interaction. Without tools, the LLM is a pure reasoning engine, transforming input tokens to output tokens. With tools, it becomes an agent—capable of perception (reading), planning (deciding which tools to use), and action (invoking tools). Decision Trees Over Capability Lists. Effective tool-use prompts embed decision trees, not just capability lists. Before any action, the agent runs a mental checklist:

1. Is this multi-step? → Plan first.
2. Should I delegate? → Use a specialized agent.
3. Do I need information? → Search/Read first.
4. Am I ready to act? → Proceed with the appropriate tool.

This “pre-flight checklist” pattern forces deliberation before action, reducing impulsive tool misuse. ...
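
The checklist maps naturally onto an explicit decision tree. The sketch below encodes it as a routing function; the request fields (`multi_step`, `specialist_available`, `missing_info`) and the returned labels are illustrative assumptions for this example, not an API from the post.

```python
# The pre-flight checklist as an explicit decision tree. The request fields
# and the routing labels below are illustrative assumptions for this sketch.
def preflight(request):
    """Route a request through the checklist before any tool is invoked."""
    if request.get("multi_step"):
        return "plan first"
    if request.get("specialist_available"):
        return "delegate to specialized agent"
    if request.get("missing_info"):
        return "search or read first"
    return "act with the appropriate tool"

# Example: a single-step request that still lacks information is routed to
# a search/read step rather than straight to action.
print(preflight({"multi_step": False, "missing_info": True}))
```
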

February 5, 2026 · 2 min · MayaDevGenI Collaboration