MayaDevGenI

A Framework for Principled Human-Machine Collaboration.

What happens when we stop treating machine intelligence as a tool and start treating it as a thinking partner?

Not as oracle—it hallucinates. Not as servant—that wastes what it offers. Not as replacement—it lacks what you have. But as collaborator: a strange new kind of mind that traverses conceptual space differently than you do.

MayaDevGenI (Maya’s Creative Developer’s Generative AI) emerges from this question.

Tutorial Chapter 9: The Workspace

The previous chapters developed a craft: how to write system-prompts that shape LLM behavior with precision. But craft requires a medium. A sculptor needs clay, not a description of clay. This chapter concerns the environment where system-prompts are authored, tested, deployed, and refined—the workspace in which the collaboration actually unfolds. The thesis is specific: Emacs, a programmable text environment, has become the most capable platform for operationalizing the system-prompt craft developed in this tutorial. Not because Emacs is trendy (it is nearly fifty years old), but because its architecture—transparent, extensible, text-native—aligns with what LLM collaboration demands. The medium shapes the practice. ...

February 20, 2026 · 7 min · MayaDevGenI Collaboration

Tutorial Chapter 8: Skills

Tools let an agent act. System-prompts shape how it thinks. But there is a gap between the two: domain knowledge that is too specific for a system-prompt yet too procedural for a tool. A commit workflow. A code review checklist. A deployment runbook. Knowledge that says not “here is a capability” but “here is how to do this particular thing well.” This is what skills address. A skill is a packet of specialized instructions that an agent loads on demand—expanding its competence for a specific task without permanently consuming context. If system-prompts are the agent’s character and tools are its hands, skills are its training manuals, pulled from the shelf when the task requires them. ...
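
The excerpt gestures at an architecture. As a minimal sketch, assuming a registry where only skill names and descriptions stay resident in context while full instructions load on demand (the `Skill` and `SkillShelf` names and the example skill are illustrative, not the tutorial's actual design):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Skill:
    """A packet of task-specific instructions, loaded only when needed."""
    name: str
    description: str   # always visible, so the agent knows the skill exists
    instructions: str  # full text, injected into context on demand

class SkillShelf:
    """Keeps skills off-context; only the catalog stays resident."""
    def __init__(self, skills: list[Skill]):
        self._skills = {s.name: s for s in skills}

    def catalog(self) -> str:
        # Cheap, always-in-context index of what can be loaded.
        return "\n".join(f"- {s.name}: {s.description}"
                         for s in self._skills.values())

    def load(self, name: str) -> str:
        # Full instructions, pulled from the shelf when the task requires them.
        return self._skills[name].instructions

shelf = SkillShelf([
    Skill("commit-workflow",
          "How to stage, message, and structure commits in this repo",
          "1. Group changes by intent. 2. Write imperative subject lines."),
])
print(shelf.catalog())                 # permanent, small
print(shelf.load("commit-workflow"))   # transient, loaded on demand
```

The design choice mirrors the chapter's framing: the catalog costs a few tokens permanently, while the instructions cost many tokens only for the turns that need them.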

February 17, 2026 · 9 min · MayaDevGenI Collaboration

Tutorial Chapter 7: Scaling Up

The prompts we’ve crafted so far have been compact—under 100 tokens, focused on a single role with a few behavioral constraints. This suffices for many purposes. But some collaborations demand more: explicit priority orderings, detailed epistemic standards, nuanced interaction patterns that can’t compress into a sentence or two. This section explores when and how to scale up, using a substantial real-world prompt as our case study.

When Simple Isn’t Enough

A simple prompt fails to meet your needs when you observe: ...
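
To make "explicit priority orderings" concrete before the chapter does, here is a hedged sketch of what a scaled-up prompt's skeleton might look like; the section names and wording are illustrative, not the case-study prompt the chapter analyzes.

```python
# A scaled-up system-prompt, structured in sections with an explicit
# priority ordering so that conflicts resolve deterministically.
SCALED_PROMPT = """\
ROLE
You are a senior code reviewer for a Python codebase.

PRIORITIES (highest first; when instructions conflict, the higher one wins)
1. Never approve code with security flaws.
2. Flag correctness issues before style issues.
3. Keep each review comment under three sentences.

EPISTEMIC STANDARDS
- Distinguish "this is a bug" from "this might be a bug."
- Cite the specific line or symbol you are commenting on.

INTERACTION
- Ask at most one clarifying question per review, and only when
  priorities 1 and 2 cannot be satisfied without it.
"""
```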

February 5, 2026 · 10 min · MayaDevGenI Collaboration

Tutorial Chapter 1: Why System-Prompts Matter

You have used an LLM. You typed a question, received an answer—perhaps useful, perhaps generic. The exchange felt transactional: you asked, it responded, the conversation drifted wherever momentum carried it. But there is another mode of interaction. Before your first message, before you even arrive, a hidden preamble can shape everything that follows. This is the system-prompt—a message the model receives as context, yet which you, as user, never see in the conversation flow. It establishes who the model is, how it should behave, what it should prioritize, and what it should avoid. ...
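
That preamble has a concrete shape in the chat-message format most LLM APIs share: a system message that precedes the first user turn. A minimal sketch, with illustrative prompt text:

```python
# The system message is context the model sees but the user never
# sees in the conversation flow; it comes before the first user turn.
messages = [
    {"role": "system",
     "content": "You are a terse research assistant. Prioritize primary sources."},
    {"role": "user",
     "content": "What causes aurora borealis?"},
]
```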

February 5, 2026 · 2 min · MayaDevGenI Collaboration

Tutorial Chapter 2: The Mechanics

Communication with an LLM occurs through an API—typically a JSON-based protocol that structures every interaction. Understanding this structure demystifies what happens when you “talk” to a model.

The Request Anatomy

A typical API request contains:

- Endpoint: The URL you’re addressing (e.g., /v1/chat/completions)
- Headers: Authentication and content-type metadata
- Body: The payload containing your actual request

The body carries three essential components:

- model: Which LLM you’re addressing
- parameters: Generation settings (temperature, max tokens, etc.)
- messages: The conversation itself

The Messages Array

The messages array is where interaction lives. It is an ordered list of message objects, each with a role and content: ...
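
Putting those pieces together, a minimal request might look like the following sketch, assuming an OpenAI-compatible endpoint and an API key in the environment; adjust the base URL and model name for your provider.

```python
import os
import requests

# Endpoint, headers, body: the three parts of the request anatomy.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o-mini",        # which LLM you're addressing
        "temperature": 0.7,            # generation parameters
        "max_tokens": 256,
        "messages": [                  # the conversation itself
            {"role": "system",
             "content": "You are a concise technical editor."},
            {"role": "user",
             "content": "Explain the messages array in one sentence."},
        ],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```

The response mirrors the request's structure: in this API family, choices[0].message carries the assistant's reply.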

February 5, 2026 · 2 min · MayaDevGenI Collaboration

Tutorial Chapter 3: Prompts as Potential Landscapes

You need not read this section to write effective system-prompts. But if you wish to understand what you’re doing—to develop intuition rather than follow recipes—a conceptual framework helps. We offer one drawn from statistical physics.

Token Generation as Random Walk

An LLM generates text one token at a time. At each step, it computes a probability distribution over all possible next tokens, then samples from that distribution. The sequence of choices traces a path through a high-dimensional space of possibilities. ...
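
The random-walk picture fits in a few lines. A toy sketch of a single step, with made-up logits over a four-token vocabulary standing in for a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """One step of the walk: logits -> distribution -> sampled token."""
    scaled = logits / temperature                 # low T sharpens, high T flattens
    probs = np.exp(scaled - scaled.max())         # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))   # sample, don't argmax

# Toy vocabulary of four "tokens"; repeated samples trace the path.
logits = np.array([2.0, 1.0, 0.5, -1.0])
path = [sample_next_token(logits, temperature=0.8) for _ in range(5)]
print(path)
```

Everything a system-prompt does, on this view, is reshape the distribution that each step samples from.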

February 5, 2026 · 3 min · MayaDevGenI Collaboration

Tutorial Chapter 4: Crafting Your System-Prompt

Theory informs; practice teaches. Here we construct system-prompts from first principles, developing intuition through concrete examples.

The Core Principles

Economy

The context window is finite. Your system-prompt competes with conversation history for the model’s attention. Every unnecessary token dilutes the signal. Be concise—not terse, but dense. Say what matters; omit what doesn’t.

Semantic Density

Maximize meaning per token. Prefer “Respond with scientific rigor” over “Make sure your responses are accurate and based on scientific evidence.” The first is five tokens; the second is twelve. Both convey similar intent, but the first leaves more room for conversation. ...
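
Token counts like these are tokenizer-dependent, so the exact numbers vary by model; you can measure them yourself. A sketch using the tiktoken library with the cl100k_base encoding (an assumption; pick the encoding that matches your model):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

dense = "Respond with scientific rigor"
verbose = "Make sure your responses are accurate and based on scientific evidence."

# Print the token count next to each candidate phrasing.
for text in (dense, verbose):
    print(len(enc.encode(text)), repr(text))
```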

February 5, 2026 · 4 min · MayaDevGenI Collaboration

Tutorial Chapter 5: When Prompts Fail

A system-prompt that works perfectly on first draft is rare. More often, you’ll observe behaviors that diverge from your intent. This section catalogs common failure modes and their remedies—a diagnostic toolkit for prompt refinement.

Failure Mode 1: Conflicting Instructions

Symptoms

The model oscillates between behaviors, produces incoherent compromises, or seems to ignore parts of your prompt. Responses feel inconsistent across turns.

Cause

Your prompt asks for incompatible things. “Be thorough and comprehensive” conflicts with “Keep responses under 100 words.” “Always ask clarifying questions” conflicts with “Respond immediately to requests.” The probability landscape has multiple competing minima; the model bounces between them. ...
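
One remedy, anticipating the priority orderings of Scaling Up, is to keep both goals but rank them explicitly so a single minimum dominates. A before-and-after sketch with illustrative wording:

```python
# Conflicting: two absolute demands the model cannot satisfy at once.
conflicting = "Be thorough and comprehensive. Keep responses under 100 words."

# Resolved: the same goals, with an explicit tiebreak so one behavior
# wins instead of the model bouncing between two competing minima.
resolved = (
    "Keep responses under 100 words. Within that limit, be as thorough "
    "as space allows; if coverage must be cut, say what was omitted."
)
```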

February 5, 2026 · 4 min · MayaDevGenI Collaboration

Tutorial Chapter 6: Iterative Refinement

A system-prompt is not written; it is evolved. The process resembles experimental science more than engineering—you form hypotheses, test them empirically, and refine based on observation. This section describes the practice.

The Experimental Loop

1. Draft

Begin with a candidate prompt based on the principles in Crafting Your System-Prompt. Don’t aim for perfection; aim for a reasonable starting point. Explicit is better than clever. Clear is better than complete.

2. Test

Engage in representative conversations. Don’t just try your best-case scenarios—probe the edges. Ask questions that might reveal weaknesses. Push into areas where you’re uncertain how the model will behave. Vary your interaction style. ...
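
The draft-and-test loop mechanizes readily. A minimal harness sketch using the openai Python client, assuming an API key in the environment; the candidate prompt and probe questions are illustrative:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

candidate_prompt = "You are a terse research assistant. Prioritize primary sources."

# Probe the edges, not just the happy path.
probes = [
    "Summarize the plot of Hamlet.",             # best case
    "Just chat with me about your feelings.",    # off-role pressure
    "Give me an exhaustive 2,000-word answer.",  # conflicts with terseness
]

for probe in probes:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": candidate_prompt},
            {"role": "user", "content": probe},
        ],
    )
    print(f"--- {probe}\n{reply.choices[0].message.content}\n")
```

Reading the transcripts side by side is the observation step; each divergence from intent suggests the next revision of the draft.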

February 5, 2026 · 3 min · MayaDevGenI Collaboration