AI Partnership Programming: Two Approaches to Staying in the Driver’s Seat

The promise of AI-assisted development comes packaged with impressive statistics: “AI writes 50% of our code” or “10x productivity gains.” These numbers create a seductive narrative where developers offload coding work to AI while focusing on higher-level tasks—or perhaps taking a break while the machine churns out perfect implementations.

The reality tells a different story. Yes, AI can accelerate specific tasks dramatically. Yes, it might write 90% or more of the code that ends up in production. But reaching that level doesn’t mean you’re working less. It means you’re working at a higher cognitive altitude—and it means choosing the right approach for the work at hand.

*Video: Tech Alley Vegas, Las Vegas AI Meetup, January 2026*

This presentation explores a framework for AI-assisted development that keeps you in control—understanding every decision, maintaining architectural vision, and producing code you can actually maintain. The key insight: there isn’t one right way to work with AI. There are two complementary approaches, and knowing when to use each is what separates productive AI partnership from frustrating AI dependency.


The Fundamental Error

The mistake most developers make is treating AI as a code generation tool—a vending machine where prompts go in and code comes out. This transactional approach optimizes for speed but produces code you don’t understand, can’t maintain, and shouldn’t trust.

The alternative is treating AI as a thinking partner. The difference is profound: instead of requesting and receiving code, you engage in dialogue where solutions emerge through mutual understanding. The code produced becomes genuinely co-created—not AI-generated code that you’ve accepted, but solutions born from shared comprehension.

But even “AI as thinking partner” isn’t a single thing. It’s a spectrum with different tools optimized for different kinds of work.


Two Modes of AI-Assisted Development

Conversational Partnership

In conversational partnership, you control exactly what the AI sees. You paste specific code snippets into a conversation, build shared understanding through dialogue, and only request implementation after the design is clear. When the AI misunderstands, you edit earlier prompts to keep the conversation clean rather than accumulating corrections.

This approach optimizes for depth, understanding, and control. You’re not trying to go fast—you’re trying to go right.

Conversational partnership fits when:

  • Code changes are heavyweight and require deep understanding
  • You need to focus on specific methods and their interactions
  • Full codebase context would muddle rather than clarify
  • The design itself is the hard part
  • You expect iteration and refinement through dialogue

Direct File Assistance (Claude Code)

Direct file assistance gives AI access to your codebase directly. You point it at files and directories, describe what needs to change, and it explores, modifies, and propagates changes across your project. Tools like Claude Code work with your actual file system rather than pasted snippets.

This approach optimizes for breadth and speed. You’re trading fine-grained context control for file-system awareness and rapid execution.

Direct file assistance fits when:

  • Changes are sweeping but individually straightforward
  • You’re working within established patterns, not designing new ones
  • The work involves finding and propagating—CSS updates, configuration changes, template modifications
  • File-system awareness adds value (creating directories, ensuring consistency across files)
  • Depth isn’t too heavy

The Critical Distinction

These aren’t competing approaches—they’re complementary. The question isn’t which is better. It’s which fits the work at hand.

Consider a multi-tenant web application. Refactoring the backend authentication flow—thread safety, caching, tenant resolution—demands conversational partnership. You want to discuss the current implementation, explore edge cases, and have AI write code only after you’ve established shared understanding. The ability to step through changes one conversation exchange at a time, going back and editing past exchanges that “go off the rails,” keeps complex work on track.

But updating CSS themes across a dozen tenant sites? That’s direct file assistance territory. The changes are straightforward; getting them applied everywhere is the effort. Claude Code can scan directories, find patterns, and propagate changes faster than you could copy-paste through a conversation.


The Five-Phase Development Cycle

Regardless of which mode you’re using, effective AI-assisted development follows a cycle. In conversational partnership, you move through these phases explicitly. In direct file assistance, they’re compressed—but they still exist.

Phase 1: Vision/Design

Begin with technical dialogue. No code is written until you have a shared mental model. The conversation becomes a mirror that reflects your system back with new clarity. The AI’s questions often reveal aspects you hadn’t consciously considered.

You might start with: “Let’s talk about this code, don’t write any new code yet.” This establishes the collaborative tone and maintains control.

This phase can actually take longer than traditional solo design work—because the AI forces you to articulate details you might normally hand-wave past or defer as tech debt. That’s a feature, not a bug.

Phase 2: Implementation

Once intent is established, the AI handles execution. Because it understands the design, it generates code that emerges from mutual comprehension rather than specification. The AI handles syntax, API details, and language idioms while you focus on logic and architecture.

You’re thinking “I need to validate this data and queue it for processing” rather than “How do I create a thread-safe queue in C# again?”

Phase 3: Integration/Adaptation

Move the code into your IDE and adapt both the new code and existing system components to work together. You’re not just copy-pasting; you’re auditing. You’re ensuring the AI’s implementation fits real-world system constraints.

This phase is crucial and often overlooked in transactional AI coding approaches.

Phase 4: Testing/Validation

Manual, end-to-end verification. The AI wrote the code, but you are responsible for the system’s success. Focus on real usage scenarios rather than theoretical coverage.

Phase 5: Documentation

The AI translates the high-intensity dialogue into formal documentation. Because the AI deeply understands the system from earlier phases, it can write meaningful documentation that explains not just what the code does, but why it exists and how it fits into the larger system.


The Core Technique: Prompt Refinement

The most crucial technique—and the one that most developers miss—is treating conversation history like version control.

When AI misunderstands, don’t correct it downstream. Go back and edit your earlier prompt. The pattern looks like this:

  1. You mention A. The AI response is off because of how it handled A combined with B.
  2. Rather than writing a new prompt saying “Actually, B should be handled like this,” you edit your original prompt to mention both A and how B should be handled.
  3. The AI regenerates with proper understanding.
  4. If it’s still not right, don’t add another correction. Go back again and edit the prompt to frame A, B, and C together.

This creates a clean linear conversation that reads like both parties had perfect understanding from the beginning.

Why does this matter? Context rot. Every failed attempt and correction consumes tokens. A conversation full of “no, I meant…” and multiple AI attempts at the same thing burns through context fast. Around 70-80k tokens, the AI starts losing earlier context—often the foundational understanding that matters most.

There’s also a cost dimension: every message in the conversation consumes tokens from your usage. Correcting forward means extra messages that all count. Prompt refinement keeps the final conversation tight—you pay for the exploration, but you don’t pay again for the AI to process all that failed context in every subsequent response.

Keep the conversation clean by refining at the source.


A Case Study: Hardcoded Configuration to External Templates

A recent refactoring project demonstrates both the methodology and the discipline required. The challenge: an ASP.NET Core application with HTML content hardcoded directly into C# source files within static dictionaries. Any content change required recompilation and redeployment.

The conversation began not with a request for code, but with establishing collaborative mode:

“Let’s have a technical conversation about the following code. Don’t write new code unless I specifically ask for that.”

The AI’s initial response surfaced issues—some I knew about, some I hadn’t fully considered: the static dictionary approach, lack of null checks, raw HTML strings requiring recompilation for changes. It asked strategic questions about how the application name was determined, why messaging was hardcoded, what the deployment story looked like.

This led to discussing an existing pattern in the codebase—a PipelineManager class that loaded YAML configuration files. The AI suggested applying the same approach for consistency.

Then came a critical moment. The AI suggested using ConcurrentDictionary for thread safety. A reasonable default. But I knew the access patterns of this system—read-heavy, write-rare, with refresh operations that needed to be atomic.

“I think we should use a reader writer lock rather than concurrent dictionary. Does that seem reasonable?”

The AI didn’t argue. It validated the reasoning and explained why ReaderWriterLockSlim was indeed more appropriate for this scenario.

Neither of us could have reached this as efficiently alone. The AI had the technical knowledge of both options; I had the domain context about how the system actually behaves.

Only after the design was fully agreed upon did I say: “yes, write out a new version.”

The AI wrote over 90% of the code. The same pattern then replicated to a second controller, going even faster because shared understanding already existed. A _default fallback requirement emerged during integration—showing that even thorough upfront design evolves—and was incorporated cleanly.
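The pattern the case study landed on can be sketched in Python: templates loaded from external files, read constantly, refreshed rarely, with a `_default` fallback. The real implementation is C# with `ReaderWriterLockSlim`; Python’s standard library has no reader-writer lock, so this sketch gets the same read-heavy, write-rare behavior by building the new dictionary fully and swapping it in atomically. All class and key names here are hypothetical:

```python
import threading

class TemplateStore:
    """Read-heavy, write-rare store with atomic refresh and a _default fallback."""

    def __init__(self, loader):
        self._loader = loader  # callable returning {name: template_text}
        self._write_lock = threading.Lock()
        self._templates = {}
        self.refresh()

    def refresh(self) -> None:
        """Write-rare and atomic: build the new dict fully, then swap it in."""
        new_templates = dict(self._loader())
        with self._write_lock:
            self._templates = new_templates

    def get(self, name: str) -> str:
        """Read-heavy path: reads hit a snapshot, falling back to _default."""
        templates = self._templates  # reference swap is atomic in CPython
        return templates.get(name) or templates["_default"]
```

A caller might construct it with a loader that parses YAML files from disk; here a lambda stands in: `TemplateStore(lambda: {"_default": "<p>fallback</p>", "home": "<h1>Hi</h1>"})`.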


When Claude Code Excels: A Jekyll Site Example

Not all work benefits from the deliberate pace of conversational partnership. The multi-tenant Jekyll sites in our portfolio demonstrate where direct file assistance shines.

The setup: Multiple tenant marketing sites sharing a common gem-based theme with skins and color schemes. Common markup blocks in the theme’s _includes directory. Shared page layouts. Tenant-specific content that composes these shared pieces.

A typical tenant page is mostly composition—selecting which blocks to include and passing parameters. The individual files are straightforward. The complexity is in the breadth: dozens of files across multiple tenant directories that need to stay consistent.
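A tenant page in such a setup might look like the following. The include names, parameters, and data keys here are hypothetical, chosen to illustrate the composition style rather than the actual theme’s API:

```liquid
---
layout: landing
title: Acme Dental
skin: ocean
---
{% include hero.html heading="Welcome to Acme Dental" cta="Book now" %}
{% include feature-grid.html source=site.data.features %}
{% include contact-banner.html phone=site.tenant_phone %}
```

Each file is trivial in isolation; the work is keeping dozens of them consistent across tenant directories.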

Claude Code fits this work because:

  • Adding a new tenant site: Starting in the portfolio directory, Claude Code can create a new tenant site from a template—copying structure, updating configuration, ensuring all pieces are in place. This is file-system work that benefits from seeing the full directory structure.

  • Propagating theme changes: When a new block type is added to the theme, existing tenant sites need updates. Claude Code can find all sites using related blocks and ensure they have access to the new capability.

  • Building interactive components: Small JavaScript files like a theme skin picker that slides out from the side to rapidly preview different skins and color schemes. The code is contained, the pattern is clear, execution is the bottleneck.

  • Ensuring consistency: “Check that all tenant sites have the required meta tags” or “Verify every site has a proper 404 page”—tasks where scanning across directories and reporting or fixing discrepancies is the work.
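The consistency check in that last bullet, done by hand, amounts to a scan like the one below. The directory layout and the required tag are assumptions for illustration; in practice Claude Code performs this kind of scan-and-report directly rather than via a script you maintain:

```python
import os

REQUIRED = '<meta name="description"'  # assumed required tag, for illustration

def missing_meta(portfolio_root: str) -> list:
    """Return paths of .html files under portfolio_root lacking REQUIRED."""
    misses = []
    for dirpath, _dirs, files in os.walk(portfolio_root):
        for name in files:
            if not name.endswith(".html"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as fh:
                if REQUIRED not in fh.read():
                    misses.append(path)
    return sorted(misses)
```

Running `missing_meta("portfolio/")` yields the list of offending files, which is exactly the kind of breadth-over-depth result this mode is built for.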

The week of work that created the gem-based theme with its skins, color schemes, and block library would have been impractical through conversational partnership. Each step involved finding patterns, making changes across multiple files, testing, refining. Claude Code’s ability to explore and modify the codebase directly made the difference between feasible and impractical.


The Honest Math

When someone says “I built this in 15 minutes with AI,” ask what came before.

A new tenant site scaffolds in 15 minutes now. But that speed required:

  • One week of solid work building the gem-based theme
  • CLAUDE.md files that encode architectural knowledge
  • Skills that guide Claude Code through workflows
  • Consistent project structure and patterns

The marginal cost requires the fixed cost. Both numbers matter.

Without the investment, you fight the tool. With it, repetitive work compresses dramatically. This is true for Claude Code, but it’s equally true for conversational partnership—the case study refactoring went faster the second time because the pattern was established.


Managing Large Problem Domains

Some problems exceed single-conversation scale. Multiple technology stacks, interfaces that matter as much as components, context rot arriving before design clarity. When you encounter these, the methodology doesn’t fail—but it requires adaptation.

The key is separating architectural discussions from implementation slices, and interleaving them throughout development.

Architectural discussions explore the big picture. How do the pieces fit together? What are the interfaces? What approach makes sense given constraints? These conversations stay light on code—perhaps small snippets to illustrate a pattern—but focus on design and tradeoffs.

Implementation slices are the familiar five-phase cycles. With architectural direction established, you dive into a specific vertical slice: detailed code, testing, integration. These conversations are code-heavy but narrowly scoped.

The insight: these two modes interleave throughout development. You don’t do all the architecture first and then all the implementation. You alternate as understanding evolves.

The Pattern

  1. Architectural discussion: Frame the overall problem, identify the first slice
  2. Implementation slice: Build the first vertical piece, end to end
  3. Architectural discussion: Evaluate, refine understanding, identify next slice
  4. Implementation slice: Build the second vertical piece
  5. Architectural discussion: Address emerging concerns, adjust direction
  6. Repeat until complete

The ratio varies by problem. Some features need extensive upfront discussion. Others reveal their complexity only after the first slice is built. The key is recognizing that both modes exist and knowing when to shift.


The Workforce Reality

Let’s be direct about what this means.

Same output, fewer developers. Or more output, same developers. Either way, the economics change.

This methodology amplifies skilled professionals. It doesn’t replace them. The developers who thrive are those who can do the work AI can’t—design, judgment, understanding the business problem, maintaining complex systems. The developers at risk are those whose primary value is knowing syntax and APIs.

For the solo developer: The difference between shipping and not shipping. A sounding board when you don’t have one. Documentation you wouldn’t otherwise write.

For mid-size companies: Extended runway before funding pressure hits. If your engineering team can deliver more with the same burn rate, you’ve bought yourself months.

For enterprise: Fewer runaway projects. This methodology—with its emphasis on human understanding, explicit design, and controlled scope—reduces the chaos that comes from systems nobody understands.


Orchestration Exhaustion

People assume that if you’re not typing, you’re not working. In reality, you’re often more exhausted at the end of an AI-assisted day than a traditional one.

When you eliminate the time traps of looking up API calls or debugging syntax errors, you’re left with hours of pure, high-intensity decision-making. You’re the air traffic controller. You aren’t flying the planes, but you’re responsible for the safe landing of multiple flights simultaneously.

You might compress three weeks of work into one, but you’re running your brain at maximum capacity throughout. Plan for recovery time after intensive sessions.


Choosing Your Mode

Use conversational partnership when:

  • The design itself is the hard part
  • You need tight control over what the AI sees
  • Code changes require deep understanding
  • You expect iteration and refinement through dialogue
  • Understanding matters as much as the output

Use Claude Code / direct file assistance when:

  • Patterns are established, you’re extending not designing
  • Work spans many files with similar changes
  • File-system awareness adds value
  • Speed matters more than conversation control
  • Changes are sweeping but individually straightforward

Many projects use both. Design in conversation, execute with Claude Code. The goal remains the same—maintaining understanding and control over your codebase. The method varies based on what the work demands.


The Bottom Line

AI isn’t replacing developers or making them 10x more productive in any simple sense. It’s creating a new development paradigm: AI-amplified development with human architectural override and deliberate context management.

The future isn’t “AI writes code while developers relax.” It’s developers working at a higher cognitive altitude, accomplishing more through intensive orchestration of AI capabilities, producing better solutions through the combination of human architectural judgment and AI’s tireless execution.

The techniques described here form a methodology I call AI Partnership. The conversations are disposable artifacts once the code is complete and documented. What remains is high-quality, understood, maintainable code that emerged from a dance of shared understanding rather than a transaction of specifications.

Whether you’re choosing conversational partnership for deep refactoring or Claude Code for sweeping changes across a tenant portfolio, the principle is the same: you stay in the driver’s seat.


For a deeper exploration of this methodology, see my original article: The Reality of AI-Assisted Development: Orchestration, Not Automation
