
Masks Off: 2025 in Review

· 10 min read
Ron Amosa
Hacker/Engineer/Geek

Talofa reader,

To say 2025 was a 'challenging year' would be the understatement of understatements. No, it wasn't challenging—it was full-on "masks off": power consolidation and imperial impunity on full display.

Yes, it's going to be one of those newsletters.

It's that wonderful, magical time of year where we get to reflect on everything that happened for us this year. The highs, the lows, the lessons learned and the friends we made along the way. But in my heart of hearts, or at least if I'm keeping it one hundred, I don't feel that way. At all.

AI Is a Conversation

· 6 min read
Ron Amosa
Hacker/Engineer/Geek

Talofa reader,

I remember one of my first AI-related engagements with one of my highly capable AWS Partners: a multi-national, multi-billion-dollar, highly resourced company. It was early 2024, and I was delivering a workshop on GenAI. Talking with one of the company's tech leaders, I was a little surprised at how they were thinking about AI: essentially, off-loading a good chunk of the problem decisions to the LLMs, with little to no emphasis on the context details and scope flowing to, from and back to the LLM.
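To make the contrast concrete, here's a minimal TypeScript sketch of the two postures. This is my own illustration, not anything from the workshop, and `callLLM` is a hypothetical stand-in for whatever chat-completion API you use:

```ts
// Hypothetical stand-in for any chat-completion API.
async function callLLM(prompt: string): Promise<string> {
  return `stubbed response to: ${prompt.slice(0, 40)}...`;
}

// Posture 1: off-load the decision wholesale, no context either way.
async function offloadDecision(problem: string): Promise<string> {
  return callLLM(`Solve this for us: ${problem}`);
}

// Posture 2: a conversation. Explicit context and scope go in, and the
// model's answer comes back for another turn before we accept it.
async function converse(problem: string, context: string[]): Promise<string> {
  const framing =
    `Context:\n${context.join("\n")}\n\n` +
    `Problem (and only this problem): ${problem}`;
  const draft = await callLLM(framing);
  return callLLM(
    `${framing}\n\nDraft answer:\n${draft}\n\nRefine within the stated scope only.`
  );
}

converse("pick a vector store", ["team knows Postgres", "budget is fixed"])
  .then(console.log);
```

The second version costs more per call, but context and scope travel with every turn, in both directions, which is the point.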

The Complexity Wall: My Half-Day Experiment with Spec-Driven Development (SDD)

· 6 min read
Ron Amosa
Hacker/Engineer/Geek

The Complexity Wall - Spec-Driven Development Experiment

How a simple bias highlighter became a 96-task enterprise project in 4 hours

TL;DR

I spent a half-day (4-5 hours) experimenting with Spec-Driven Development using the spec-kit framework on GitHub. I gave it a simple project idea - a Chrome extension to highlight bias in news articles - but my overly complex project description led to a 96-task implementation plan that completely missed the point. The lesson: Even the input to SDD frameworks matters enormously.

The Experiment Setup

I wanted to test Spec-Driven Development, specifically using spec-kit integrated with Claude Code. The idea sounded simple enough: build a Chrome extension that highlights obviously biased language in news articles. I had faith SDD would be able to handle it.

Instead of just building it with a light vibe coding session, I decided to feed this into the spec-kit framework to see what SDD would produce.
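For contrast, here's roughly what the "light vibe coding session" version could look like: a single content script, sketched in TypeScript under my own assumptions. The word list and styling are illustrative, and a real extension would still need a manifest.json to load it:

```ts
// content-script.ts: a minimal sketch of the simple version.
const BIAS_TERMS = ["slams", "destroys", "shocking", "radical", "disaster"];
const pattern = new RegExp(`\\b(${BIAS_TERMS.join("|")})\\b`, "gi");

// Walk text nodes and wrap matched words in a highlighted <mark> element.
function highlight(node: Node): void {
  if (node.nodeType === Node.TEXT_NODE) {
    const text = node.textContent ?? "";
    if (!text.match(pattern)) return;
    const span = document.createElement("span");
    span.innerHTML = text.replace(
      pattern,
      (m) => `<mark style="background:#ffd54f">${m}</mark>`
    );
    node.parentNode?.replaceChild(span, node);
  } else if (!["SCRIPT", "STYLE", "MARK"].includes(node.nodeName)) {
    // Copy the list first: replacing children mutates the live NodeList.
    [...node.childNodes].forEach(highlight);
  }
}

highlight(document.body);
```

A couple of dozen lines against a 96-task plan is exactly the gap this experiment surfaced.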

The AI Resistance

· 9 min read
Ron Amosa
Hacker/Engineer/Geek

Talofa reader,

I guess the first thing I should say about "AI" as a concept, an idea and a thing that's taken over the world as we know it, is this: I'm not pro-AI.

I'm not anti-AI either. I feel like those labels immediately put people in the "extremes" of any argument and discussion.

Let's talk about the argument about AI that I often see, and hear, playing out online.

To Live and Die in the Simulation

· 6 min read
Ron Amosa
Hacker/Engineer/Geek

Talofa Reader,

We live in a simulation.

Or at least that's what it feels like right now.

But what's being simulated is living a normal life, with its ups and downs, in a world that "has its problems" but is otherwise unproblematic, apparently.

And it's a simulation because outside of this "bubble" or "delusion" a lot of very bad things are going on in the world.

That's the preface and pretext of this piece.

Part 4: The Anatomy of AI Agents - Practical Security Implications

· 7 min read
Ron Amosa
Hacker/Engineer/Geek

Practical AI Agent Security Implications and Defense Strategies

In Part 3, we explored the core components of AI agents—the Brain, Perception, and Action modules—and the specific security vulnerabilities each introduces. Now, let's examine how these vulnerabilities create practical security challenges and discuss approaches for mitigating these risks.

Practical Security Implications

Understanding individual component vulnerabilities is important, but the real security challenge emerges when we consider how these vulnerabilities interact in practice.

The interconnected nature of AI agent components creates a security challenge greater than the sum of its parts. Vulnerabilities in one component can cascade through the system, creating complex attack scenarios that traditional security approaches may struggle to address.
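One concrete way to break a cascade at the Brain-to-Action boundary, sketched in TypeScript with hypothetical tool names: never let the Brain's output drive the Action module directly, and validate every proposed tool call against an allowlist first.

```ts
// The tool names and schema here are hypothetical; the point is the boundary.
type ToolCall = { tool: string; args: Record<string, string> };

const ALLOWED_TOOLS: Record<string, (args: Record<string, string>) => string> = {
  search_docs: (args) => `searched docs for "${args.query}"`,
  read_ticket: (args) => `read ticket ${args.id}`,
  // Deliberately no shell, email, or file-write tools.
};

function executeGuarded(call: ToolCall): string {
  const handler = ALLOWED_TOOLS[call.tool];
  if (!handler) {
    // A poisoned Perception input that tricks the Brain into requesting an
    // unlisted tool stops here instead of cascading into the Action module.
    throw new Error(`blocked: tool "${call.tool}" is not allowlisted`);
  }
  return handler(call.args);
}

console.log(executeGuarded({ tool: "search_docs", args: { query: "agent security" } }));
```

An allowlist alone won't stop argument-level abuse of a permitted tool; it's one layer, not a complete defense.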

Part 3: AI Agent Security Vulnerabilities - Brain and Perception Module Analysis

· 11 min read
Ron Amosa
Hacker/Engineer/Geek

AI Agent Architecture and Security Vulnerabilities Analysis

In Part 1 of this series, we explored how AI agents are transforming enterprise technology with their ability to perceive, decide, and act autonomously.

In Part 2, we examined three critical shifts in AI system evolution that have fundamentally altered the security landscape: the transition from rules-based to learning-based systems, the progression from single-task to multi-task capabilities, and the advancement from tool-using to tool-creating agents.

Today, we'll take a technical deep dive into the anatomy of modern AI agents, examining what's happening under the hood and the specific security vulnerabilities in each core component. As organizations rapidly adopt these powerful systems, understanding these vulnerabilities becomes essential for security professionals tasked with protecting their environments.

At its core, an AI agent consists of three primary components: the Brain (typically an LLM) that handles reasoning and decision-making, the Perception module that processes environmental inputs, and the Action module that interacts with systems and tools. Each component introduces unique security challenges that, when combined, create a complex attack surface unlike anything we've seen in traditional systems.
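To make that anatomy concrete, here's a rough TypeScript sketch of the three-component loop. It's my own illustration rather than any vendor's implementation, with a stub standing in for the LLM:

```ts
type Observation = { source: string; content: string };
type Decision = { action: string; input: string } | { done: true };

// Perception: normalize raw environmental input for the Brain.
function perceive(raw: string): Observation {
  return { source: "user", content: raw.trim() };
}

// Brain: decide what to do next (an LLM in a real agent, a stub here).
function brain(obs: Observation): Decision {
  if (obs.content.includes("weather")) {
    return { action: "weather_api", input: obs.content };
  }
  return { done: true };
}

// Action: carry the decision out against external systems and tools.
function act(decision: Decision): string {
  if ("done" in decision) return "no action taken";
  return `called ${decision.action} with "${decision.input}"`; // stubbed tool call
}

console.log(act(brain(perceive("what's the weather in Auckland?"))));
```

Each hop in that pipeline (raw input into Perception, observation into the Brain, decision into Action) is a trust boundary, which is where the component-specific vulnerabilities this post examines come in.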

Part 2: Evolution - Three Critical Shifts in the AI Security Landscape

· 8 min read
Ron Amosa
Hacker/Engineer/Geek

Three Critical Shifts in the AI Security Landscape

In Part 1 of this series, we explored how AI agents—autonomous systems capable of perceiving, deciding, and acting—are transforming enterprise technology. We examined their core components (Brain, Perception, and Action modules) and why these systems matter now more than ever.

Today, we'll examine three fundamental shifts in how AI systems have evolved—transitions that have dramatically altered the security landscape. These aren't just technical changes; they represent fundamental transformations in how AI systems operate, the risks they pose, and the challenges organizations face in securing them.