Building a Local Semantic Search Engine - Part 5: Learning by Building
Building a semantic search engine taught me more about embeddings than reading about them ever could. The real value wasn't the tool—it was understanding what those 768 numbers actually mean.
Building a Local Semantic Search Engine - Part 4: Caching for Speed
First search on a new directory: wait for every chunk to embed. A hundred chunks? A few seconds. A thousand? You're waiting—and burning electricity (or API dollars if you're using a cloud service). Second search: instant. The difference? A JSON file storing pre-computed vectors. Caching turned "wait for it" into "already done."
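The mechanism is simple: key each chunk by a hash of its text, and only call the embedding model on a cache miss. A minimal sketch of that idea, assuming a JSON cache file and a generic `embed_fn` (the file name and function names here are hypothetical, not the post's actual code):

```python
import hashlib
import json
import os

CACHE_PATH = "embeddings_cache.json"  # hypothetical cache file name

def load_cache(path=CACHE_PATH):
    """Load previously computed embeddings, or start fresh."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

def get_embedding(chunk, cache, embed_fn):
    """Return a cached vector if this exact chunk was seen before,
    otherwise compute it once and store it."""
    key = hashlib.sha256(chunk.encode()).hexdigest()
    if key not in cache:
        cache[key] = embed_fn(chunk)  # the expensive call, skipped on repeat searches
    return cache[key]

def save_cache(cache, path=CACHE_PATH):
    """Persist the cache so the next run starts warm."""
    with open(path, "w") as f:
        json.dump(cache, f)
```

Hashing the chunk text (rather than the file path) means an edited chunk re-embeds automatically while untouched chunks stay cached.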
Building a Local Semantic Search Engine - Part 3: Indexing and Chunking
I pointed the search engine at itself—indexing the embeddinggemma project's own 3 files into 20 chunks. Why 20 chunks from 3 files? Because a 5,000-word README as a single embedding buries the relevant section. Chunking solves that.
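A chunker can be as simple as sliding a fixed-size word window over the document, with some overlap so a passage split across a boundary still lands whole in at least one chunk. A sketch of that approach (the sizes are illustrative, not the project's actual settings):

```python
def chunk_text(text, chunk_size=200, overlap=40):
    """Split text into overlapping word windows so each embedding
    covers one focused passage instead of a whole document."""
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(words):
            break
    return chunks
```

With these numbers, a 5,000-word README becomes roughly thirty focused chunks instead of one diluted vector.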
Building a Local Semantic Search Engine - Part 2: From Keywords to Meaning
Traditional search fails when you don't remember the exact words. Searching "debugging" won't find your notes about "fixing bugs." Semantic search finds them anyway—because it searches by meaning, not keywords.
Building a Local Semantic Search Engine - Part 1: What Are Embeddings?
"I love playing with my dog" and "My puppy is so playful and fun" are 80.4% similar. Compare that to "Cars are expensive to maintain"—only 45.5% similar. How does a computer know that? Embeddings—and I wanted to run them entirely on my laptop.
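Those percentages come from cosine similarity: each sentence becomes a vector (768 numbers, in this model's case), and the cosine of the angle between two vectors measures how aligned their meanings are. The math in a few lines, shown with toy vectors since real ones come from the embedding model:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors:
    1.0 = same direction, 0.0 = unrelated, -1.0 = opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Because it measures angle rather than length, a long document and a short query can still score as near-identical if they point the same direction in embedding space.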
Building an MCP Agentic Stock Trading System - Part 7: MCP Experimentation Lessons
After building three AI trading agents with MCP, here's what I'd do differently.
Building an MCP Agentic Stock Trading System - Part 6: Cloud vs Local vs Rules
Building with three agent types taught me: you can optimize for speed, cost, or intelligence—pick two.
Building an MCP Agentic Stock Trading System - Part 5: Backtesting All Three Agents
I ran all three agents over 2 months of real market data to see how MCP handles different "brains" with the same tools. The results surprised me—but not in the way I expected.
Building an MCP Agentic Stock Trading System - Part 4: When Agents Disagree
Three AI agents analyze Apple stock on the same day. Two reach the same conclusion through reasoning, one through arithmetic. What does this reveal about AI decision-making?
Building an MCP Agentic Stock Trading System - Part 3: The Agentic Loop
The agentic loop is where LLMs become active problem-solvers instead of passive responders. The LLM doesn't just answer once—it iteratively calls tools, analyzes results, and decides what to check next. My trading agent uses this to analyze stocks: fetch data, calculate indicators, check trends, then make a decision.
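The loop's shape is the same regardless of which tools or model you plug in: ask the model for its next move, execute any tool it requests, append the result to the conversation, and repeat until it commits to an answer. A minimal sketch with a stubbed model interface (the message format and `max_steps` guard are assumptions for illustration, not the post's exact code):

```python
def agentic_loop(llm, tools, task, max_steps=10):
    """Minimal agentic loop: each turn, the model either requests a
    tool call or returns a final answer; tool results are fed back."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = llm(history)  # {"tool": name, "args": {...}} or {"answer": ...}
        if "answer" in action:
            return action["answer"]
        result = tools[action["tool"]](**action.get("args", {}))
        history.append({"role": "tool", "name": action["tool"], "content": result})
    return None  # gave up: hit the step limit without a final answer
```

The step cap matters in practice: a model that keeps requesting tools without converging would otherwise loop forever (and, with an API-based brain, burn money doing it).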
Building an MCP Agentic Stock Trading System - Part 2: The MCP Servers and Tools
MCP servers are like USB hubs for AI—they provide standardized tools that any agent can plug into. My trading system has two: one fetches market data, the other calculates technical indicators. Write them once, use them with Claude, local LLMs, or even traditional code.
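The "USB hub" payoff is that the tool definitions are decoupled from whatever calls them. The toy sketch below shows that shape in plain Python — it is not the MCP wire protocol, just the idea of one tool set shared by a rules-based caller and an LLM-backed one (all names and the under-200 rule are invented for illustration):

```python
# One shared tool set, defined once.
TOOLS = {
    "get_price": lambda ticker: {"AAPL": 190.0, "MSFT": 420.0}.get(ticker),
    "moving_average": lambda prices, n: sum(prices[-n:]) / n,
}

def rules_brain(ticker):
    """A deterministic caller: fixed logic over the shared tools."""
    price = TOOLS["get_price"](ticker)
    return "BUY" if price is not None and price < 200 else "HOLD"

def llm_brain(ticker, ask_model):
    """An LLM-backed caller: the model picks which tool to invoke."""
    tool_name = ask_model(f"Which tool should I use to analyze {ticker}?")
    return TOOLS[tool_name](ticker)
```

Swap the dict for an MCP server and both brains keep working unchanged — that decoupling is what the protocol standardizes.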
Building an MCP Agentic Stock Trading System - Part 1: The Architecture
I wanted to experiment with Model Context Protocol (MCP) and compare local LLMs against API-based ones. So I built a stock paper-trading system with three brains: a rules-based trader, Claude API, and a local Llama model in LM Studio. Same market data, different decision-making approaches.
Adding nano-banana 3 Support to My CLI Wrapper
Twenty-four hours after Google dropped nano-banana 3, I shipped support for it. New model, new resolutions (4K!), new features. This is what building with AI looks like in November 2025.
