Aaron Lamb

Systems Engineer & Co-founder at Hexaxia Technologies

Posts by Aaron Lamb (7)

## Why I Built HexOps: A CVSS 10.0 Wake Up Call

December 3rd, 2025. I'm checking the news over coffee when I see it: "Critical Security Vulnerability in React Server Components." CVE-2025-55182. CVSS 10.0. Pre-authentication remote code execution. No workarounds. Patch immediately.

I manage 22 active projects. Most of them run Next.js with React Server Components. Some are client projects. Some are internal tools. Some are products generating revenue. My morning just changed.

Within days, CISA added CVE-2025-55182 to their Known Exploited Vulnerabilities catalog. Security researchers confirmed China-nexus threat groups were already exploiting it. By the end of the week, a botnet called RondoDox was actively scanning for unpatched servers. This wasn't theoretical. This was happening.

I started the patch rotation. Open project. Check React version. Run pnpm update. Verify the build. Test critical paths. Commit. Deploy. Move to the next project. Twenty-two times. It took most of the day. Not because patching is hard. Because doing it twenty-two times is tedious, error-prone, and completely manual.

### The Real Problem

Here's the thing: CVE-2025-55182 wasn't special. It was just loud. Critical vulnerabilities drop constantly. Most don't make headlines. They show up in pnpm audit output that you ignore because you're busy shipping features. They accumulate in your node_modules like technical debt with an expiration date.

The React2Shell incident forced me to confront something I'd been avoiding: I had no system for managing patches across projects. Each project lived in its own directory. Its own terminal tab. Its own mental context. To check if a project was vulnerable, I had to cd into it, run the audit command, read the output, and decide what to do. Multiply that by 22 projects and you get a full day of context switching just to answer the question: "Am I exposed?"

And that's just visibility. Actually applying patches meant repeating the process with update commands, build verification, and deployment. Every project. Every time.

This isn't a tooling problem unique to me. Every developer managing multiple projects faces this. The difference is most people manage 2 or 3 projects. I was managing 22. The pain was impossible to ignore.

### What I Built

Before I started building, I looked around. Surely someone had solved this. PM2 handles process management for Node apps. It's solid for running production services, but it's not what I needed for local development across dozens of projects with different frameworks and package managers.

I stepped back and thought about what I actually wanted. My first job in tech was at a web hosting company. I spent years working with WHM and cPanel. Say what you want about those tools, but they gave you one interface to manage hundreds of sites. Check status, restart services, view logs. All without SSH-ing into individual accounts.

Later, as a sysadmin, I used Red Hat Satellite for patch management across enterprise Linux fleets. Scan for vulnerabilities, see what's affected, push updates in batches. Visibility and control at scale.

More recently, working in DevOps, I leaned on Portainer for container management. One dashboard to see all your containers, start and stop them, check resource usage.

Each of these tools solved a similar problem in different contexts. Centralized visibility. Batch operations. Reduced context switching. Nothing combined these ideas for local development. No "cPanel for your projects folder." No Satellite for your node_modules.
No Portainer for your dev servers. So I built it.

HexOps is the brainchild of all those experiences. A control panel, dashboard, and patch management solution for developers managing multiple local projects.

The core idea is simple: one interface that shows every project, its status, and its patch health. No more cd-ing into directories. No more running audit commands in 22 terminals. No more spreadsheets tracking which projects got patched.

HexOps watches your project directories. It knows which projects are running, which have outdated packages, and which have known vulnerabilities. Everything in one view.

The patches dashboard was the feature I needed most. It pulls vulnerability data from npm audit and outdated package info from your package manager. Then it sorts everything by severity. Critical issues float to the top. You see the worst problems first.

From there, you can update packages individually or in batches. Select five projects with the same vulnerable dependency, update them all. HexOps handles the package manager commands, you review the results.

It tracks patch history too. Every update gets logged with timestamps, success status, and output. When your security team asks "when did we patch CVE-2025-55182?" you have an answer.

I added other things along the way. An integrated terminal so you don't need a separate window. System health monitoring for CPU and memory. Project start/stop controls. Git integration for committing patches with generated messages. But the patches dashboard is why HexOps exists. Everything else is convenience. Patch visibility is the point.

### Proof It Works: This Week

Three days ago, CVE-2026-23864 dropped. Another React and Next.js vulnerability. Denial of service via memory exhaustion. Not as severe as React2Shell, but still needs patching.

This time was different. I opened HexOps. The patches dashboard already showed which projects were affected. I selected all of them, clicked update, reviewed the results. Committed the changes with generated messages. Done.

The whole process took maybe 30 minutes. Most of that was waiting for builds to verify. No context switching. No hunting through directories. No wondering if I missed one.

That's the difference tooling makes. December was a full day of scrambling. January was a coffee break.
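For a sense of the kind of check the patches dashboard automates, here's a minimal sketch (not HexOps source) that walks a projects folder and aggregates per-project audit results. The folder layout and the npm-audit-style JSON shape (`metadata.vulnerabilities`) are assumptions and may vary with your package manager version.

```typescript
// Sketch: aggregate audit results across local projects, worst first.
// Assumes each subdirectory of the given folder is a pnpm project and that
// `pnpm audit --json` emits npm-audit-style JSON with metadata.vulnerabilities.
import { execFileSync } from 'node:child_process';
import { readdirSync, existsSync } from 'node:fs';
import { join } from 'node:path';

type Severity = 'info' | 'low' | 'moderate' | 'high' | 'critical';

interface AuditSummary {
  project: string;
  vulnerabilities: Partial<Record<Severity, number>>;
}

const PROJECTS_DIR = process.argv[2] ?? './projects'; // hypothetical layout

function auditProject(dir: string): AuditSummary | null {
  if (!existsSync(join(dir, 'package.json'))) return null;
  try {
    const raw = execFileSync('pnpm', ['audit', '--json'], { cwd: dir, encoding: 'utf8' });
    return { project: dir, vulnerabilities: JSON.parse(raw).metadata?.vulnerabilities ?? {} };
  } catch (err: any) {
    // pnpm audit exits non-zero when issues are found; the report is still on stdout
    if (err.stdout) {
      return { project: dir, vulnerabilities: JSON.parse(err.stdout.toString()).metadata?.vulnerabilities ?? {} };
    }
    return null;
  }
}

const summaries = readdirSync(PROJECTS_DIR)
  .map((name) => auditProject(join(PROJECTS_DIR, name)))
  .filter((s): s is AuditSummary => s !== null)
  // Critical issues float to the top, then high
  .sort((a, b) =>
    (b.vulnerabilities.critical ?? 0) - (a.vulnerabilities.critical ?? 0) ||
    (b.vulnerabilities.high ?? 0) - (a.vulnerabilities.high ?? 0)
  );

for (const s of summaries) {
  console.log(`${s.project}: critical=${s.vulnerabilities.critical ?? 0} high=${s.vulnerabilities.high ?? 0}`);
}
```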
### Why Give It Away?

I could keep HexOps internal. It solves my problem. Job done.

But the React2Shell incident affected a lot more than my 22 projects. Wiz reported that 39% of cloud environments contained vulnerable React or Next.js instances. Shadowserver counted 90,000 exposed servers weeks after the patch was available. That's not because developers don't care about security. It's because checking 22 projects is annoying and checking 220 is impossible without tooling.

The ecosystem has a visibility problem. Developers build things, dependencies accumulate, vulnerabilities appear, and nobody notices until a CVSS 10.0 makes headlines. Then everyone scrambles. Better tooling makes that scramble shorter. Or prevents it entirely.

I've benefited enormously from open source software. Next.js, React, Node, pnpm, the entire stack I build on. Releasing HexOps is partly about giving back. Someone else managing a pile of projects shouldn't have to build this from scratch.

It's also pragmatic. Unpatched projects become compromised projects. Compromised projects become attack infrastructure. The fewer vulnerable servers sitting around, the better for everyone. Including me.

Mostly, I want to help other developers keep their projects secured in a timely fashion. The tooling gap is real. If HexOps helps someone patch faster when the next critical CVE drops, that's a win for the whole ecosystem. And honestly, I built this for myself. It works. I use it daily. Figured others might want it too.

### Get It, Use It, Break It

HexOps is available now on GitHub under the MIT license. Free to use, free to modify, free to ignore entirely if it's not for you.

One important note: HexOps is designed for local development. It runs on your workstation where you write code. It's not meant to be deployed on servers, in CI/CD pipelines, or anywhere facing the internet. It has no authentication, no access controls, and assumes you trust everyone on your local network. Keep it where it belongs: on the machine where you develop.

It runs on localhost, watches directories you configure, and stays out of your way until you need it. No cloud accounts. No telemetry. No subscriptions.

The documentation covers installation, configuration, and the feature set in detail. If you're managing more than a handful of projects and the CVE scramble sounds familiar, give it a look. If you find bugs, open an issue. If you want features, open a discussion. If you build something better, let me know. I'll probably use it.

The React2Shell incident was a wake-up call. Not just about one vulnerability, but about how unprepared most of us are when critical patches drop. Better tooling won't prevent the next CVE-2025-55182. But it can make the response a lot less painful. That's why HexOps exists. That's why it's free.

**Project Links:**
- GitHub: github.com/Hexaxia-Technologies/hexops
- Documentation: Included in the repo

**Related Reading:**
- React Security Advisory for CVE-2025-55182
- Next.js CVE-2025-66478 Advisory
- Akamai on CVE-2026-23864

**About the Author:** Aaron Lamb is a founder of Hexaxia Technologies, a consultancy specializing in cybersecurity, infrastructure engineering, and AI product development. He's been building and breaking things in this industry for 30 years.
## Migrating From Squarespace to HexCMS: Building the Tools You Need

I built HexCMS because I wanted a Git-based CMS without the security headaches of WordPress or the complexity of enterprise solutions. (Full story here) Then I had to actually use it.

A client runs a hobbyist blog with 60+ posts and hundreds of images. They've been on Squarespace for years. Works fine, but they're paying monthly for features they don't use and dealing with customization limitations.

"Can you move it to HexCMS?"

Sure. How hard could migrating a blog be?

### The Squarespace Export Problem

Squarespace has an export feature. It gives you an XML file. I downloaded it, opened it, and immediately regretted the decision.

What you get:
- Malformed XML (WordPress export format, sort of)
- HTML content that's half Squarespace's custom blocks, half inline styles
- Image URLs that point to the Squarespace CDN
- No clear structure for metadata
- Tags and categories mixed together randomly
- Dates in weird formats

What you need for HexCMS:
- Clean Markdown files
- YAML frontmatter (title, date, author, tags)
- Local images downloaded and properly referenced
- Consistent slug format
- Proper directory structure

The XML export was useless. I'd have to scrape the live site.

### Three Migration Strategies

I built three approaches because different situations need different tools.

**Strategy 1: RSS Feed Migration**

Squarespace provides RSS feeds. Quick, easy, reliable. One problem: RSS feeds only include the most recent 25 posts. The blog has 60+ posts. RSS wouldn't cut it.

But RSS is perfect for testing. Fast feedback loop. Parse 25 posts, check the output, iterate quickly.

```bash
npm run migrate -- --dry-run
```

This became my development workflow. Make changes, run RSS migration in dry-run mode, check markdown output, repeat.

**Strategy 2: Sitemap Scraping**

For complete migrations, I needed the sitemap. Squarespace generates a sitemap.xml with every blog post URL. Parse the sitemap, scrape each URL, extract content and images, convert to Markdown.

Challenges:
- Rate limiting (don't hammer the server)
- Different Squarespace templates use different HTML structures
- Images embedded in various formats
- Need to respect robots.txt (even though it's our own site)

Implementation:
- Polite scraping: 1.5 second delay between requests
- Try multiple content selectors (different templates)
- Exponential backoff on failures
- Detailed logging for debugging

Works perfectly for full migrations. Takes time, but gets everything.

**Strategy 3: Single Post Migration**

Sometimes you just need to migrate one post. Test the migration. Fix a broken import. Migrate new content.

```bash
npm run migrate:sitemap -- --url=https://www.example-blog.com/blog/my-post
```

Same scraping logic as sitemap mode, but targets one URL. Fast iteration for debugging specific posts.

### The Image Problem

Every blog post has 5-15 images. Photos, diagrams, reference materials. Squarespace hosts them on their CDN.

**Challenge 1: Image download failures**

Network issues. Squarespace rate limiting. Large images timing out. Random 500 errors from their CDN.

Solution: retry logic with exponential backoff. Failed image? Wait 1 second, retry. Failed again? Wait 2 seconds. Then 4, 8, 16, max 30 seconds. Download with retry, track failures, log which images failed for manual recovery, and continue the migration even if some images fail.

Most failures are temporary. Retry logic caught 95% of them. The remaining 5% logged to console for manual download. A sketch of the pattern follows.
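Roughly, the retry loop looks like this. A minimal sketch, not the migrator's exact code; `downloadWithRetry` and the destination-path handling are illustrative, and it assumes Node 18+ for the global `fetch`.

```typescript
// Sketch of download-with-retry using exponential backoff (1s, 2s, 4s, 8s, 16s, max 30s)
import { writeFile } from 'node:fs/promises';

const MAX_RETRIES = 5;
const BASE_DELAY_MS = 1000;
const MAX_DELAY_MS = 30000;

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function downloadWithRetry(url: string, destPath: string): Promise<boolean> {
  for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
    try {
      const res = await fetch(url);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      await writeFile(destPath, Buffer.from(await res.arrayBuffer()));
      return true;
    } catch (err) {
      if (attempt === MAX_RETRIES) {
        // Log for manual recovery and keep the migration going
        console.error(`Failed after ${MAX_RETRIES} retries: ${url}`, err);
        return false;
      }
      // Exponential backoff, capped at MAX_DELAY_MS
      await sleep(Math.min(BASE_DELAY_MS * 2 ** attempt, MAX_DELAY_MS));
    }
  }
  return false;
}
```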
**Challenge 2: Concurrent downloads**

60 posts × 10 images = 600 image downloads. Sequential downloads would take hours.

Concurrent downloads: 5 at a time. Fast enough. Not so fast that we overwhelm Squarespace's servers or my network. Configurable via the imageConcurrency setting. Slow network? Set it to 2. Fast connection? Set it to 10.

**Challenge 3: Image organization**

Where do images go? One giant folder? Post-specific directories?

Decision: post-specific directories. Each post gets an images/blog/post-slug/ folder. Why:
- Clean organization (find images by post name)
- No naming conflicts (two posts can have "cover.jpg")
- Easy to delete a post and all its images together
- Matches HexCMS conventions

**Challenge 4: Image references in Markdown**

Original Squarespace HTML:

```html
<img src="https://images.squarespace-cdn.com/content/v1/abc123/def456/image.jpg" />
```

Converted Markdown needs:

```markdown
![Image description](/images/blog/post-slug/image.jpg)
```

The migrator extracts the image URL from the HTML, downloads it to the local directory, generates the local path, and converts the HTML `<img>` to Markdown with the correct path. All automatic during conversion.

### HTML to Markdown Conversion

Squarespace stores content as HTML with custom styling and classes. HexCMS wants clean Markdown. I used the Turndown library. It works great for basic HTML, but I had to customize it for Squarespace quirks.

**Problem 1: Nested formatting.** Squarespace loves nested `<div>` wrappers with inline styles. Turndown preserves the nesting, and the output looks terrible. Solution: strip Squarespace-specific classes before conversion. Let Turndown handle clean HTML.

**Problem 2: Custom blocks.** Squarespace has custom blocks for galleries, quotes, call-outs. These render as complex HTML structures. Solution: custom Turndown rules for common patterns. Image galleries become lists of images. Block quotes get cleaned up. Code blocks preserve formatting.

**Problem 3: Embedded content.** YouTube embeds. Twitter embeds. Instagram posts. Squarespace wraps these in `<iframe>` tags with their own styling. Solution: preserve the embed codes but clean up the wrapper. Markdown doesn't have native embeds, but Next.js can process them.

### Frontmatter Generation

Every HexCMS post needs YAML frontmatter:

```yaml
---
title: "Post Title"
author: "Blog Author"
publishedAt: "2024-01-15"
excerpt: "First 150 characters of post..."
featuredImage: "/images/blog/post-slug/cover.jpg"
status: "published"
featured: false
tags: ["hobby", "topic-1", "topic-2"]
---
```

Squarespace provides most of this data, just in different formats.

- Title: clean from the HTML `<h1>` or meta tags.
- Author: Squarespace doesn't expose the author in RSS or the sitemap. Set a default author name in config. Configurable per blog.
- Date: extract from the URL slug or meta tags. Squarespace uses a consistent date format in permalinks.
- Excerpt: the first paragraph of content, stripped of HTML, truncated to 150 characters.
- Featured image: the first image in the post content becomes the featured image. Downloaded as cover.jpg.
- Tags: Squarespace mixes tags and categories. Extract both, deduplicate, lowercase for consistency.

A sketch of this assembly step is shown below.
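This is a minimal sketch of the frontmatter assembly, not the migrator's actual code. The field names follow the example above; the scraped `post` shape is hypothetical, and js-yaml is an assumed choice of YAML serializer.

```typescript
// Sketch: build YAML frontmatter from scraped Squarespace data
import yaml from 'js-yaml';

interface ScrapedPost {
  title: string;
  author: string;        // falls back to the configured default author
  publishedAt: string;   // e.g. extracted from the permalink
  slug: string;
  firstImage?: string;   // local path of the first downloaded image
  tags: string[];        // tags and categories, merged
}

function buildFrontmatter(post: ScrapedPost, plainText: string): string {
  const data = {
    title: post.title,
    author: post.author,
    publishedAt: post.publishedAt,
    excerpt: plainText.slice(0, 150),
    featuredImage: post.firstImage ?? '',
    status: 'published',
    featured: false,
    // Squarespace mixes tags and categories; deduplicate and lowercase
    tags: [...new Set(post.tags.map((t) => t.toLowerCase()))],
  };
  return `---\n${yaml.dump(data)}---\n`;
}
```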
### Incremental Migration

First migration run: 60 posts. Takes 15 minutes. Works perfectly. Then I found a bug in image path generation. Fixed it. Now what? Re-migrate all 60 posts?

I added a skipSlugs configuration: a list of post slugs already migrated. The migrator skips them.

```js
skipSlugs: [
  'already-migrated-post',
  'another-old-post',
],
```

Incremental migrations:
1. Migrate the initial batch
2. Find issues in the output
3. Fix the migrator code
4. Add migrated slugs to skipSlugs
5. Re-run the migration (only processes new posts)
6. Repeat until perfect

For this migration, I ran 4 passes. The first got 60 posts. The second fixed image paths (skipped 60, migrated 0). The third caught 3 new posts published during migration (skipped 60, migrated 3). The fourth was final verification.

### Real-World Numbers

Client blog migration:
- 63 total posts
- 487 images downloaded
- 15 minutes total runtime
- 0 manual interventions after setup
- 2 image download failures (recovered via retry logic)

Output:
- 63 clean Markdown files
- YAML frontmatter formatted correctly
- All images local and referenced properly
- Ready for HexCMS without modifications

Comparison to manual migration:
- Manual: 5-10 minutes per post × 63 posts = 5-10 hours
- Automated: 10 minutes setup + 15 minutes migration = 25 minutes
- Time saved: 4.5-9.5 hours

### What I Learned: Migrator Edition

**Dry run mode is critical.** Can't overstate this. Migrations are destructive. You don't want to discover bugs after writing 63 files. Dry run mode shows exactly what will happen without changing anything. I ran dry-run mode probably 50 times during development. Caught issues early. Validated fixes immediately.

**Multiple strategies beat one perfect solution.** RSS migration is fast but incomplete. Sitemap migration is complete but slow. Single-post migration is perfect for debugging. Don't force users into one approach. Give them options. Let them choose based on their situation.

**Logging is your debugging tool.** Network failures. HTML parsing issues. Image download problems. You can't predict every edge case. Verbose logging mode saved me hours. When something failed, logs told me exactly where and why.

```bash
npm run migrate:sitemap -- --verbose
```

Shows every step: fetching the sitemap, parsing URLs, scraping content, downloading images, writing files. Critical for debugging production migrations.

**Retry logic pays off.** Network requests fail. Servers have bad moments. Timeouts happen. Exponential backoff with max retries solved 95% of failures automatically. The 5% that failed logged clearly for manual recovery.

**Parallel downloads with limits.** Sequential: too slow. Unlimited parallel: overwhelms the server. 5 concurrent downloads hit the sweet spot. Fast enough. Polite enough. Configurable for different scenarios.

**Configuration files beat hard-coded values.** Every migration is different. Different blog URL, different output directory, different author name, different concurrency needs. migrator.config.js makes the tool reusable. Took 10 extra minutes to implement. Saved hours on subsequent uses. A sketch of what such a config might look like is below.
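For illustration only: a hypothetical shape for migrator.config.js. The imageConcurrency, skipSlugs, and default-author options are mentioned above; the other field names are guesses rather than the tool's exact schema.

```js
// migrator.config.js - illustrative sketch, not the actual config schema
module.exports = {
  blogUrl: 'https://www.example-blog.com',   // source Squarespace site
  outputDir: './content/blog',               // where Markdown files land
  imagesDir: './public/images/blog',         // local image root
  defaultAuthor: 'Blog Author',              // Squarespace doesn't expose authors
  imageConcurrency: 5,                       // parallel image downloads
  requestDelayMs: 1500,                      // polite scraping delay
  skipSlugs: [
    'already-migrated-post',
    'another-old-post',
  ],
};
```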
### When You Need This

**You're migrating from Squarespace to anything Markdown-based:** HexCMS, Jekyll, Hugo, Gatsby, Next.js with MDX, Astro. If your target needs Markdown and local images, this tool works.

**You have a large blog:** Small blog (5-10 posts)? Manually copy the content. Large blog (50+ posts)? Automation pays off.

**You value your time:** Manual migration is tedious. Copy content, download images, format frontmatter, fix references, repeat 50 times. Automated migration: configure once, run, verify. Use your time for customization and design instead.

**You want repeatability:** Migrating multiple blogs with similar structure? Configure once, reuse everywhere. I can now migrate any Squarespace blog to HexCMS in under 30 minutes. Most of that is configuration and verification, not actual work.

### What's Next for the Tool

Current state: built specifically for Squarespace to HexCMS. Works reliably for that use case. The patterns (retry logic, multi-strategy approach, incremental migration) should work for other Markdown-based targets, but I haven't tested them yet. If you need help migrating off Squarespace, reach out: Hexaxia contact form or LinkedIn.

Future improvements:
- Better error recovery (resume from the last successful post instead of re-processing)
- Support for Squarespace's newer block formats (they keep changing the HTML structure)
- Video migration (currently skips embedded videos, could download them)
- Export validation (verify every image downloaded, every link works, every frontmatter field populated)

But honestly: it works well enough. The client blog migrated perfectly. The tool is reliable for its purpose. Additional features would be nice-to-have, not critical. Software doesn't need to be perfect. It needs to solve the problem. This solves the problem.

### The Bigger Picture

This migrator exists because I built HexCMS. HexCMS exists because I wanted Git-based content management without WordPress security headaches. Building your own tools creates a cascade of related tools. The CMS requires a migrator. The migrator requires error handling. Error handling requires logging. Each piece enables the next.

I spent maybe 8 hours building this migrator. Saved 5-10 hours on the first migration. Will save another 5-10 hours on the next migration. The ROI is already positive.

More importantly: I control the entire stack. Content in Git. CMS serves from Git. Migrator populates Git. No vendor lock-in anywhere. No services that can be deprecated. No APIs that can change. That's the real win.

**About the Author:** Aaron Lamb is the founder of Hexaxia Technologies, specializing in cybersecurity consulting, infrastructure engineering, and AI product development.

**Project Links:**
- HexCMS Architecture Post
- Need migration help? Contact Hexaxia
## Building True Offline-First PWAs: Architecture Patterns That Actually Work

I built a PWA for field sales tracking as a proof-of-concept. The requirements seemed straightforward: track store visits, capture product photos, generate reports. Standard CRUD app stuff.

Then I started field testing. The app broke constantly. WiFi in warehouses is terrible. Cellular data drops when moving between locations. Users would capture data, see a success message, then hours later discover their work was gone. Upload failures looked like successes. Syncs timed out mid-request. The database corrupted after partial writes.

The core problem: I built a web app that cached assets, not an offline-first app. There's a massive difference. "Works offline sometimes" is not the same as "offline-first." Most PWAs bolt caching onto a network-dependent architecture. It breaks the moment conditions get real.

I had to rebuild from scratch with one principle: assume the network doesn't exist, then add sync as a bonus when it shows up.

Note on this project: This was a proof-of-concept that ultimately didn't move to production. The client decided not to proceed with the project. That said, the lessons learned about offline-first architecture are broadly applicable, and the patterns work. Some security trade-offs (like long-lived device tokens) were acceptable for a POC but would need more hardening for a production system handling sensitive data.

### Why Most PWAs Fail at Offline-First: The Cascade of Breaking

Here's what I learned the hard way.

**Problem 1: The UI lies to users.** A user captures data offline. The app shows a success message. Looks saved. Eight hours later, they realize it's gone. The save worked locally, but the sync failed silently. No error. No warning. Their work just disappeared. This happened constantly in early testing. I thought handling the IndexedDB save was enough. Wrong. The sync is where everything breaks, and users never see it happen.

**Problem 2: Authentication stops working.** You need to be authenticated to call the API. But you need to call the API to get authenticated. Session tokens expire after 30 minutes. Refresh tokens require network calls. OAuth redirects are impossible offline. I initially used JWT tokens with 1-hour expiration. Field workers would go offline for 2-3 hours. When connectivity returned, their tokens were expired. They'd get logged out mid-sync, losing their upload progress. Brutal user experience.

**Problem 3: Conflict resolution is harder than you think.** Two devices work offline. Both modify the same record. Which one wins when they sync? Last-write-wins sounds simple. It causes data loss. User A updates field X, User B updates field Y, and the last sync deletes one of those changes. Conflict resolution UIs sound sophisticated. Users don't understand them. They just want their data to be there.

**Problem 4: Partial failures corrupt everything.** An upload starts. Five images succeed. The network drops. The sixth image fails. Now what? The user sees "5 of 6 uploaded" but doesn't know which one failed. Retry uploads all six, causing duplicates. Or worse, the local database marks everything as synced, leaving one image orphaned forever.

Most teams compromise at this point. A "requires internet connection" disclaimer. Offline mode is read-only. Elaborate conflict UIs that confuse users. That's not offline-first. That's giving up.

### Dexie.js Schema Versioning: Never Break Offline Users

I tried using IndexedDB directly. Terrible mistake. The API is callback hell.
Schema migrations require manually tracking version numbers. One mistake corrupts the entire database. Users lose everything.

Dexie.js saved me weeks of debugging. Clean Promise-based API, automatic migrations, better error handling. But here's what I didn't understand initially: you can never do destructive schema changes in an offline-first app.

Think about it. A user goes offline for a week. They're running version 1 of your schema. Meanwhile, you ship version 3 which removes a field they're actively using. When they sync, chaos. The solution: only add, never remove.

**Additive-only schema migrations**

```typescript
// IndexedDB setup with Dexie for offline-first storage
import Dexie, { type EntityTable } from 'dexie';

const db = new Dexie('AppDB') as Dexie & {
  sessions: EntityTable<Session, 'id'>;
  products: EntityTable<Product, 'id'>;
  syncQueue: EntityTable<SyncQueue, 'id'>;
  deviceToken: EntityTable<DeviceToken, 'id'>;
  stockItems: EntityTable<StockItem, 'id'>;
};

// Version 1: Original schema
db.version(1).stores({
  sessions: 'id, storeName, storeLocation, repName, visitDate, createdAt, status, synced',
  products: 'id, sessionId, productName, createdAt, synced',
});

// Version 2: Add sync queue and product sync fields
db.version(2).stores({
  sessions: 'id, storeName, storeLocation, repName, visitDate, createdAt, status, synced',
  products: 'id, sessionId, productName, createdAt, synced, syncStatus, lastSyncAttempt',
  syncQueue: 'id, productId, sessionId, status, createdAt',
});

// Version 3: Add device token for API auth
db.version(3).stores({
  sessions: 'id, storeName, storeLocation, repName, visitDate, createdAt, status, synced',
  products: 'id, sessionId, productName, createdAt, synced, syncStatus, lastSyncAttempt',
  syncQueue: 'id, productId, sessionId, status, createdAt',
  deviceToken: 'id',
});

// Version 4: Add shift start/end times to sessions
db.version(4).stores({
  sessions: 'id, storeName, storeLocation, repName, visitDate, shiftStart, shiftEnd, createdAt, status, synced',
  products: 'id, sessionId, productName, createdAt, synced, syncStatus, lastSyncAttempt',
  syncQueue: 'id, productId, sessionId, status, createdAt',
  deviceToken: 'id',
});

// Version 5: Add stock items for inventory tracking
db.version(5).stores({
  sessions: 'id, storeName, storeLocation, repName, visitDate, shiftStart, shiftEnd, createdAt, status, synced',
  products: 'id, sessionId, productName, createdAt, synced, syncStatus, lastSyncAttempt',
  syncQueue: 'id, productId, sessionId, status, createdAt',
  deviceToken: 'id',
  stockItems: 'id, sessionId, stockName, createdAt',
});
```

Why this works: Version 1 creates the base tables. Version 2 adds sync tracking fields. Version 3 adds device tokens. Version 4 adds timestamps. Version 5 adds inventory. Nothing ever gets removed. Nothing ever breaks. When a user on version 1 opens the app a week later, Dexie runs migrations 2, 3, 4, and 5 sequentially. Their old data stays intact, it just gets new fields with sensible defaults.

I tried destructive migrations early on. Added a field in version 2, removed it in version 3 thinking I didn't need it. Bad idea. Users who went offline between versions 2 and 3 had data that couldn't sync. The server rejected it. I spent a day writing migration logic to handle the missing field. Never again.

Now I only add, never remove. New fields get defaults. New tables start empty. Old clients ignore new fields they don't understand. New clients fill in missing fields from old data. Everything stays compatible.
**Server-side compatibility:** The server needs to handle clients running different schema versions. When data syncs, the client sends its version number. The server fills in missing fields with defaults. Simple pattern, because additive migrations mean old data is just a subset of new data.

```typescript
// Helper function to create a product with schema-aware defaults
export async function createProduct(product: Product): Promise<string> {
  // Set default syncStatus to 'pending' if not provided
  // Older schema versions don't have this field
  const productWithDefaults = {
    ...product,
    syncStatus: product.syncStatus || 'pending' as const,
  };

  await db.products.add(productWithDefaults);
  return product.id;
}
```

This pattern means the app can evolve its data model without breaking offline users. When you need to add functionality, you add tables and fields. When you need to change behavior, you add status flags and feature toggles. The schema grows, but it never breaks backward compatibility.

### Exponential Backoff Sync: Stop Killing Batteries

My first sync implementation checked for connectivity every 10 seconds. Simple. Reliable. And it killed the battery in 4 hours.

Field workers complained immediately. They'd start their shift at 8 AM, and the battery would be dead by noon. The constant network polling drained power even when the requests failed instantly.

I tried the opposite: a manual sync button. Users had to remember to tap "Sync" when they saw connectivity. They forgot. Data sat in the queue for hours. When they finally remembered to sync, they'd have 50 items backed up and no idea which ones mattered.

The solution: exponential backoff. Failed sync? Wait 1 second, retry. Failed again? Wait 2 seconds, retry. Failed again? Wait 4, then 8, then 16, max 30 seconds. If the network is truly down, you stop hammering it. If it's intermittent, you catch brief windows of connectivity quickly.

```typescript
// Sync queue processing with exponential backoff
const MAX_RETRIES = 5;
const BASE_BACKOFF_MS = 1000; // 1 second
const MAX_BACKOFF_MS = 30000; // 30 seconds

/**
 * Calculate exponential backoff delay
 * 1s, 2s, 4s, 8s, 16s, max 30s
 */
function getBackoffDelay(retryCount: number): number {
  const delay = BASE_BACKOFF_MS * Math.pow(2, retryCount);
  return Math.min(delay, MAX_BACKOFF_MS);
}

/**
 * Check if item should be retried based on backoff timing
 */
function shouldRetryNow(item: SyncQueue): boolean {
  if (!item.lastSyncAttempt) {
    return true; // No previous attempt, process now
  }

  const backoffDelay = getBackoffDelay(item.retryCount);
  const lastAttemptTime = new Date(item.lastSyncAttempt).getTime();
  const timeSinceLastAttempt = Date.now() - lastAttemptTime;

  return timeSinceLastAttempt >= backoffDelay;
}
```

Battery life improved dramatically. Users could work full 8-hour shifts without charging.

**Track sync state explicitly:** Every item in the queue has a status: pending, syncing, or failed. Prevents duplicate uploads. Shows users exactly what's happening.
```typescript
/**
 * Process a single sync queue item
 */
async function processSyncItem(item: SyncQueue): Promise<boolean> {
  try {
    // Check if we should retry now (respecting backoff)
    if (!shouldRetryNow(item)) {
      return false; // Skip this item for now
    }

    // Check if exceeded max retries
    if (item.retryCount >= MAX_RETRIES) {
      console.warn(`Max retries exceeded for queue item ${item.id}`);
      await updateSyncQueueItem(item.id, {
        status: 'failed',
        lastSyncAttempt: new Date().toISOString(),
        uploadError: 'Max retries exceeded',
      });
      await updateProduct(item.productId, {
        syncStatus: 'failed',
        uploadError: 'Max retries exceeded',
      });
      return false;
    }

    // Update status to syncing
    await updateSyncQueueItem(item.id, {
      status: 'syncing',
      lastSyncAttempt: new Date().toISOString(),
    });

    // Upload image
    const cloudUrl = await uploadImage(item.imageData, item.productId, item.sessionId);

    // Update product with cloud URL and mark as synced
    await updateProduct(item.productId, {
      imageUrl: cloudUrl,
      syncStatus: 'synced',
      synced: true,
      lastSyncAttempt: new Date().toISOString(),
      uploadError: undefined,
    });

    // Remove from sync queue
    await removeSyncQueueItem(item.id);
    return true;
  } catch (error) {
    // Update sync queue item with failure
    await updateSyncQueueItem(item.id, {
      status: 'failed',
      retryCount: Math.min(item.retryCount + 1, MAX_RETRIES),
      lastSyncAttempt: new Date().toISOString(),
      uploadError: error instanceof Error ? error.message : 'Unknown error',
    });
    return false;
  }
}
```

What happens after days offline: the app doesn't try to upload 100 items simultaneously when connectivity returns. It processes them one by one, respecting the backoff delays. If one item fails, the others keep going. Much easier to debug.

Bonus: exponential backoff naturally handles API rate limiting. If the server starts returning 429 errors, the delays automatically throttle the client.

### Service Worker Authentication: The Catch-22

Traditional auth breaks offline: you need credentials to call the API, but you need the API to get credentials. Session tokens expire after 30 minutes. Users go offline for 3 hours. Tokens are expired when they reconnect. Auth fails, sync fails, users lose data.

JWT refresh tokens don't help. Refreshing requires an API call. You can't make API calls offline. OAuth redirects are impossible without connectivity.

**Device tokens solve this:** Generate a long-lived token on first app launch. Store it in IndexedDB. Include it in every API request. The server tracks which tokens belong to which users and can revoke them remotely if needed. Not as secure as short-lived sessions, but offline-first requires trade-offs. The alternative is an app that can't function offline at all.

```typescript
// Device token generation and storage
const DEVICE_TOKEN_ID = 'device-token';

/**
 * Get or create a device token for API authentication.
 * The token is generated once per device/browser and stored in IndexedDB.
 */
export async function getOrCreateDeviceToken(): Promise<string> {
  const existing = await db.deviceToken.get(DEVICE_TOKEN_ID);
  if (existing) {
    return existing.token;
  }

  // Generate a new token
  const token = `pt_${crypto.randomUUID().replace(/-/g, '')}`;
  const deviceToken: DeviceToken = {
    id: DEVICE_TOKEN_ID,
    token,
    createdAt: new Date().toISOString(),
  };

  await db.deviceToken.add(deviceToken);
  return token;
}
```

**Service workers need the token too:** Service workers run background sync. They need auth credentials. But they can't access localStorage or cookies. Only IndexedDB works in both contexts.
```javascript
// Service worker authentication with IndexedDB
const DB_NAME = 'AppDB';
const TOKEN_STORE = 'deviceToken';
const TOKEN_ID = 'device-token';

// Get device token from IndexedDB in service worker context
async function getDeviceToken() {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open(DB_NAME);

    request.onerror = () => {
      console.error('[Service Worker] Failed to open IndexedDB');
      resolve(null);
    };

    request.onsuccess = (event) => {
      const db = event.target.result;

      // Check if the store exists
      if (!db.objectStoreNames.contains(TOKEN_STORE)) {
        console.warn('[Service Worker] Token store not found');
        db.close();
        resolve(null);
        return;
      }

      const transaction = db.transaction(TOKEN_STORE, 'readonly');
      const store = transaction.objectStore(TOKEN_STORE);
      const getRequest = store.get(TOKEN_ID);

      getRequest.onsuccess = () => {
        db.close();
        if (getRequest.result && getRequest.result.token) {
          resolve(getRequest.result.token);
        } else {
          resolve(null);
        }
      };

      getRequest.onerror = () => {
        db.close();
        resolve(null);
      };
    };
  });
}

// Background sync using device token
self.addEventListener('sync', (event) => {
  if (event.tag === 'sync-images') {
    event.waitUntil(
      syncImages().then(() => {
        // Notify clients that sync completed
        return self.clients.matchAll().then((clients) => {
          clients.forEach((client) => {
            client.postMessage({ type: 'SYNC_COMPLETE', success: true });
          });
        });
      })
    );
  }
});

// Process sync queue with authentication
async function syncImages() {
  const token = await getDeviceToken();

  if (!token) {
    console.warn('[Service Worker] No device token available');
    return { success: false, error: 'No device token' };
  }

  const response = await fetch('/api/sync', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`,
    },
  });

  if (!response.ok) {
    throw new Error(`Sync failed with status: ${response.status}`);
  }

  return response.json();
}
```

**Security trade-offs:** Device tokens are less secure. A compromised token means permanent access until revoked. But the alternative is an app that locks users out when offline. That's worse. For the POC, this trade-off was acceptable. For production systems handling sensitive data, you'd want additional hardening:

- Server-side token revocation when devices are lost
- Device fingerprinting to detect suspicious usage
- Re-authenticate periodically when online (but never block offline work)
- Rate limiting and anomaly detection server-side
- Additional encryption layers for sensitive data in IndexedDB

Offline-first means graceful degradation: full security online, reduced security offline, never complete lockout.

### Client-Side Operations: The Server Can't Help You

Traditional web apps offload heavy work to the server. Image processing, document generation, complex calculations - all server-side. Offline-first flips this completely. If it requires a server, it doesn't work offline.

**Image compression:** Users capture 20-30 product photos per shift. 3MB each. That's 60-90MB per day over cellular connections. The initial version uploaded raw photos and compressed them server-side. Problem: users couldn't save photos offline. They'd capture images, lose connectivity, and the photos never uploaded. Solution: compress client-side before saving to IndexedDB.
```typescript
// Client-side image compression
import imageCompression from 'browser-image-compression';

const compressionOptions = {
  maxSizeMB: 0.5,          // Max 500KB
  maxWidthOrHeight: 800,   // Max dimension
  useWebWorker: true,
  fileType: 'image/jpeg' as const,
  initialQuality: 0.8,
};

export async function compressImage(file: File): Promise<File> {
  try {
    const compressedFile = await imageCompression(file, compressionOptions);
    return compressedFile;
  } catch (error) {
    console.error('Image compression failed:', error);
    // Return original if compression fails
    return file;
  }
}

export async function fileToBase64(file: File): Promise<string> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.readAsDataURL(file);
    reader.onload = () => resolve(reader.result as string);
    reader.onerror = (error) => reject(error);
  });
}
```

Result: a 3MB photo becomes 400KB. 80-90% smaller. Sync time drops from 30 seconds to 3 seconds. Battery life improves. Data costs drop by 85%. Users don't notice the quality loss. 800px is plenty for product photos viewed on phones.

**Document generation:** Field workers need to generate reports from collected data at the end of a shift. The traditional approach: send data to the server, the server generates a Word doc, the user downloads it. This breaks offline. You can't generate a report without connectivity. Solution: generate documents client-side. The browser creates the Word doc using docx.js. The user downloads it immediately, no network required.

```typescript
// Client-side Word document generation
import { Document, Paragraph, TextRun, Packer, HeadingLevel, AlignmentType } from 'docx';
import { format } from 'date-fns'; // date formatting (assumed date-fns based on the format string)

export async function generateReport(session: SessionWithProducts): Promise<Blob> {
  const doc = new Document({
    sections: [{
      properties: {},
      children: [
        // Title
        new Paragraph({
          text: 'FIELD REPORT',
          heading: HeadingLevel.HEADING_1,
          alignment: AlignmentType.CENTER,
        }),
        // Session info
        new Paragraph({
          children: [
            new TextRun({ text: 'Location: ', bold: true }),
            new TextRun(session.storeLocation),
          ],
        }),
        new Paragraph({
          children: [
            new TextRun({ text: 'Date: ', bold: true }),
            new TextRun(format(new Date(session.visitDate), 'MMMM d, yyyy')),
          ],
        }),
        // Products data
        ...session.products.map((product, index) =>
          new Paragraph({
            children: [
              new TextRun({ text: `${index + 1}. ${product.productName}`, bold: true }),
            ],
          })
        ),
      ],
    }],
  });

  return Packer.toBlob(doc);
}
```

Users work their entire shift offline. Capture data, compress images, generate reports, download them. Zero network required.

Why client-side is the only option: anything that requires the server doesn't work offline. Period. Trade-offs: larger bundle size, more complex client code. But modern browsers handle this well. WebWorkers prevent UI blocking. Performance is good enough.

### Testing: DevTools Lies About Real Network Conditions

Chrome DevTools has an "offline" checkbox. It's barely useful. Real networks are messier:

- Intermittent: works 30 seconds, drops 2 minutes, comes back
- Slow: requests take 60+ seconds then time out
- Partial: some requests succeed, others fail randomly
- Rate limited: the server rejects too many requests

What actually works: create custom network profiles in DevTools. "Warehouse WiFi": 100ms latency, 500Kbps, 5% packet loss. "Moving vehicle": 500ms latency, drops every 30 seconds. More important: field test with real devices in real conditions. The app works great on simulated 3G, then breaks on actual rural 3G. The simulation doesn't capture everything.

**Service worker debugging:** Service workers are invisible. Errors don't surface in the console. You need extensive logging. Log every cache hit, cache miss, network request, and sync event. Send logs to analytics in production. You can't debug offline issues without visibility into what's happening on user devices. A minimal sketch of such a helper follows.
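Something like this is enough to start with. A sketch only, not the POC's actual code; the `swLog` name and message shape are illustrative.

```javascript
// Minimal logging helper for the service worker context. Mirrors log entries
// to open tabs; a production version could also batch them to an analytics endpoint.
async function swLog(event, detail = {}) {
  const entry = { ts: new Date().toISOString(), event, ...detail };
  console.log('[Service Worker]', entry);
  const clients = await self.clients.matchAll({ includeUncontrolled: true });
  clients.forEach((client) => client.postMessage({ type: 'SW_LOG', entry }));
}

// Usage inside fetch/sync handlers, e.g.:
//   swLog('cache-hit', { url: request.url });
//   swLog('sync-start', { tag: event.tag });
```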
**IndexedDB inspection:** Build admin tools to inspect sync queues remotely. When a user reports issues, you need to see what's stuck in their queue. The DevTools Application tab works for local testing, but you need production visibility.

### What I Learned: Mistakes and Corrections

What worked:
- Device tokens from day one. Didn't retrofit them later. Saved months.
- Sync queue as a first-class data structure. Not an afterthought. Made failures visible and debuggable.
- Additive-only migrations. Never broke offline users.
- Client-side image compression. Sync completion jumped from 60% to 95%. Data usage dropped 80%.

Mistakes I made:
- Mistake: Sync every 60 seconds automatically. Result: Batteries died in 4 hours. Fix: Exponential backoff + a manual sync button for urgent cases.
- Mistake: Last-write-wins conflict resolution. Result: Users lost data when multiple devices synced. Fix: Keep full version history, let users resolve conflicts when needed (but make auto-resolution good enough that manual is rare).
- Mistake: Put too much logic in the service worker. Result: Hit size limits, couldn't access needed APIs. Fix: The service worker does minimal work (cache, sync coordination). The main app handles business logic.

What to skip:
- Don't build elaborate conflict resolution UIs. Users won't use them. Last-write-wins + metadata is usually enough.
- Don't optimize bundle size early. Get offline functionality working first.

What's critical:
- Invest in logging and observability. Offline-first apps fail in ways you can't reproduce locally. You need visibility into production devices.
- IndexedDB queries are slower than SQL. Use indexed fields for frequent queries. Do complex filtering client-side.
- Service workers add complexity. But for offline-first, the trade-off is worth it.

### When Offline-First Is Worth the Complexity

Don't build offline-first unless you actually need it. It's complex. More code. Larger bundles. Harder testing. Careful data modeling.

You need it if:
- Users work in unreliable connectivity. Field workers. Warehouses. Rural areas. International travel. Remote locations.
- Users have unreliable devices. Older smartphones. Budget tablets. Limited data plans.
- Users perform critical workflows that can't be interrupted. Emergency responders. Field technicians. Sales reps capturing orders.

When you don't need it: users mostly have good connectivity, occasional offline is acceptable, or users work on desktops with reliable office WiFi. For these cases: cache static assets aggressively, use optimistic UI updates, handle errors gracefully. Don't build full offline-first.

What's next: the Background Sync API is maturing. Periodic Background Sync will enable better automatic sync. Better conflict resolution (CRDT-style operational transforms) is possible, but the complexity often isn't worth it for simple data models.

Offline-first is a commitment. It shapes every architectural decision. But for users who need it, it's the difference between productive work and constant frustration.

This POC ran successfully during testing. Field workers tested it for several weeks. Zero data loss during that period. Sync completion hit 95%+. Battery lasted full shifts. The client decided not to move forward with the project for business reasons unrelated to the technical implementation.
But the architectural patterns work, and the lessons apply to any offline-first scenario. Worth the complexity if you need it.

**About the Author:** Aaron Lamb is the founder of Hexaxia Technologies, specializing in cybersecurity consulting, infrastructure engineering, and AI product development.
## When Your AI Assistant Wastes 3 Attempts: Building Better Interfaces

I work closely with an AI assistant across many aspects of my business: strategy, operations, and technical development. It excels at architecture, debugging, writing code. But I noticed a pattern: certain technical tasks resulted in multiple failed attempts before success.

The symptom: 3 consecutive errors trying to do one simple thing. The diagnosis: my tools weren't designed for AI collaboration. The fix: 10 minutes of wrapper scripting eliminated the entire problem class.

### The Problem: 3 Failed Attempts

I asked the AI assistant to add some documents to my RAG (Retrieval Augmented Generation) system. Simple task. Here's what happened:

```bash
# Attempt 1: Wrong Python command
python episodic_memory.py --add file.md
# Error: command not found

# Attempt 2: Missing virtual environment
python3 episodic_memory.py --add file.md
# Error: ModuleNotFoundError: No module named 'chromadb'

# Attempt 3: Wrong script entirely
source venv/bin/activate && python episodic_memory.py --add file.md
# Error: unrecognized arguments: --add
```

The actual command needed:

```bash
cd .ai/rag && source venv/bin/activate && \
  python embed_documents.py --files ../../output/file.md
```

The AI assistant eventually got there, but wasted significant time (and my API budget) failing first.

### Why This Happens

AI assistants are excellent at:
- Understanding intent
- Writing new code
- Debugging logical errors
- Following patterns

AI assistants struggle with:
- Remembering exact CLI syntax (which vs. where)
- Environment setup (is it python or python3?)
- Subtle distinctions (two scripts with similar names, different args)
- Undocumented conventions (must cd to a specific directory first)

The issue isn't the AI. The issue is my interface was designed for humans who learn once, not AI that starts fresh every conversation.

### The Root Cause: Two Similar Scripts

My RAG system had two Python scripts.

episodic_memory.py stores conversation history:

```bash
python episodic_memory.py --store \
  --state "User asked X" \
  --action "Did Y" \
  --outcome "Result Z"
```

embed_documents.py embeds documents for search:

```bash
python embed_documents.py --files path/to/file.md
```

Both lived in .ai/rag/, both required venv activation, both processed markdown files. The AI assistant couldn't reliably pick the right one. Add in:
- Ubuntu's python vs python3 quirk
- The virtual environment activation requirement
- Different working directory expectations
- Similar but incompatible argument syntax

You get 3 failed attempts.

### The Solution: Single Entry Point

I created a simple wrapper script:

```bash
#!/bin/bash
# ./rag-cli - Single entry point for all RAG operations

# Directory this script lives in
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
VENV_PYTHON="$SCRIPT_DIR/venv/bin/python3"

case "${1:-help}" in
  embed)
    shift
    "$VENV_PYTHON" "$SCRIPT_DIR/embed_documents.py" "$@"
    ;;
  episode)
    shift
    "$VENV_PYTHON" "$SCRIPT_DIR/episodic_memory.py" "$@"
    ;;
  search)
    shift
    "$VENV_PYTHON" "$SCRIPT_DIR/search.py" "$@"
    ;;
  health)
    "$VENV_PYTHON" "$SCRIPT_DIR/embed_documents.py" --health-check
    ;;
  *)
    echo "Usage: ./rag-cli [embed|episode|search|health]"
    ;;
esac
```

New interface:

```bash
./rag-cli embed --files output/file.md
./rag-cli episode --list
./rag-cli search "query"
./rag-cli health
```

What this eliminates:
- ❌ No more python vs python3 confusion
- ❌ No more venv activation required
- ❌ No more choosing between scripts
- ❌ No more working directory issues
- ❌ No more syntax guessing

Result: Zero failed attempts since implementation.
### The Documentation Fix

I also added a quick reference file the AI reads first.

QUICK-REF.md:

````markdown
# Quick Reference for AI Assistant

**Use the wrapper script `./rag-cli` instead of calling Python directly.**

## Common Operations

### Embed Documents

```bash
cd .ai/rag && ./rag-cli embed --files ../../output/filename.md
```

### Search

```bash
cd .ai/rag && ./rag-cli search "your query"
```

## Common Mistakes to Avoid

- Don't call Python directly - use the wrapper
- Don't mix up the two systems - documents ≠ episodes
- Don't try an "add" command - use --store (for episodes) or embed (for documents)
````

Now when the AI assistant needs to use RAG, it:
1. Reads QUICK-REF.md (3 seconds)
2. Uses the correct command (first try)
3. Moves on

### Broader Lessons: AI-Friendly Tooling

This pattern applies beyond RAG systems. If you're building tools that AI assistants will use:

**1. Single entry points beat multiple scripts**

Instead of:

```bash
python analyze_data.py --input file.csv
python transform_data.py --input file.csv --output transformed.csv
python validate_data.py transformed.csv
```

Use:

```bash
./data-tools analyze file.csv
./data-tools transform file.csv -o transformed.csv
./data-tools validate transformed.csv
```

**2. Consistent argument patterns**

AI struggles with subtle variations: --file vs --files vs --input, --state vs --status vs --condition. Pick one convention. Stick to it everywhere.

**3. Self-documenting help text**

```bash
./tool help
./tool [command] --help
```

Should show: available commands, common examples, expected argument format, and the most common mistakes.

**4. Environment abstraction**

Don't make the AI remember virtual environment activation, working directory requirements, PATH setup, or environment variables. The wrapper handles all of it.

**5. Fail fast with clear errors**

Bad error:

```
Error: unrecognized arguments
```

Good error:

```
Error: unrecognized command 'add'
Did you mean: --store (for episodes) or embed (for documents)?
See ./rag-cli help
```

### Implementation Checklist

Building an AI-friendly CLI wrapper:
- Single executable entry point
- Subcommands instead of separate scripts
- Handle environment setup internally
- Consistent argument naming across commands
- Built-in help text with examples
- Quick reference doc (QUICK-REF.md)
- Clear error messages with suggestions
- Version checking/health commands

Time investment: 30 minutes to build the wrapper + docs. Time saved: hours of debugging + reduced API costs.

### The Tools Addition

After fixing the wrapper issue, I asked what system packages would help. The AI suggested:
- fd-find - better file finding (clearer than find)
- yq - YAML parsing (like jq for YAML)
- bat - syntax highlighting

These aren't for the AI. They're for making bash commands more readable and reliable. Better tooling means fewer edge cases the AI has to handle. Installing them was trivial. Not installing them would be penny-wise, pound-foolish.

### Results

Before the wrapper:
- 3 failed attempts to embed 3 files
- ~2 minutes wasted per RAG operation
- Frequent context switching to debug

After the wrapper:
- 0 failed attempts (100+ operations since)
- <5 seconds per operation
- No debugging needed

ROI: 30 minutes to build. Saved 2+ hours in the first week. Eliminated an entire error class.

### Takeaway

When you see your AI assistant making the same mistakes repeatedly, the problem isn't the AI. The problem is your interface wasn't designed for AI collaboration. Simple wrappers, clear docs, consistent patterns. These aren't "nice to have." They're force multipliers for AI-assisted development. The best part? These same improvements make your tools better for humans too.
About the Author: Aaron Lamb is the founder of Hexaxia Technologies, specializing in cybersecurity consulting, infrastructure engineering, and AI product development.
## The Borrowed Trust Strategy: Scaling Referrals Without Burning Relationships

The problem: Cold outbound is broken. AI-generated emails have burned the market. Response rates are near zero. Email service providers are locking down deliverability.

The opportunity: Your satisfied clients each have 500+ LinkedIn connections. Those connections already trust your clients. What if you could systematically reach them?

### What is "Borrowed Trust"?

Borrowed trust is a systematic approach to leveraging existing client relationships for warm outbound. Instead of cold outreach, you strategically access the networks of satisfied clients.

The core insight: People trust recommendations from their connections far more than cold messages from salespeople. This strategy formalizes and scales that dynamic.

### Why This Works Now

LinkedIn deliverability: LinkedIn currently maintains near-100% deliverability vs. declining email deliverability due to Gmail/Yahoo restrictions.

Answer Engine Optimization (AEO): LinkedIn is currently the #2 source cited by Large Language Models (LLMs), predicted to become #1 within 6-12 months. This makes LinkedIn presence critical for getting cited when prospects ask AI for recommendations.

Trust deficit: AI-generated outbound has created massive trust erosion. Borrowed trust bypasses this entirely by leveraging pre-existing relationships.

### Why This Matters for Consultants

At Hexaxia, we've always operated on referrals and reputation. Cold outbound never felt right - it's spammy and burns markets before you can build credibility. But waiting passively for referrals doesn't scale. Borrowed trust solves this: systematic outbound that preserves relationships and leverages trust you've already earned through good work.

### The Complete Workflow

**Step 1: Gather "Raw" Proof**

Collect screenshots or quotes from clients who explicitly praised your service. Screenshots are preferable over text because they're "raw" and "real" proof that's difficult to fake.

Sources:
- LinkedIn testimonials
- Email feedback
- Slack messages
- Video testimonials
- Project completion surveys

Document specific outcomes: What problem did you solve? What measurable result did they achieve? What was the before/after state?

Example: "DDTS went from no online presence to a professional website with booking system, increasing lead generation by 40% in 90 days."

**Step 2: Use the "Connections Of" Filter**

LinkedIn Sales Navigator has a specific filter called "connections of" that identifies prospects who are personally connected to your satisfied clients.

How to use it:
1. Open Sales Navigator
2. Use the "connections of" filter
3. Input the names of clients who provided positive feedback
4. Result: a list of prospects who have mutual trust with your advocates

Why this matters: These aren't cold prospects. They're one degree from someone who vouches for you.

**Step 3: Execute the Outreach**

Message the connected prospects referencing the mutual connection and the proof.

Example script: "Hey [prospect name]. [Mutual connection name] really liked [specific outcome/project]. Would you be interested in learning more about how we achieved [specific result]?"

Alternative for events: "Hey, [connection name] really liked this [webinar/event]. Do you want to get invited to this?"
Why this works:
- Outreach relies on existing trust between the prospect and their connection
- Bypasses the salesperson's credibility problem entirely
- Uses "borrowed trust" instead of building from zero
- Significantly higher response rates than cold outreach

### Real-World Example: The Podcast Strategy

This approach is particularly effective for content creators. Here's how podcast hosts leverage it:
1. Interview a guest on their podcast
2. Use the "connections of" filter for that guest in Sales Navigator
3. Send the recording to those connections
4. Result: gains traction by leveraging the guest's network and credibility

Why it works: You're not selling - you're sharing valuable content their connection participated in.

### Scaling Borrowed Trust: Content Amplification

Next-level play: Create content WITH clients (not just about them).

Examples:
- Case study interview/webinar featuring your client
- "How [Client] solved [Problem]" webinar series
- Virtual roundtable with multiple satisfied clients

Then: Use the "connections of" filter for each participating client.

Outreach: "Hey [name], [client] just did a webinar with us on [topic]. Thought you'd find it valuable given your work in [industry]."

Scale multiplier: One webinar with 3 clients = access to 1,500+ connections (3 x 500 average connections).

### Why This Works at Bootstrap Scale

Traditional cold outbound:
- 1,000 cold emails
- 2% response rate = 20 responses
- 10% conversion = 2 deals
- High effort, low return

Borrowed trust approach:
- 5 satisfied clients x 500 connections = 2,500 prospects
- 10% response rate = 250 warm conversations
- 20% conversion = 50 deals
- Lower volume, higher quality, preserves relationships

### Implementation for Your Business

**Phase 1: Foundation (Weeks 1-2)**
- Get LinkedIn Sales Navigator ($99-149/mo) - needed for the "connections of" filter
- Proof collection: screenshot positive feedback from each satisfied client, document specific outcomes (problem solved + result achieved), and get permission to reference them in outreach

**Phase 2: Systematic Outbound (Weeks 3-4)**

For each satisfied client:
- Use the Sales Navigator "connections of" filter
- Start with 5-10 connections (test and iterate)
- Message using the example script template above
- Track response rates

Content strategy:
- Create detailed LinkedIn articles with case studies
- Include screenshots of results (not just theory)
- Track "saves" and "sends" metrics (not likes)

**Phase 3: Intelligence Layer (Ongoing)**

Social listening:
- Monitor Reddit for raw, unfiltered pain points (r/cybersecurity, r/msp, r/sysadmin, r/netsec)
- Reddit is currently the #1 LLM source - mine it for authentic language
- Use prospect language in your content and messaging

Strategic content:
- Build citation-worthy thought leadership now
- When LinkedIn flips to #1 for LLMs (6-12 months), your content is already embedded in training data

### Critical Success Metrics

Skip vanity metrics. The key insight: if a post receives more saves/sends than reactions (likes/comments), you've hit a real pain point.

Why saves matter:
- Can't be faked - requires actual value perception
- Indicates intent to reference later or share with others
- High save rates identify topics to amplify through thought leadership ads

The flywheel: successful organic content (validated by high save rates) - amplify through thought leadership ads - convert into long-form articles - further index in LLMs - AI sells for you 24/7.

### Key Takeaways

- LinkedIn = near-100% deliverability vs. declining email performance
Implementation for Your Business

Phase 1: Foundation (Weeks 1-2)
- Get LinkedIn Sales Navigator ($99-149/mo) - it's needed for the "connections of" filter
- Proof collection: screenshot positive feedback from each satisfied client, document specific outcomes (problem solved + result achieved), and get permission to reference them in outreach

Phase 2: Systematic Outbound (Weeks 3-4)
- For each satisfied client: use the Sales Navigator "connections of" filter, start with 5-10 connections (test and iterate), message using the example script above, and track response rates
- Content strategy: create detailed LinkedIn articles with case studies, include screenshots of results (not just theory), and track "saves" and "sends" metrics (not likes)

Phase 3: Intelligence Layer (Ongoing)
- Social listening: monitor Reddit for raw, unfiltered pain points (r/cybersecurity, r/msp, r/sysadmin, r/netsec). Reddit is currently the #1 LLM source - mine it for authentic language and use prospect language in your content and messaging.
- Strategic content: build citation-worthy thought leadership now. When LinkedIn flips to #1 for LLMs (predicted within 6-12 months), your content is already embedded in training data.

Critical Success Metrics

Skip vanity metrics. The key insight: if a post receives more saves and sends than reactions (likes and comments), you've hit a real pain point.

Why saves matter:
- They can't be faked - a save requires actual perceived value
- They indicate intent to reference the post later or share it with others
- High save rates identify topics worth amplifying through thought leadership ads

The flywheel: successful organic content (validated by high save rates) → amplify through thought leadership ads → convert into long-form articles → further indexing in LLMs → AI sells for you 24/7.

Key Takeaways

- LinkedIn's near-100% deliverability beats declining email performance
- Borrowed trust beats any message formula because it bypasses the trust deficit
- Screenshots beat text for proof (harder to fake, more authentic)
- The "connections of" filter gives systematic access to warm networks
- Content WITH clients beats content about clients for maximum reach
- Saves beat likes as a success metric (they indicate real value)
- Reddit listening + LinkedIn content = an AEO strategy (present and future LLM indexing)

Resources

- SparkToro.com - audience research and influencer identification
- LinkedIn Sales Navigator - the "connections of" filter is essential
- AEO overview video - https://youtu.be/tMBdA2gkXgk?si=Aoz6txkuUaZuOPVD

About the Author: Aaron Lamb is the founder of Hexaxia Technologies, specializing in cybersecurity consulting, infrastructure engineering, and AI product development.
Aaron Lamb
Building HexCMS: Security and Simplicity Through Git

I built HexCMS because every CMS I evaluated had the same fundamental problem: the more features they added, the more attack surface they created. WordPress, Drupal, even modern headless CMSs - they all prioritize flexibility over security. I wanted the opposite: security first, simplicity always.

The Core Problem with Traditional CMSs

Here's what bothered me about existing solutions:

- WordPress: Great ecosystem, terrible security model. Plugins can do anything. If the database gets compromised, the entire site is gone. Updates break things constantly.
- Headless CMSs (Contentful, Strapi, etc.): Better than WordPress, but they still require authentication, API keys, and admin panels. Every feature is another potential vulnerability, and most are overkill for a simple blog.
- Static site generators (Jekyll, Hugo): Close to what I wanted, but non-technical users can't use them. Writing in Git directly isn't realistic for most content creators.

Why I Built This

I've been in the CMS trenches for over two decades. I started with phpNuke in the early 2000s, lived through the Mambo-to-Joomla split, mastered SharePoint and Zope + Plone, and spent years deep in WordPress development. Eventually, I moved from WordPress to Squarespace just to calm my nerves. I didn't want to wake up to a CVE notification that my blog had been hacked. Squarespace worked, but customizing beyond the defaults took more effort than it was worth once my career had moved beyond web design.

The breaking point came from watching clients struggle. I have a friend who's brilliant in his field but not tech-savvy. Watching him fumble through basic content updates on his WordPress site hit hard. If someone that smart can't update his own website without calling me, the tooling is broken.

The pattern repeated with clients. I'd build secure infrastructure for their operations, but their marketing sites ran on WordPress or enterprise CMSs that required constant maintenance, security updates, and hand-holding. The mismatch was glaring.

About a year and a half ago, I sat down and rethought the entire problem:

- What does a CMS actually need to do?
- Who's managing the content?
- Is it genuinely easy for them (not just "easy for developers")?
- Is it secure by default, not merely secure when configured correctly?

That questioning led to HexCMS.

A note on security: building with Node.js and Next.js isn't without its own challenges. The JavaScript ecosystem moves fast, and dependency vulnerabilities are a constant concern. I've built tooling to monitor and address these issues systematically, but that's a topic for another post on hardening Next.js applications in production.

The HexCMS Approach: Git as the Source of Truth

I made one architectural decision that solved multiple problems: content lives in Git. Everything else is derived. There are two deployment modes.

Mode 1: Git-Only (Pure Simplicity)

Perfect for small to medium blogs:
- Write Markdown in Git (any repo, any branch)
- Push to GitHub/GitLab
- Next.js reads directly from Git and serves content via ISR (sketched below)
- No database required
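To give a feel for Mode 1, here is a minimal sketch of a Next.js App Router page that renders Markdown with ISR. It is not HexCMS's actual code: the content/posts layout, the 5-minute revalidate window, and the gray-matter/remark choices are all illustrative assumptions, and it assumes the content repo is checked out alongside the site at build/deploy time.

```tsx
// app/blog/[slug]/page.tsx - a sketch, not HexCMS's actual implementation.
// Assumes posts live in a checked-out Git repo at ./content/posts/*.md with YAML frontmatter,
// and that gray-matter and remark/remark-html are installed.
import { promises as fs } from "node:fs";
import path from "node:path";
import matter from "gray-matter";
import { remark } from "remark";
import html from "remark-html";

export const revalidate = 300; // ISR: re-derive pages from the Git content every 5 minutes

const POSTS_DIR = path.join(process.cwd(), "content", "posts");

export async function generateStaticParams() {
  const files = await fs.readdir(POSTS_DIR);
  return files
    .filter((f) => f.endsWith(".md"))
    .map((f) => ({ slug: f.replace(/\.md$/, "") }));
}

export default async function PostPage({ params }: { params: { slug: string } }) {
  // Read the Markdown file, split frontmatter from body, and render the body to HTML.
  const raw = await fs.readFile(path.join(POSTS_DIR, `${params.slug}.md`), "utf8");
  const { data, content } = matter(raw);
  const body = await remark().use(html).process(content);
  return (
    <article>
      <h1>{data.title}</h1>
      <div dangerouslySetInnerHTML={{ __html: String(body) }} />
    </article>
  );
}
```

The point of the sketch is the shape of the system: there is no admin surface anywhere in it, just files in Git and a page that derives HTML from them.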
Mode 2: Git + PostgreSQL (Scale + Features)

For large content sets and advanced features:
- A new Markdown post is added to Git
- A webhook triggers a sync to PostgreSQL
- Database migrations import the content and images
- Next.js reads from the database for fast queries and builds
- Git remains the source of truth (the database is a materialized view)

Why PostgreSQL matters:
- Solves the "1000+ post" problem - traditional static generators (Gatsby, Hugo) rebuild everything on every change, making large sites prohibitively slow
- Constant build times - always under 2 minutes regardless of content volume
- Full-text search - database-powered search across all content
- Multi-author workflows - advanced queries for filtering by author, tags, and dates
- Image optimization - store and serve optimized images via PostgreSQL
- Complex queries - related posts, tag clouds, and archive pages run instantly

Why this matters:
- Start simple, scale when needed - stay Git-only until you hit build bottlenecks
- Provider flexibility - Supabase (full feature set) or Neon via Vercel (simple, low-to-mid tier)
- Git is always authoritative - database sync failures just rebuild from Git
- No lock-in - you can switch between modes without losing content

Why this works for security:
- No admin panel to hack - there's no login, no dashboard, no attack surface
- Version control is built in - every change is tracked, auditable, and reversible
- The database is optional - start with pure Git, add PostgreSQL only if you need the performance
- Separation of concerns - content storage (Git) is separate from content delivery (Next.js)
- Stateless by design - if you use a database, it's just a cache; Git is always the source of truth
- No file uploads - Markdown and assets go through Git, which has its own security model

Why this works for simplicity:
- Start with zero infrastructure - just Git + Next.js, no database to set up
- One source of truth - Git is always correct; database sync failures just rebuild from Git
- Progressive complexity - add the database only when scale demands it (typically 100+ posts)
- Automated migrations - the database sync imports content automatically via webhook (roughly what the sketch below does)
- Works with any Git workflow - branches, PRs, reviews, all standard Git operations
- Platform agnostic - GitHub, GitLab, Gitea, self-hosted; it doesn't matter
- Solves the build-time problem - unlike Gatsby/Hugo, which rebuild everything, HexCMS syncs only changed files
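Here is a minimal sketch of what a webhook-triggered sync can look like as a Next.js route handler. It is a simplification of the idea, not HexCMS's sync code: the posts table schema, the env var names, the GitHub-style push payload, and the raw-file fetch (which only works for a public repo) are all assumptions, and image handling is omitted.

```typescript
// app/api/git-sync/route.ts - a sketch of webhook-triggered Git -> PostgreSQL sync.
// The schema, helpers, and env vars here are illustrative assumptions, not HexCMS internals.
import crypto from "node:crypto";
import matter from "gray-matter";
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Verify a GitHub-style HMAC signature so only the repo's webhook can trigger a sync.
function verifySignature(body: string, signature: string | null): boolean {
  if (!signature) return false;
  const expected =
    "sha256=" +
    crypto.createHmac("sha256", process.env.WEBHOOK_SECRET!).update(body).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  return a.length === b.length && crypto.timingSafeEqual(a, b);
}

// Hypothetical helper: pull raw file contents from the Git host at the pushed commit.
async function fetchRawFromGit(repo: string, ref: string, filePath: string): Promise<string> {
  const res = await fetch(`https://raw.githubusercontent.com/${repo}/${ref}/${filePath}`);
  if (!res.ok) throw new Error(`failed to fetch ${filePath}: ${res.status}`);
  return res.text();
}

export async function POST(req: Request) {
  const body = await req.text();
  if (!verifySignature(body, req.headers.get("x-hub-signature-256"))) {
    return new Response("invalid signature", { status: 401 });
  }

  // GitHub push payloads list changed files per commit; sync only those, not the whole repo.
  const payload = JSON.parse(body);
  const changed = new Set<string>(
    payload.commits?.flatMap((c: any) => [...c.added, ...c.modified]) ?? []
  );

  for (const file of changed) {
    if (!file.endsWith(".md")) continue;
    const raw = await fetchRawFromGit(payload.repository.full_name, payload.after, file);
    const { data, content } = matter(raw);
    // Upsert so re-pushes and edits stay idempotent; Git remains the source of truth.
    await pool.query(
      `INSERT INTO posts (slug, title, published_at, body)
       VALUES ($1, $2, $3, $4)
       ON CONFLICT (slug) DO UPDATE SET title = $2, published_at = $3, body = $4`,
      [file.replace(/\.md$/, ""), data.title, data.date, content]
    );
  }
  return Response.json({ synced: changed.size });
}
```

Note what the database is doing here: it never receives writes from an admin UI, only from the Git webhook, which is why a failed or corrupted sync can always be rebuilt from the repo.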
The Problem I Didn't Anticipate

HexCMS worked exactly as designed: secure, simple, Git-based. But it had one fatal flaw for real-world adoption: non-technical users struggled with raw Markdown.

Even developers who understood Git found the pure Markdown workflow cumbersome. Non-technical content creators just wanted to write blog posts - asking them to write raw Markdown syntax, remember frontmatter formatting, commit with descriptive messages, and push to main was not realistic.

I needed a content editor. But I refused to compromise on the core principles:
- No web-based admin panel (introduces attack surface)
- No authentication system (security complexity)
- Git must remain the source of truth (no database mutations)

Enter HexCMS Studio

HexCMS Studio is a local desktop application (an Electron-style Next.js app) that makes HexCMS usable for non-developers while preserving the security model.

What it does:
- WYSIWYG editor (Tiptap) plus a code mode (CodeMirror 6) for Markdown
- Frontmatter form editor - fill in title, date, and tags via a clean form
- Git integration - stage, commit, push, and pull directly from the UI
- Multi-repo support - manage multiple HexCMS sites from one app
- Multi-theme - Light, Dark, Midnight, and Sepia themes for different preferences

Why it works:
- Runs locally only - it requires filesystem access and is never exposed to the internet
- No cloud dependencies - edits happen on your machine; push when ready
- Git workflow preserved - commits go to Git, not a database
- Zero server-side code - it's just a Markdown editor with Git commands

Non-technical users can now write blog posts in a visual editor, click "Publish," and the content goes live - without anyone having to manage an admin panel, an authentication system, or security updates. The sketch below shows roughly what that "Publish" click has to do behind the scenes.
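This is a simplified sketch of the publish flow, not Studio's actual code: the Draft shape and commit message are hypothetical, and the simple-git package is my stand-in for whatever Git integration Studio really uses.

```typescript
// publish.ts - a sketch of what a local "Publish" action boils down to: write the file,
// then stage/commit/push with ordinary Git. No server, no auth system, no admin panel.
import { writeFile } from "node:fs/promises";
import path from "node:path";
import simpleGit from "simple-git";

// Hypothetical shape of what the editor knows about the post being published.
interface Draft {
  repoPath: string;                    // local clone of the HexCMS content repo
  slug: string;
  frontmatter: Record<string, string>; // title, date, tags, ...
  markdown: string;
}

export async function publish(draft: Draft): Promise<void> {
  // Serialize frontmatter + body into a single Markdown file.
  const relPath = path.join("content", "posts", `${draft.slug}.md`);
  const frontmatter = Object.entries(draft.frontmatter)
    .map(([key, value]) => `${key}: ${JSON.stringify(value)}`)
    .join("\n");
  const file = `---\n${frontmatter}\n---\n\n${draft.markdown}\n`;
  await writeFile(path.join(draft.repoPath, relPath), file, "utf8");

  // Standard Git operations; from here HexCMS takes over via ISR (Mode 1)
  // or the push webhook sync into PostgreSQL (Mode 2).
  const git = simpleGit(draft.repoPath);
  await git.add(relPath);
  await git.commit(`post: publish ${draft.slug}`);
  await git.push("origin", "main");
}
```

Because everything funnels through Git, the editor gets versioning, review, and rollback for free, and there is still nothing server-side to attack.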
The Design Philosophy

Building HexCMS taught me that simplicity is a security feature. Every feature you add to a CMS is:
- Another thing that can break
- Another attack vector to defend
- Another migration to manage
- Another permission model to audit

By refusing to add features that compromise the core Git-based model, HexCMS stays small, auditable, and secure.

What HexCMS doesn't have (intentionally):
- User authentication
- Role-based access control
- Web-based media uploads (images go through Git)
- Plugin system
- Admin dashboard
- API keys or tokens

What it does have:
- Git as the source of truth
- Webhook-triggered sync
- PostgreSQL cache for fast reads
- Next.js ISR for CDN-friendly delivery
- Full content versioning (via Git)

Lessons Learned

1. "Secure by default" means removing features, not adding them. I spent more time deciding what NOT to build than what to build. Every feature request got filtered through one question: "Does this compromise Git as the source of truth?"

2. The best admin panel is no admin panel. HexCMS Studio proved that you can have a great editing experience without a web-based admin system. Running locally eliminates entire categories of vulnerabilities.

3. Simplicity scales better than flexibility. HexCMS can't do everything WordPress can do. That's the point. It does one thing well: serve Markdown content from Git. That constraint makes it reliable.

4. Two tools are sometimes better than one. I initially resisted building HexCMS Studio ("just use Git!"). But separating the CMS (server-side) from the editor (client-side) actually made both simpler. HexCMS stayed focused on content delivery; Studio stayed focused on content creation.

Future Enhancements

Some challenges I'm working through as HexCMS matures:

- Multi-author workflows: The HexCMS framework supports multiple authors via Git (branches, PRs, code review), but HexCMS Studio is single-user focused. I'm exploring ways to make collaborative editing feel natural in the Studio app without compromising the Git-first model.
- Image optimization: The HexCMS framework includes Next.js image optimization and PostgreSQL storage for images, while HexCMS Studio handles local file management and commits to Git. I'm still exploring the ideal balance between Git storage (simple) and an external CDN (performance) for large image libraries.
- Real-time preview: HexCMS Studio shows a Markdown preview, but not the actual rendered site. A local Next.js preview would be ideal; I'm weighing the complexity tradeoff.

What's Next

HexCMS is currently in beta testing internally at Hexaxia and powering our production sites. We're refining the workflow and hardening edge cases before a public release.

If you're building a blog or documentation site and want:
- Security without complexity
- A Git-based workflow
- No admin panel to maintain
- Simple deployment (Next.js + Git, optionally + PostgreSQL)
- To start simple and add complexity only when needed

then HexCMS might be worth exploring once we go public.

Status: Beta (internal testing)
Code: GitHub repos coming with the public release
Stack: Node.js, Git webhooks, Next.js, PostgreSQL (optional)

Security note: HexCMS follows Next.js security best practices for API routes and data fetching. I'll cover the specific security architecture in a future post on hardening Next.js applications.

About the Author: Aaron Lamb is the founder of Hexaxia Technologies, specializing in cybersecurity consulting, infrastructure engineering, and AI product development.
Aaron Lamb
The One File That Stopped My AI Agent From Asking "Which Project?"

When you work across 30+ projects, your AI agent's biggest problem isn't intelligence. It's context.

My AI executive assistant kept asking the same questions: "Which project is this for?" "Where does that file live?" "Is XYZ the client company or the internal project?"

That last one actually happened. My AI assistant confused a high-value client with an internal research initiative that shared the same acronym. Not because it was stupid, but because the context was scattered across memory files, old notes, and my head.

The fix: a single structured file that serves as the authoritative source of project truth.

The Problem: Context Confusion

Here's what a typical interaction looked like before:

Me: Check the contract status
My AI assistant: Which contract? I see references to Client-A, Project-B, and several others.
Me: Client A
My AI assistant: Got it. Looking at client-a-app…
Me: No, the contract is in the company folder, not the code folder.

This happened constantly. Every task required 2-3 rounds of clarification. The friction added up fast.

The Solution: A Project Registry

The Project Registry is a structured Markdown file at .ai/memory/projects.md. It contains everything my AI assistant needs to understand my project landscape.

Project Types Matter

Not all projects are the same. A client company is different from a codebase, which is different from a marketing site. I defined six types:

- Company: client/customer entity (contracts, proposals, business docs)
- Code: software development project (source code, deployable)
- Site: website or marketing site
- Product: internal product under development
- Research: R&D and experimental projects
- Personal: personal projects and planning

This distinction is critical. When I say "check the Client A files," my AI assistant now knows to look in the company folder for contracts, not the code folder for source files.

Owner Mapping

Every project has an owner or client, and the registry maps this explicitly:

- Client A: company folder client-a; code projects ClientApp, ClientGPT
- Client B: company folder client-b; no code projects
- Internal: company folder company-internal; code projects ProductX, ProductY, CMS

Now when someone mentions a key contact's project, my AI assistant immediately knows which company it refers to, and whether that company has both a company folder (contracts) and code projects.

Relationship Mapping

Projects don't exist in isolation. They depend on each other, share technology, and sit inside business hierarchies.

Dependencies:
- Blog A → CMS (content management)
- Blog B → CMS (content management)
- AI Assistant → Framework (operational protocols)

Business hierarchy:

Company (parent)
├── AI Division
│   ├── Product X (product)
│   ├── Product Y (product)
│   └── ai-site (marketing site)
├── Media Division
│   └── Event Project (first project)
└── Websites
    ├── main-site (main company)
    └── splash (splash)

Shared tech stacks:
- Next.js + Tailwind: Site A, ClientApp, CMS Studio
- Next.js + CMS: Blog A, Blog B
- Next.js + Database: ProductX, ClientApp

Understanding these relationships means my AI assistant can apply fixes to shared dependencies, understand upgrade impacts, and route questions correctly.

How It Works

The registry is embedded into my RAG (Retrieval-Augmented Generation) system using local Ollama embeddings. When I ask my AI assistant a question, it semantically searches the registry for relevant context before responding.
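Here is a minimal sketch of that pipeline: chunk the registry, embed each chunk through Ollama's local HTTP embeddings endpoint, and rank chunks by cosine similarity at question time. The chunk sizes match the numbers in the next section; the naive character chunking, the in-memory index, and the example query are simplifications of my actual setup (which splits on sentence boundaries).

```typescript
// embed-registry.ts - a simplified sketch of the registry RAG pipeline.
// Assumes Ollama is running locally with the nomic-embed-text model pulled.
import { readFileSync } from "node:fs";

const OLLAMA = "http://localhost:11434";
const CHUNK_SIZE = 1000; // characters per chunk
const OVERLAP = 200;     // characters shared between neighboring chunks

// Naive character chunking with overlap (the real version is sentence-aware).
function chunk(text: string): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += CHUNK_SIZE - OVERLAP) {
    chunks.push(text.slice(start, start + CHUNK_SIZE));
  }
  return chunks;
}

// Call Ollama's embeddings endpoint over HTTP instead of shelling out to the CLI.
async function embed(text: string): Promise<number[]> {
  const res = await fetch(`${OLLAMA}/api/embeddings`, {
    method: "POST",
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  const { embedding } = await res.json();
  return embedding;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function main() {
  // Embed every registry chunk once; re-run after editing the registry.
  const registry = readFileSync(".ai/memory/projects.md", "utf8");
  const index = await Promise.all(
    chunk(registry).map(async (text) => ({ text, vector: await embed(text) }))
  );

  // At question time: embed the query and return the best-matching registry chunks.
  const query = "Which company folder holds the Client A contract?";
  const qVec = await embed(query);
  const top = index
    .map((c) => ({ ...c, score: cosine(qVec, c.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3);

  console.log(top.map((c) => c.text.slice(0, 80)));
}

main();
```

Everything runs on the local machine: the registry text never leaves the box, and re-embedding after an edit is cheap enough to do on every change.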
Example workflow:
- I ask: "What's the status on the construction client's project?"
- My AI assistant searches the RAG index and finds the contact associated with "Client A"
- The registry shows the company folder and code project paths
- My AI assistant reads the relevant files and gives an accurate status update

After the registry:

Me: Check the Client A contract status
My AI assistant: Contract v0.9.1 is drafted, awaiting signatures. $12k total, 20/40/40 payment structure.

That's the difference between an assistant and a friction generator.

Technical Implementation

Embedding strategy:
- 1000-character chunks with 200-character overlap
- Sentence-aware splitting keeps logical units together
- The HTTP API for embeddings (not the CLI) avoids parsing bugs
- Local Ollama (the nomic-embed-text model) for zero API costs

Performance:
- Total embedding time: ~17 seconds for 17 chunks
- Search latency: <150ms
- Cost: $0 (100% local)

Maintenance: adding a new project takes 30 seconds:
- Add an entry to the appropriate section
- Add it to the Quick Reference table
- Update relationships if applicable
- Re-embed

Compare this to training the agent through conversation, which is slow, inconsistent, and doesn't persist across sessions.

Results

Since implementing the project registry:
- Zero project confusion errors in two weeks
- Faster responses - no clarifying questions needed
- Better task routing between company folders and code projects
- Easier onboarding when resuming work after breaks

The registry also serves as documentation for me. When I forget which client uses which tech stack, I check the registry instead of digging through project folders.

Key Lessons

1. Structure beats volume. A well-structured 300-line registry beats a 3,000-line brain dump. The relationships table alone saves more time than pages of prose descriptions.

2. Types are essential. The company/code/site distinction wasn't obvious at first, but an AI agent needs to know the difference between a folder of contracts and a folder of source code.

3. Relationships are high value. The dependency graph and business hierarchy took 10 minutes to write but provide disproportionate value. When my AI assistant understands the organizational structure, it navigates conversations without getting confused.

4. Embed locally. Using local embeddings (Ollama) instead of API-based embeddings means zero marginal cost for updates, no rate limits, data that never leaves the machine, and offline operation. The quality tradeoff is minimal for this use case - we're matching project names and relationships, not doing nuanced semantic analysis.

Pattern for Your Team

The Project Registry pattern is reusable. Any team working with AI agents across multiple projects, clients, or domains would benefit from a similar approach.

Minimum viable registry:
- Project types - define 4-6 types relevant to your work
- Owner mapping - who owns what, and where it lives
- Quick reference table - name, type, path, status
- Relationships - dependencies and hierarchies
- Embed - make it searchable via RAG

Start simple. Add complexity only when you hit friction.

The Real Insight

AI agents are only as good as their context. You can have the most capable model in the world, but if it doesn't know which project you're talking about, it will waste your time asking clarifying questions or make mistakes that cost more time to fix. The smarter the agent, the more it suffers from context gaps.

By building a structured, searchable, authoritative source of project truth, we turned my AI assistant from capable but confused into one that actually knows what I'm talking about.
The best part: this solves a human problem too. The registry is now my go-to reference when I need to remember project details. Good tools make both humans and AI more effective.

About the Author: Aaron Lamb is the founder of Hexaxia Technologies, specializing in cybersecurity consulting, infrastructure engineering, and AI product development.