Thinking about AI and hallucination control

The post discusses AI hallucination, when an AI generates incorrect information. It explores two main problems: user frustration with wrong outputs and uncertainty about how to manage these errors over the long term. Drawing an analogy to geodetic networks, it explains how AI errors can propagate much like measurement errors in surveying, and argues that we need better frameworks for detecting and containing hallucinations.

Creating architecture diagrams with C4 and AI

In this experiment I tested whether AI can automate architecture documentation, using Aider, an AI coding assistant. After just five minutes and five prompts, it produced a decent C4 diagram for a Streamlit web application. The result is not perfect, but it points to a promising future for AI-assisted documentation.

Playground: A book about climate crisis and AI

"Playground" by Richard Powers explores the interplay between technological ambition and environmental concern, highlighting the tension between progress and preservation through a diverse cast of characters on a remote island.