Thinking about AI and hallucination control

The post discusses AI hallucination, where a model confidently generates false or fabricated information. It explores two main problems: user frustration with incorrect outputs and uncertainty about how to manage these errors over the long term. Using a geodetic network analogy, it explains how AI errors can propagate through a system much like measurement errors propagate through a surveying network, and argues that we need better frameworks for detecting and managing hallucinations.
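To make the surveying analogy concrete (an illustration added here, not taken from the post): in a leveling traverse of $n$ setups with per-setup standard deviation $\sigma$, independent random errors grow only with the square root of the chain length, while a shared systematic bias $\epsilon$ grows linearly:

$$\sigma_{\text{random}} = \sigma\sqrt{n}, \qquad \epsilon_{\text{systematic}} = n\,\epsilon.$$

A chained AI pipeline behaves similarly: uncorrelated mistakes partially cancel across steps, but a bias shared by every step compounds, which is one reason to detect errors as early in the chain as possible.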

Creating architecture diagrams with C4 and AI

In this experiment I tested whether Aider (an AI coding assistant) could automate architecture documentation. After just five minutes and five prompts, it produced a decent C4 diagram for a Streamlit web application. The result was not perfect, but it points to a promising future for AI-assisted documentation.
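For a sense of what such a diagram encodes, here is a minimal hand-written sketch of a C4 container view using the mingrammer `diagrams` library. The tooling choice is mine, not necessarily what Aider emitted, and all container names are hypothetical:

```python
# Minimal C4 container diagram for a hypothetical Streamlit app.
# Requires: pip install diagrams (plus a local Graphviz install).
from diagrams import Diagram
from diagrams.c4 import Person, Container, Database, SystemBoundary, Relationship

with Diagram("Streamlit App - Containers", direction="TB", show=False):
    user = Person(name="Analyst", description="Explores the dashboards")

    with SystemBoundary("Streamlit Web Application"):
        ui = Container(
            name="Web UI",
            technology="Streamlit",
            description="Renders interactive charts and forms",
        )

    db = Database(
        name="Data Store",
        technology="SQLite",
        description="Holds the data behind the dashboards",
    )

    user >> Relationship("Uses [HTTPS]") >> ui
    ui >> Relationship("Reads and writes") >> db
```

Running the script renders a PNG of the diagram. The point is simply that a C4 container view is small and structured enough for an AI assistant to generate from a handful of prompts.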
