Sunday, 26 April 2026

Agentic coding

We have all experienced the "maintenance wall"—that point where a hobby coding project becomes so complex that adding a single feature feels like a massive effort and a gamble. For years, my side projects and hobbies lived in this state of stagnation: functional, but fragile and difficult to extend. Everything changed when I integrated Claude Code into my development workflows.

By treating Claude Code as an agentic partner, a co-programmer in the spirit of XP pair programming, rather than a simple chatbot, I moved beyond just "writing code" and began deploying fast, really fast. Before getting started, a quick reminder: these tools can be token-intensive, and you may end up spending a lot of tokens (the new money). Always keep your repository structure in mind, optimise it, and whenever possible feed your repository context to Claude Code to keep interactions efficient and economical. Use CLAUDE.md to restrict scope and .claudeignore to exclude files.
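As a rough sketch of that last point (the patterns below are my own assumptions, tune them for your repository), a gitignore-style .claudeignore can keep bulky, low-signal paths out of the context window:

```
# .claudeignore: exclude token-hungry, low-signal paths from the agent's context
node_modules/
dist/
build/
*.log
data/raw/
```

The same idea applies in reverse in CLAUDE.md: state which directories and conventions matter, so the agent spends tokens on the code that counts.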

Beyond Syntax: Learning Why

The most significant shift in my workflow was transitioning from "fixing bugs" to "improving architectural understanding." I stopped asking how to write a line of code and started asking why my current implementation was failing to scale and what the improvements would be.

By prompting Claude Code to analyze my existing modules and explain them to me, I uncovered limitations I hadn't even realised were there. I learned what I was doing wrong and what important things I was omitting. It pointed out anti-patterns like tightly coupled state management and hidden resource bottlenecks, and it highlighted security flaws, bad session management, and inefficient, bloated code. Patiently, it explained things to me, recommended alternatives, implemented them in my code and helped me deploy them.

Expanding the Horizon

Once I understood the structural limitations of my projects, Claude Code became the engine that allowed me to expand their scope significantly:

  • UI/UX Enhancement: I leveraged the tool to audit my frontend components. By describing my goals for user intuition and accessibility, I received actionable critiques on layout and CSS architecture that transformed my interfaces from "functional" to "polished."

  • Security by Design: Security is often the first casualty in hobby development. I used Claude Code to scan my deployment scripts and configuration files for exposed secrets and insecure defaults. It taught me to bake security into my deployment pipeline as a baseline requirement, not an afterthought.

  • The Refactor Loop: Perhaps the most liberating benefit I saw was the ability to refactor non-stop and fearlessly. By having an agent that understands the relationships across my entire repository, I could rewrite core logic, simplify complex functions, and modularise long, monolithic blocks with the confidence that Claude had analyzed the ripple effects.

  • Velocity Through Testing: With the Model Context Protocol (MCP), I connected my codebase to my testing environment. This bridge allowed Claude to write code that was verified by real-time test execution. Writing code that is "correct by construction" with a safety net of tests drastically increased my development velocity.
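As a concrete (and entirely hypothetical) illustration of that safety net, this is the kind of small, fast test an agent can re-run after every refactor to confirm behaviour hasn't changed:

```python
# A tiny, hypothetical function the agent might refactor...
def normalise_scores(scores):
    """Scale a list of numbers into the 0-1 range."""
    if not scores:
        return []
    lo, hi = min(scores), max(scores)
    if lo == hi:
        return [0.0 for _ in scores]  # avoid division by zero
    return [(s - lo) / (hi - lo) for s in scores]

# ...and the regression test that must stay green after each rewrite.
def test_normalise_scores():
    assert normalise_scores([]) == []
    assert normalise_scores([5, 5]) == [0.0, 0.0]
    assert normalise_scores([0, 5, 10]) == [0.0, 0.5, 1.0]
```

The test is trivial on purpose: it runs in milliseconds, so the agent can execute it after every edit and catch ripple effects immediately.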

The real game-changer was shifting from treating Claude as a chatbot to treating it as an autonomous agent, a guru co-programmer like in the XP pair-programming days. By leveraging CLAUDE.md for persistent architecture rules and creating custom Skills for my repetitive workflows, I've essentially given my projects an 'institutional memory'. I could even start using Agent Teams (hiring more programmers, essentially), orchestrating sub-agents to handle security audits and testing in parallel while I focus on the high-level design. But I haven't done that yet; I'm still wondering what it would do to my carefully counted tokens on my simple subscription 😊

Now the exciting part: using the Model Context Protocol (MCP), the eyes, hands and legs of AI (the tools), I was able to connect Claude Code to my test database, have Claude query the tables, and turn my terminal into a command center of insights that doesn't just write code, but actively manages, tests and executes it. Be careful with this; know what you are doing! To be honest, I don't yet let it near production or let it deploy my code without checking it first; I do that myself with VSCode.

Final Thoughts

Using AI didn't make me a passive coder; it made me a more rigorous one. I spend less time wrestling with syntax, CSS and static files, and more time exploring advanced architectural concepts that previously felt out of reach. If you have a project sitting on the shelf, I encourage you to bring it into the terminal. Just remember to be specific, and keep an eye on your token economy.

Let agentic coding infuse your projects with the velocity to innovate rapidly and the structural strength to evolve fast.

Wednesday, 14 January 2026

Why trustworthy AI isn’t optional - it’s the foundation

In the rapidly evolving landscape of generative AI, data security and data privacy are not just compliance requirements - they are the bedrock of trust and innovation.

After all, who would want to waste time with a dodgy and fake AI? 😑

While data security focuses on protecting AI models and datasets from breaches, tampering, or unauthorised access (through measures like encryption, access controls, and secure APIs), data privacy ensures that the data powering these models is collected, processed, and used ethically and legally. This distinction becomes particularly critical in grounding techniques, where AI models are anchored to external knowledge bases, APIs, or real-time data sources. The more an AI model is grounded in accurate, well-structured data, the fewer inaccuracies and hallucinations it produces.

What is grounding?

In simple terms, grounding connects AI models to external sources of truth - such as sensors, databases, or APIs - to ensure responses are accurate, context-aware, and reliable. Without grounding, AI risks hallucinations or relying on outdated or biased data.

For example, imagine grounding an AI customer support chatbot in a SQL database of product manuals and FAQs. The chatbot queries the database in real-time to provide accurate, up-to-date answers—rather than inventing responses. However, if the database connection isn’t secured with encryption or the data isn’t anonymized, the grounded AI system could expose sensitive customer information or fall victim to prompt injection attacks.
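A minimal sketch of this pattern (table, column names and the FAQ content are invented for illustration) using Python's built-in sqlite3, with a parameterised query so user input never becomes part of the SQL string:

```python
import sqlite3

# Hypothetical FAQ knowledge base the chatbot is grounded in.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE faqs (question TEXT, answer TEXT)")
conn.execute(
    "INSERT INTO faqs VALUES (?, ?)",
    ("how do i reset my password", "Use the 'Forgot password' link on the login page."),
)

def grounded_answer(user_question: str) -> str:
    """Look up an answer in the database instead of letting the model invent one.
    The parameterised placeholder (?) keeps user input out of the SQL itself."""
    row = conn.execute(
        "SELECT answer FROM faqs WHERE question LIKE ?",
        (f"%{user_question.lower()}%",),
    ).fetchone()
    return row[0] if row else "I don't know; escalating to a human agent."

print(grounded_answer("reset my password"))
# prints: Use the 'Forgot password' link on the login page.
```

A real deployment would add an encrypted connection, role-based access and query logging on top of this; the point here is only the shape of the grounding lookup.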

Without robust security, grounded AI systems risk exposing sensitive data or being manipulated through techniques like prompt injection or data poisoning. Without privacy safeguards, they may inadvertently violate regulations like GDPR or CCPA by misusing or retaining personal data embedded in their knowledge sources.

For data architects, data engineers and AI engineers, grounding introduces unique challenges. When an AI model queries external databases, APIs, or data lakes, every interaction must be both secured (e.g., using TLS 1.3, OAuth 2.0, or zero-trust architectures) and privacy-preserving (e.g., via differential privacy, federated learning, or data anonymization). For example, if you're grounding a large language model (LLM) in a relational database or Delta Lake, you MUST ensure that:

  • Security: the connection is encrypted, access is role-based (e.g., RBAC in Snowflake or Azure Synapse), and queries are logged for audits.

  • Privacy: the underlying data is scrambled, anonymised or pseudonymized, and the model only retrieves data it's authorised to use, aligning with the principle of least privilege. Tools like Python's faker or SQL's dynamic data masking can help strip PII from responses, and your grounding frameworks MUST enforce strict data access policies.
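To illustrate the masking idea with a minimal, standard-library-only sketch (a regex-based stand-in, not a substitute for proper dynamic data masking or real PII detection):

```python
import re

# Rough patterns for common PII; production systems need far more robust detection.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def mask_pii(text: str) -> str:
    """Replace emails and phone numbers with placeholder tokens
    before the text reaches the model or the end user."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(mask_pii("Contact jane.doe@example.com or 555-123-4567 for details."))
# prints: Contact [EMAIL] or [PHONE] for details.
```

In practice you would apply a filter like this at the retrieval boundary, so PII never enters the model's context at all.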

The Future: Trustworthy Grounding in AI

The future of AI hinges on trustworthy grounding - where models don't just perform well but also respect data sovereignty and user consent. As you design AI systems that interact with databases, data lakes, or Lakehouses, prioritise:

  • Privacy-by-Design: embed consent checks into API calls and database connections, and ensure data minimisation (authorised data only).

  • Security-by-Default: encrypt even the vector embeddings in your vector databases and all related grounding data assets, and enforce strict access controls.

If you're experimenting with grounding, start by auditing your data sources:

  • Are they secure? 
  • Are they privacy-compliant? 

The answers will define whether your use of AI with your database is not just smart, but also trustworthy.


Thursday, 1 January 2026

Exploring C4 Models with Structurizr DSL, VSCode, and Diagramming Tools

Introduction

As a Data Architect, creating clear and effective diagrams is crucial for communicating and documenting software and data architectures. The C4 model, with its focus on abstraction-first design—a principle I firmly believe is the backbone of software engineering—immediately caught my interest. To explore this further, I recently began experimenting with C4 modeling using Structurizr DSL (DSL = Domain-Specific Language), VSCode, and popular diagramming tools like PlantUML and Mermaid. I used Cairo and Graphviz in the past, but these newer libraries require less tinkering. Here's a look at my journey and the insights I gained along the way while trying the diagrams-as-code approach.

Why C4 Models?

The C4 model is a powerful way to describe software systems at four levels of abstraction: Context, Containers, Components, and Code, often referred to as the "4 Cs." Its simplicity, scalability, and developer-friendly approach make it a perfect fit for both new (greenfield) and existing (brownfield) projects.

Since I prefer to avoid cloud-based tools for a richer experience and more control, I initially set up a local environment using VSCode and Docker on my trusty old but fast Ubuntu laptop. This way, I can create while keeping everything offline and efficient. Looking at it again, I decided that even Docker was overkill: VSCode alone is enough to code and diagram.

My Setup

I took a quick look at the Structurizr DSL Python wrapper, but skipped it; I wanted to dive straight into the native DSL syntax and see my diagrams render with minimal overhead. After all, treating diagrams as code means I can version-control them alongside the rest of the project, keeping everything clean and reproducible.

While I could have spun up Structurizr Lite in a Docker container (because who doesn't love local, self-hosted solutions?), I went lighter: just VSCode extensions to get the job done. My philosophy? No unnecessary layers, no cloud dependencies, just code and diagrams, the way it should be.

These diagram-as-code tools integrate seamlessly with wiki platforms (like Confluence, Notion, or GitLab/GitHub Wikis) and Git repositories, allowing you to embed dynamic, version-controlled diagrams directly in your documentation.

Tools in Action

  • Structurizr DSL: writing diagrams as code in VSCode; for better previews, I ran the Structurizr server on localhost.
  • VSCode: with extensions for PlantUML and Mermaid, I could preview diagrams instantly.
  • PlantUML & Mermaid: both integrated seamlessly with VSCode via extensions, though I found Mermaid's syntax (it has its own markup) more intuitive for quick sketches and wiki integration.

Outcomes

I successfully created Context, Container, and Component diagrams for an imaginary sample project. The ability to generate diagrams locally ensured full control and flexibility, with no SaaS dependency. Here are two examples of what I built:


Figure 1: Output from the Structurizr server running in Docker on localhost:8080, with the DSL code on the left generating the C4 model diagram on the right



Figure 2: Output from the Mermaid VSCode extension, with Mermaid code on the left generating the diagram on the right


Final Thoughts

I find the C4 model, combined with tools like PlantUML and Mermaid, a game-changer for architecture documentation: it shifts the process from static, manual diagrams to code-driven, version-controlled clarity. By leveraging Structurizr DSL in VSCode and pairing it with Mermaid/PlantUML, I've crafted a workflow that's both flexible and precise, giving me full control over how my systems are visualized.

There's something deeply satisfying about coding your diagrams: no more fiddly drag-and-drop or misaligned Bézier curves. Just clean, maintainable DSL and instant visual feedback. I'm officially done with joining rectangles by hand; from now on, it's code all the way.

Sunday, 23 February 2025

Building a Real-Time Weather App with Streamlit and Open-Meteo

see app here: https://data-exchange.streamlit.app/

I recently embarked on a project to build a real-time weather application, and I wanted to share my experience using Streamlit and Open-Meteo. The goal was to create a dynamic web app that provides users with up-to-date weather information and 10-day weather forecasts, all while leveraging the convenience of cloud-based development.

Streamlit: Rapid Web App Development in the Browser

One of the most compelling aspects of Streamlit is its ability to facilitate rapid web application development directly within the browser. For this project, I utilized GitHub Codespaces, which provided a seamless development environment. This eliminated the need for complex local setups and allowed me to focus solely on coding.

Key Advantages of Using GitHub Codespaces:

  • Browser-Based Workflow: All development activities were performed within a web browser, streamlining the process.
  • Dependency Management: Installing necessary Python packages was straightforward using pip install.
  • Version Control: Integrating with Git enabled efficient version control with git commit and push commands.

Data Acquisition with Open-Meteo

To obtain accurate and current weather data, I employed the Open-Meteo API. This API offers a comprehensive set of weather parameters, allowing for detailed data visualization.

Visualizing Weather Data with Streamlit's Graph Capabilities

Streamlit's built-in graph visualization tools proved to be highly effective. Creating dynamic charts to represent weather data was quick and efficient. The clarity and responsiveness of these visualizations significantly enhanced the user experience.

Technical Implementation:

The application was developed using Python, leveraging Streamlit for the front-end and the requests library to interact with the Open-Meteo API. The workflow involved:

  1. Fetching weather data from the Open-Meteo API.
  2. Processing the data to extract relevant information.
  3. Utilizing Streamlit's charting functions to create graphical representations.
  4. Deploying the application via the Streamlit sharing platform.
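The first two steps can be sketched like this (the coordinates are illustrative, and this sketch uses the standard library's urllib where the actual app used requests; the query parameters follow the public Open-Meteo forecast endpoint):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

OPEN_METEO_URL = "https://api.open-meteo.com/v1/forecast"

def fetch_forecast(lat: float, lon: float) -> dict:
    """Step 1: fetch raw forecast JSON from the Open-Meteo API."""
    params = urlencode({"latitude": lat, "longitude": lon,
                        "daily": "temperature_2m_max", "forecast_days": 10})
    with urlopen(f"{OPEN_METEO_URL}?{params}", timeout=10) as resp:
        return json.load(resp)

def extract_daily_max(payload: dict) -> dict:
    """Step 2: reduce the payload to a simple {date: max_temperature} mapping,
    ready for charting."""
    daily = payload["daily"]
    return dict(zip(daily["time"], daily["temperature_2m_max"]))

# Step 3, inside the Streamlit app (hypothetical usage):
#   st.line_chart(extract_daily_max(fetch_forecast(52.52, 13.41)))
```

Keeping the fetch and the reshaping as separate functions also makes the processing step easy to unit-test without hitting the network.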

Observations:

  • Streamlit's simplicity and ease of use allowed for rapid prototyping and development.
  • GitHub Codespaces provided a consistent and reliable development environment.
  • Open-Meteo API provided accurate data.
  • Streamlit sharing made deployment very easy.

Conclusion:

This project demonstrated the power of Streamlit and Open-Meteo for building data-driven web applications. The ability to develop entirely within a browser, combined with powerful visualisation tools, made the development process efficient and enjoyable.

You can view the final app here: https://data-exchange.streamlit.app/


Saturday, 8 February 2025

GitHub Codespaces: A Fast-Track to Development with Minimal Setup

Do you like coding but you hate the scaffolding and prep-work?

As a developer, I often spend a considerable amount of time setting up development environments and project scaffolding before I even write a single line of code. Configuring dependencies, installing tools, and making sure everything runs smoothly across different machines can be tedious. If you find this prep work time-consuming and constraining, then...

Enter GitHub Codespaces 

GitHub Codespaces is a cloud-based development environment that allows you to start coding instantly, in your browser, without the hassle of setting up a local machine!

Whether you’re working on an open-source project, collaborating with a team, or quickly prototyping an idea, Codespaces provides a streamlined workflow with minimal scaffolding.

Why GitHub Codespaces?

  1. Instant Development Environments
    With a few clicks, you get a fully configured development environment in the cloud. No need to install dependencies manually—just launch a Codespace, and it’s ready to go.

  2. Pre-configured for Your Project
    Codespaces can use Dev Containers (.devcontainer.json) to define dependencies, extensions, and runtime settings. This means every team member gets an identical setup, reducing "works on my machine" issues.

  3. Seamless GitHub Integration
    Since Codespaces runs directly on GitHub, pushing, pulling, and collaborating on repositories is effortless. No need to clone and configure repositories locally.

  4. Access from Anywhere
    You can code from a browser, VSCode desktop, or even an iPad, making it an excellent option for developers who switch devices frequently.

  5. Powerful Compute Resources
    Codespaces provides scalable cloud infrastructure, so even resource-intensive projects can run smoothly without overloading your local machine.

A Real-World Example

Imagine you're starting a new Streamlit project for the Streamlit Community Cloud. Normally, you'd:

  • Install Streamlit and other packages
  • Set up a virtual environment
  • Configure dependencies
  • Ensure all team members have the same setup

With GitHub Codespaces, you can define everything in a requirements.txt and .devcontainer.json file and launch your environment in seconds. No more worrying about mismatched Python versions or missing dependencies—just open a browser and start coding. 
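A minimal devcontainer.json along those lines might look like this (the image tag, command and extension list are indicative; check the Dev Containers documentation for your stack):

```json
{
  "name": "streamlit-app",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "postCreateCommand": "pip install -r requirements.txt",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  }
}
```

With this file committed, every Codespace (and every teammate) launches into the same Python version with the project's dependencies already installed.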

See below how I obtained this coding environment to build a Streamlit weather app quickly, and for FREE, using the Streamlit Community Cloud.




All in one browser page: GitHub, the browser edition of VSCode, and access to a free machine on Streamlit Community Cloud via GitHub Codespaces for development.

To see the above app visit https://click-weather.streamlit.app/

Final Thoughts

I think GitHub Codespaces is a game-changer for modern development. It eliminates the friction of setting up environments, making collaboration effortless and speeding up development cycles. If you haven't tried it yet, spin up a Codespace for your next project; you might never go back to traditional setups on your laptop.

There is another tool I want to look at, which does all the scaffolding automatically with AI: an IDE called 'Windsurf' from Codeium. But that's another blog post.