AI in Engineering

A transparent look at how AI fits into my engineering workflow: where it helps, where it doesn't, and the guardrails that keep quality high.

I use AI tools in my day-to-day engineering work. This page is a transparent account of which tools I use, how I use them, where they genuinely help, where they don't, and the processes that keep quality and safety high.

This isn't a blanket endorsement of AI. It's also not a rejection of it. Like any tool, value comes from knowing when to reach for it and when not to. The guardrails matter as much as the capability.

Tools I use

GitHub Copilot

Primary day-to-day coding assistant, backed by Claude. Used for code completion, scaffolding, boilerplate generation, and general problem-solving.

ChatGPT

Used for quick questions, brainstorming, and exploring unfamiliar APIs or concepts. Not directly integrated into my coding workflow, but a helpful companion for research and ideation.

AI Code Review

Automated AI review runs on pull requests before human review. Catches inconsistencies, obvious bugs, and naming issues, reducing the noise in human review cycles.

Scoped Agents

Task-scoped AI agents for specific operations: validating bug reports, writing documentation from code, summarising long threads, and drafting technical specs from rough notes.

How I use it responsibly

Every piece of AI-generated code or content goes through the same quality gates as everything else. The AI does not have a shortcut to production.

🧪

Sandboxed environments

All AI tooling is run within a sandboxed environment, preventing a rogue agent from affecting production systems or accessing sensitive data.
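As a sketch of what that sandboxing can look like, the container invocation below runs an agent with no network access and a read-only mount of just the project. The image name, paths, and entrypoint are hypothetical placeholders, not my actual setup:

```shell
# Hypothetical sandbox: disposable container, no network access,
# read-only filesystem, and only the project directory mounted.
docker run --rm \
  --network none \              # agent cannot reach production or the internet
  --read-only \                 # container filesystem is immutable
  --tmpfs /tmp \                # scratch space only, discarded on exit
  -v "$PWD/project:/work:ro" \  # mount just the project, read-only
  ai-sandbox:latest run-task    # placeholder image and entrypoint
```

The important properties are the `--network none` and `:ro` mount: even a misbehaving agent can't exfiltrate data or write outside its scratch space.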

⚙️

CI pipeline

Static analysis, automated test suites, and linting run on every change. AI-generated code that doesn't pass CI doesn't merge - no exceptions.
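A minimal sketch of that gate, assuming a PHP project with standard tooling installed; the exact commands are placeholders for whatever the project's CI actually defines:

```shell
#!/usr/bin/env sh
# Minimal CI gate sketch: any failing step blocks the merge,
# whether the change was written by a human or an AI.
set -e                 # abort immediately on the first failure

vendor/bin/phpcs .     # static analysis / coding standards (placeholder)
vendor/bin/phpunit     # automated test suite (placeholder)

echo "All gates passed - eligible to merge"
```

Because the gate doesn't know or care who authored the change, AI-generated code gets no special path around it.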

👀

Human in the loop

Every AI-generated change is reviewed by a human prior to release. Responsibility sits with the human reviewer, not the AI - the AI is a tool, not an authority.

🔒

No secrets or sensitive data

Nothing sensitive goes into the sandboxed AI environments - only the bare minimum is mounted, significantly reducing the credentials and proprietary business logic in scope.

✅

Verify, don't trust

Testing matters because AI hallucinates confidently. Automated test suites, manual testing, and code reviews are essential to ensure fixes and features actually work.

Where it genuinely helps

  • Boilerplate-heavy work that follows known patterns — Magento admin grids, DI wiring, scaffolding
  • Test generation for existing logic — produces a solid first draft to extend and verify
  • Code review on pull requests — fast, consistent, catches things humans miss when tired
  • Documentation from code — extracting intent and writing it clearly, faster than writing from scratch
  • Exploring unfamiliar APIs and codebases — faster than reading docs cold
  • Refactoring and making code consistent — pattern matching across large files

Where it still falls short

  • Complex, stateful business logic — AI doesn't have the full domain context
  • Security-sensitive code — never accept AI output without deep manual review here
  • Long-term CSS and frontend maintenance — additive by nature, gets messy fast
  • Reasoning about existing production systems it has not seen
  • Anything requiring real-world operational judgment or incident experience

AI in Practice

Posts documenting real AI usage — experiments, observations, and practical takeaways.


Using AI responsibly in engineering

Happy to talk through how AI fits into modern development workflows, compare notes on what's actually working in production, or discuss where the tooling still has gaps.