AI

AI Tools - Summer 2025 Edition

Eric Thanenthiran·14 September 2025·8 min read

Here's some insight into the AI tools I'm using right now (alas, the end of Summer 2025). Like everyone else, I'm still working out how these tools and processes fit into my work and personal projects. Right now I'd characterise my use of AI tools (really most tools) as somewhere between the "Early Adopters" and "Early Majority" stages of the Diffusion of Innovations theory proposed by Everett M. Rogers in his book "Diffusion of Innovations" (1962). As such I try not to buy into the current AI hype, but I also leave myself open to experimenting with new technologies and tools in my daily work. For me, AI should be an accelerator and amplifier of my work, rather than a generator of work. I want it to help me become better at what I do and increase the impact and value I can deliver.

Current Favourite Model

Right now the LLM I turn to most is Anthropic's Claude Sonnet. In my experience it gives great results for both code and writing, and its reasoning is very good. I'm a big fan of the simplicity of the interface and how stable the model has been over the last year.

Research

I use an app called Raycast to automate a lot of tasks across my Mac. It's a great tool for simplifying common workflows. One of the most useful features is Snippet Expansion, which expands a shortcut into a bit of boilerplate text - for example standard code snippets, or report headings. This year they released probably my favourite AI feature of any software: it puts AI models at your fingertips through simple keyboard shortcuts. This means I rarely break my flow to research topics or look up information, and the speed with which I can summon this AI means I'm very rarely distracted by the internet. There are two ways to access this AI feature:

Quick AI

Calls up a floating window from anywhere on my Mac that lets me ask a question of a specific LLM provider. The question is sent to that service and the answer is displayed in the floating window. With a shortcut I can then paste the response into the software I'm working in, or dismiss it if I was only after a quick answer. For most quick information gathering, this has replaced Google searches. The responses are ephemeral and disappear a few minutes after you close the window.

AI Chat

Gives a larger chat window and is meant for longer-form interactions with an LLM. In this mode I have set up some persistent chats with specific system prompts (and specific models). For example, I have a Software Engineer AI tuned to software engineering projects, a Researcher AI for deeper and more comprehensive research tasks, a Business Mentor AI for questions on business strategy and other matters, and a Marketing AI to edit my marketing content. These chats help me get more specific, longer-form information and also give me the option of multi-shot prompts (or discussions). I find this great when I want to go deeper on a topic, and since each chat can use a different model, I can compare responses easily.

Code

Here I may be lagging behind the general adoption of AI by software and data engineering folks. I use VS Code as my main Integrated Development Environment (IDE). Up until last year I had been a long-time PyCharm user, but its slow introduction of AI features prompted me to try out VS Code, and here I remain. VS Code has a built-in AI Chat panel that has access to your active file as context, and switching between code and chat is quick with keyboard shortcuts. Paired with this IDE is a GitHub Copilot subscription, which gives me access to the best models from the leading AI companies (admittedly I've only really used OpenAI's and Anthropic's offerings).

VS Code gives you three modes for Chat: ask, edit, and agent.

  • ask runs the Chat in interactive mode only.
  • edit allows the Chat to make edits to your files.
  • agent gives the Chat full autonomous control of your code base (you still have the ability to accept or deny changes).

It probably gives you some insight into my views on automated AI (agentic) development and vibe coding that I exclusively run Chat in ask mode. This lets me interact with LLMs at a frequency and level that keeps them as a coding partner. I find running them in edit and agent mode disconcerting, as it feels too easy to blindly accept changes without understanding them. Oh, and I have AI code completions turned off because they usually interrupt my flow of work. For the time being I still want to use the AI as a partner, rather than as an agent engineer I delegate tasks to and check in on after completion.

Lately I've also been learning more JavaScript. To date I've successfully avoided learning this (ugly) language that powers much of the internet, because it feels so different to my preferred programming language in syntax and mechanics. LLMs have been really great at helping me understand some of these language nuances and at reducing the strain of trying them out.

Writing

I've been experimenting with AI more and more: for generating summaries of meetings I haven't attended, as a thought partner, and for giving me context on a new domain that I need to understand quickly to move on with work. I think I write technical content and documents well, but my marketing content lacks some punch. Here I have some AI Chats set up in Raycast with the instruction to make my dry, technical text more interesting (but not too flashy).

It's taken me a while to hone the prompt so that it mirrors the tone I would like to write in... if I were a better marketer. Again, Claude Sonnet fits my needs best, but a realisation I've come to is that the model is less important than the system prompt you use. Here I used a two-pass approach to get to a better prompt. First I started with an initial prompt to turn some text I provided into more marketing-appropriate text. When I felt I had reached the limit of this, I found some example text that had the right tone and language, and asked the LLM what prompt I should give it to get the current output closer to the tone and style of those marketing texts. This gave me some additional instructions, which I added to the prompt. There is still some refining to be done, but right now (and for my needs) we've got to a good enough place that my marketing content isn't boring and dry.
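The two-pass loop above can be sketched roughly like this. This is a minimal sketch, not my actual prompts: `call_llm` is a hypothetical stand-in for whatever chat-completion client you use, and all the prompt wording is illustrative.

```python
def call_llm(system_prompt: str, user_text: str) -> str:
    """Hypothetical stand-in for a real chat-completion call
    (e.g. an Anthropic or OpenAI client). It just echoes its inputs
    so the sketch stays runnable without an API key."""
    return f"(response to {user_text!r} under {system_prompt!r})"

# Pass 1: a first-draft system prompt that rewrites dry text.
base_prompt = (
    "Rewrite the following technical text as marketing copy. "
    "Keep it factual and engaging, but not too flashy."
)
draft = call_llm(base_prompt, "Our pipeline ingests client data nightly.")

# Pass 2: show the model example text with the tone you want, and ask
# what extra instructions would steer its output closer to that tone.
meta_question = (
    "Here is marketing text with the tone and style I want: <example text>. "
    "What instructions should I add to my prompt so your rewrites "
    "match this tone and style?"
)
extra_instructions = call_llm(base_prompt, meta_question)

# Fold the suggested instructions back into the system prompt and reuse it.
refined_prompt = f"{base_prompt}\n{extra_instructions}"
```

The useful part is the second pass: instead of guessing at instructions yourself, you let the model reverse-engineer them from an example with the tone you're after.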

On some client projects I use Anthropic's website to create Projects containing additional information about a client engagement. We have an enterprise account with Anthropic and have opted out of sharing our chats and assets with them for training purposes. We have a similar subscription with OpenAI, which means none of the information we use is accessed outside our account. Despite these guarantees, I don't upload any confidential or PII data. These Projects allow me to build a better understanding of our clients' businesses and their pain points. Here I use it as a thought partner to talk through the information we know about them. I very rarely use this to create reports or communications material, but these discussions help me create an outline for a report that I can then populate myself. I may use the model to further improve some sections of my writing, but this is more akin to using it like an editor, rather than getting it to re-write specific sections.

Offline Mode

I quite often unplug from the internet and work off-grid. I find removing all distractions, even for a few hours, increases my productivity tenfold and trains my concentration, writing and programming muscles to not rely on outside help (and to think through problems without immediately jumping to an internet search). It also helps me test how productive I can be without the internet or when I'm travelling. However, I am not a complete Luddite, and sometimes (after some thinking) an AI can be really useful to help me get some task over the line and move on with my work. I run models locally on my machine using Ollama (the tool itself is independent open source; the Llama models it serves come from Meta). The models I've used the most here are llama3.2 and qwen3. I am currently using a 2023 MacBook Pro with an M1 Pro chip and 32 GB of memory. It can comfortably run 7B or 8B models and gives me answers that are good enough to get me unstuck in these deep-focus periods, without introducing the risk of falling down an internet hole.
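Getting a local model running takes only a couple of commands. A sketch assuming Ollama is installed and its server is running; the model tag is one of the ones I mention above:

```shell
# Download the model once, while still online.
ollama pull llama3.2

# Ask a one-off question from the terminal; no internet needed after the pull.
ollama run llama3.2 "Explain Python's walrus operator in two sentences."

# See which models are already available locally before you unplug.
ollama list
```

The pull is the only step that needs connectivity, so doing it ahead of time is what makes the offline sessions work.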

Wrap Up

It will be interesting to see how my AI use evolves over time, and how AI models improve (or plateau) over the next years. Right now, it's fair to say I'm using AI on a daily basis for various tasks. The aim at the moment is to help me maintain high quality output while moving faster and automating the boring stuff.

aitools