October 19, 2025 AI, Opinion No comments
I would like to discuss the tension between near-science-fiction predictions about the development of AI and grounded, practical applications of AI.

First of all, it is undeniable that AI is changing our lives and will have a transformative effect on the future. One could argue that humanity has lived through many transformative events before: the invention of fire, agriculture, writing, electricity, industrialization, information technology. Seen that way, AI is just one more invention on our part. But is AI really just one more invention, or something that will fundamentally change what it means to be human? Is this our last invention?
I just finished reading the book “The Singularity is Nearer”. The book argues that we will eventually extend the capabilities of our biological brains and go beyond the limits of our organic bodies. At first we will come up with inventions that greatly extend and improve our lives (reaching “longevity escape velocity” in the mid-2030s), and then we will build brain-computer interfaces (think of phones now, AR glasses or something of the like next, then brain implants, then nanorobots, with eventual consciousness upload to the information network). As another book, “Homo Deus” (my review), argues, we eventually become god-like, gain the ability to control life and the environment, and Homo sapiens goes extinct. We might eventually lose our carbon-based existence and just become information.
To my way of thinking, while much of that, like nanorobots repairing our bodies, may sound like science fiction, as long as it doesn’t break the laws of physics I’m on board with the idea that it can and may happen.
Now, let’s look at some more practical examples. This same pattern of hitting a practical limit is not just a historical curiosity, because I can see it already happening in the world of AI:
My point is that in many individual fields where AI is applicable, we will reach some kind of optimal point between theoretical possibility and practical application. In the process we will see major transformations; for example, the entire sector of jobs associated with driving will be replaced by self-driving vehicles. There is a good chance this could create socio-economic disruptions and ripple effects. Just imagine: some rich “haves” can give their child superpowers while poor “have-nots” cannot afford that. But I concede that this is only transitional, because today people in some poor countries can afford a phone that would have been worth billions as technology in the middle of the last century.
My own predictions are that:
October 11, 2025 AI, Fun 2 comments
The other day my daughter showed me the typing game her teacher encouraged kids to play. My daughter was impressed with my typing speed. This blog post is to impress her even more.
The game she played was an online typing practice – you type text and compete with other players for speed and accuracy. Players are represented as racing cars. If you win races you qualify for a higher league of players. In this post I want to do a few things:
Here is the result of about 30 minutes of work. You can play it yourself (if you are reading this by e-mail you may need to open the blog).
This was achieved with 13 prompts, summarized like this:
1. Initial project creation prompt
2. CORS issue fix request
3. Container class addition request
4. HTML demo update request
5. Simple version (typing.html) request
6. Visual version creation request
7. Error display duration adjustment
8. Error display fix attempt
9. Revert request for error display
10. UI enhancement with "click to activate" label
11. Visual version adjustment request
12. Final revert request
13. History documentation request

Now instead of working with Claude, I started working with Gemini to create a design for the multiplayer game.
Here is the high-level system design document (opens in another page).

Given that I explicitly prompted it to be fully stateless, relying only on client-side local history, with no login and no other complications, this seems to be a fairly good start. My prompt was:
Now I want you to create a comprehensive system design to build the game described above. We need:
- simple website with JS logic
- backend that can create rooms of gamers based on their levels
- the game should protect user privacy so there is no user info stored on backend
- game history is only stored as long as user has local cookies
- backend should handle at least 10k users
- single game has max 6 players
- if user wins the game they are placed into a higher league

As a next step I fed the generated system design document back to Claude in Visual Studio Code. This time I had to fight a lot more with the AI, as it kept running into issues connecting players, but I finally arrived at the multiplayer game:

Prompting history:
- Initial setup of multiplayer backend server
- Setup Node.js with Socket.IO and databases
- Create basic server structure
- Implementation of matchmaking system
- Create skill-based queue system
- Handle player matching logic
- Game room and state management
- Implement room creation and management
- Handle player synchronization
- Create typing texts table
- Add sample data
- Visual keyboard integration
- Add keyboard visualization to multiplayer version
- Implement key highlighting
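The matchmaking steps above can be sketched roughly like this. To be clear, the real backend implements this in Node.js with Socket.IO, so this Python sketch, including the names `Matchmaker`, `try_make_room`, `promote`, and the league names, is purely my own illustration of the logic from the design prompt, not the generated code:

```python
from collections import defaultdict, deque

MAX_PLAYERS = 6                                    # "single game has max 6 players"
LEAGUES = ["bronze", "silver", "gold", "diamond"]  # hypothetical league names

class Matchmaker:
    """Toy skill-based matchmaking: one FIFO queue per league."""

    def __init__(self):
        self.queues = defaultdict(deque)

    def enqueue(self, player_id, league):
        self.queues[league].append(player_id)

    def try_make_room(self, league, min_players=2):
        """Pop up to MAX_PLAYERS waiting players from one league into a room."""
        q = self.queues[league]
        if len(q) < min_players:
            return None
        return [q.popleft() for _ in range(min(len(q), MAX_PLAYERS))]

def promote(league, won):
    """Winners move up one league ("if user wins ... higher league")."""
    i = LEAGUES.index(league)
    return LEAGUES[min(i + 1, len(LEAGUES) - 1)] if won else league
```

With 8 players queued in one league, `try_make_room` fills one 6-player room and leaves 2 waiting, which matches the room-size constraint from the design prompt.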
The backend is powered by:
- Node.js
- Socket.IO (for real-time WebSocket communication)
- Redis (for server-side data/caching)
- PostgreSQL (for the database)

I have complete confidence that if I had 2 or 3 days to spend on this, I could actually create a game that could be put out there on some servers and be playable by actual human beings! The new world of AI is nuts. I’m out of the time allocated to blogging, but I’m convinced over and over again that the old times of programming are over – the only way to survive is to adapt.
P.S. Another thought: I have a friend who works on Linux kernel stuff. You would imagine that hardcore stuff like that would not be affected by the era of AI, but no, he says that AI helps to properly review pull requests to the kernel and catch real issues; moreover, it helps to build more complex things. The caveat, of course, is that proper knowledge is still required. Who knows, if I knew nothing about programming, maybe I wouldn’t have been able to build this typing game so quickly? Or maybe this is just a matter of time?
September 14, 2025 AI, Uncategorized No comments
Today I’m Vibe Coding something and inviting you to follow along. This is not a very serious post. The purpose of this blog post is just to see how quickly we can build a NN that plays the snake game, and whether we can improve it:

Steps we will follow:
Obviously, I need to write a prompt to generate such a game, but being lazy (and not knowing too much about AI) I offloaded prompt generation to GPT:
Generate a good prompt for copilot AI in Visual Code so it generates code for the snake game. Snake game can be the simplest possible terminal based game on a small field.

The output was a pretty reasonable prompt, which I could have written myself (but, hey, that takes time). The only thing I updated in the prompt was the very last line, to keep track of the score of the game:
# Write a simple snake game in Python that runs in the terminal.
# Requirements:
# - Keep the game as simple as possible.
# - Use a small fixed grid (e.g., 10x10).
# - The snake moves automatically in the last chosen direction.
# - Use WASD keys for movement (no fancy key handling needed, blocking input is fine).
# - Place food randomly; eating food makes the snake longer.
# - The game ends if the snake runs into itself or the walls.
# - Print the field after each move using simple ASCII characters:
# - "." for empty space
# - "O" for snake body
# - "X" for snake head
# - "*" for food
# - Keep the code in a single file, no external libraries beyond Python standard library.
# - Keep it short and readable.
# - Keep the score of the game. The score equals the total number of food eaten.

The generated code (with gpt-4o) was 73 lines of code, and I could play the game in the terminal: https://github.com/andriybuday/snake-ia/blob/main/snake_game.py
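For illustration, the rendering rules from the prompt boil down to something like this minimal sketch. This is not the generated 73-line file from the repo, just my own compressed version of the grid drawing it specifies:

```python
# Minimal illustration of the rendering rules from the prompt above,
# not the actual generated snake_game.py.
SIZE = 10  # small fixed 10x10 grid

def render(snake, food):
    """snake: list of (row, col) cells, head first; food: (row, col)."""
    grid = [["." for _ in range(SIZE)] for _ in range(SIZE)]
    for r, c in snake[1:]:
        grid[r][c] = "O"          # "O" for snake body
    hr, hc = snake[0]
    grid[hr][hc] = "X"            # "X" for snake head
    fr, fc = food
    if grid[fr][fc] == ".":
        grid[fr][fc] = "*"        # "*" for food
    return "\n".join("".join(row) for row in grid)

# A two-cell snake with its head at (0, 1) and food at (5, 5):
print(render([(0, 1), (0, 0)], (5, 5)))
```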
Again, prompt to get the prompt:
Now we need another prompt. This time we want to use pytorch and will be building a simple neural network with 2 hidden layers and reinforcement learning. Use a large punishment for losing the game and small rewards for each eaten food. We want to achieve quick learning without too many iterations.
The prompt it generated this time was much more extensive. Here are all of the prompts: https://github.com/andriybuday/snake-ia/blob/main/README.md I then fed that prompt to both GPT-4o and Claude.
Claude generated a much better AI. GPT generated something that couldn’t even score more than one food, while Claude’s version was in the territory of a 10-20 score. Note that the max theoretical score on a 10×10 field is 99. You can see above a gif showing the last few epochs of training and gameplay of the Claude version.
The code for this version: https://github.com/andriybuday/snake-ia/blob/main/snake_game_ai_claude.py
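The setup described in the prompt, a two-hidden-layer network with a large punishment for losing and a small reward per food, looks roughly like this in PyTorch. The state size, layer widths, and reward values here are my guesses for illustration, not what Claude actually chose:

```python
import torch
import torch.nn as nn

# Reward shaping as described in the prompt; exact values are guesses.
REWARD_FOOD = 10.0     # small reward for each eaten food
REWARD_DEATH = -100.0  # large punishment for losing the game

class SnakeDQN(nn.Module):
    """Two-hidden-layer MLP mapping a state vector to per-action Q-values."""

    def __init__(self, state_dim=11, hidden=128, n_actions=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),  # Q-value per action (straight/left/right)
        )

    def forward(self, x):
        return self.net(x)

model = SnakeDQN()
q_values = model(torch.zeros(1, 11))   # one dummy state
action = int(q_values.argmax(dim=1))   # greedy action choice
```

Training then nudges `q_values` toward `REWARD_FOOD` / `REWARD_DEATH` targets via the usual Bellman update; the repo versions differ in exactly these details.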
Ok, so what can be done to make this reach better scores? I asked GPT to recommend some improvements. It gave me general recommendations, out of which I created a prompt for a prompt:
Generate a prompt I can give to Claude to improve performance of the Snake AI, potentially with these improvements: Change head to Dueling DQN, Add Double DQN target selection, Add PER (proportional, α=0.6, β anneal 0.4→1.0), Add 3-step returns, Add distance-delta shaping + starvation cap.

To be honest, at this point I don’t know if these improvements make sense or not, but I took the generated prompt and fed it to Claude. And what I got was broken code, which crashes with “IndexError: Dimension out of range”. I was hoping to run into something like this. Finally. Now I could probably debug the problem and try to find where we run out of range, but no, I shared the error and stack trace with Claude again. It was able to fix it, BUT things got worse: the snake would run into infinite loops.
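For reference, the “Dueling DQN head” from that improvement list splits the network into a state-value stream and an advantage stream and recombines them. This is a textbook sketch of the idea, not the broken code Claude generated:

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Dueling architecture: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""

    def __init__(self, in_dim=128, n_actions=3):
        super().__init__()
        self.value = nn.Linear(in_dim, 1)              # state value V(s)
        self.advantage = nn.Linear(in_dim, n_actions)  # advantages A(s, a)

    def forward(self, features):
        v = self.value(features)
        a = self.advantage(features)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)

head = DuelingHead()
q = head(torch.zeros(2, 128))  # two dummy feature vectors
```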
Turns out the generated “upgraded” version was much worse. So I decided to take a different path, go back to the simple first version, and see what could be updated. The only things I did were increasing the training time (# of episodes), allowing more steps per training episode, and slightly decreasing the time penalty. This is the change: https://github.com/andriybuday/snake-ia/commit/796ad35924700dcb73ac6aaecf8df39ec8069940
With the above changes the situation was much better but still not ideal.
Sorry for the abrupt ending, but I don’t really have time to fine-tune the generated NN or create new models to achieve the best results. The purpose here was to play and see what we can get really quickly. Another purpose of this post is to show that people who, like me in this case, just do Vibe Coding without knowing the underlying fundamentals cannot really achieve the best results quickly. Happy Vibe Coding!
Try it yourself:
git clone https://github.com/andriybuday/snake-ia.git
cd snake-ia
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
python snake_game_ai_claude.py
August 24, 2025 AI 2 comments

AI is the asteroid. Your job is the dinosaur. The question is: will your career evolve like the mammals, or go extinct like the T-Rex? I’m not saying that software engineering jobs will disappear; I am saying they will transform from their current form. Dinosaur jobs are things like writing boilerplate CRUD. New mammal jobs are things like designing AI-integrated systems. There is a lot in between that evolves.
Tech companies are now piloting AI-powered interviews where you have to build something using AI during the interview. Are you ready for such an interview? I’m definitely not. Would you be able to survive this change the next time you are on the lookout for a new job?
These days, with AI, being productive as a software engineer is not the same as it used to be. A bit of a challenge with AI tools is that they are all new and rapidly evolving. One day writing good prompts is the key skill, the next day building AI agents to do the job for you is the next big thing; one model is good at this, another one is good at that. The number of things available is also quite overwhelming.
I remember at one point in my career I felt I had gotten really good at using Visual Studio with ReSharper, so good that it actually felt like a significant differentiator in my speed compared to others. Then, when I had to switch to other tools/tech (frontend, Java, AWS, other IDEs, etc.), it felt unnatural and was leveling the playing field or placing me at a disadvantage compared to people who already knew how to use the other tools. At the same time, the more I had to learn new tools, the easier it was to switch the next time.
Adaptability is probably one of the best skills to work on during this rapid evolution in tech. We simply cannot afford to ignore AI; that would be the biggest career mistake you could make right now.
And to make one more point very clear: I believe that software engineering requires strong fundamental knowledge that doesn’t change: understanding how computers work and interact with each other, understanding how software runs, algorithms. There will always be a need to figure out how to translate business needs into these fundamental concepts; it is just that the translation tooling landscape is changing, and we need to get good at it.
The asteroid has already hit. Your career’s survival depends on adaptability and fundamentals. Learn fast, stay curious, and don’t bet your career on yesterday’s tools. I’m writing this as much for myself as for you. I need to step up a lot.
What is a new AI tool/concept you learned last month?
(for me it was about the architecture of AI agents, incl. MCP protocol)
August 2, 2025 AI, RandomThoughts No comments
Let’s debate over the question: Is AI redefining what software craftsmanship is?
To answer this question we must first define and expand on what software craftsmanship is.
Software Craftsmanship (SC) can be defined as a mindset and approach to creating software that puts emphasis on quality, elegance, adherence to best practices, and continuous skill development.
Many ideas behind SC are shaped by books, like “Clean Code”, “The Pragmatic Programmer”, “Code Complete” and more. I’ve personally read these books in the past and have considered myself to be sort of a Software Craftsman, mainly because of taking pride in producing high quality code.
Let’s explore the “Yes” argument with an example. In my 2012 blog post “100% code coverage – real and good!” I argued that striving for absolute test coverage is not only realistic but professionally responsible saying it pays off in the long run. Recently, I worked on logic that needed new unit tests. AI generated over 100 tests for me, covering all the edge cases. It saved me tons of hours of work. I now treat these tests as a black-box safety harness. If I change the logic and introduce a bug, I expect one of them to fail. If I need to refactor heavily or modify API signatures, I simply ask AI to regenerate the tests. I no longer care if helper methods in the tests are extracted or follow perfect conventions because that’s now a solved problem. So, yes, AI is redefining what a software craftsman does.
Let’s explore the “No” argument. A colleague of mine gave this example: in the privacy space, AI can generate some “good” code, but it might not go as far as to care whether using a raw pointer in C++ code is higher risk because of the privacy context, and if you are not a software craftsman you would simply not pay attention to that part and let it slip, similar to how I no longer care about that extracted method in unit tests. So the argument goes that AI cannot truly produce an SC’s level of quality. Playing a bit of devil’s advocate, I think AI will actually get good at caring about raw pointers, extracted methods, and other things like that. Perhaps we’re not replacing craftsmanship; rather, we’re just shifting it to a higher level of abstraction.
To finish off: there was a time when people wrote in raw binary (01110110) using absolute machine addresses, and many programmers of that time resisted symbolic approaches (like FORTRAN). Adoption by professionals was slow because, hey, that’s “not true programming”. To adapt another popular statement:
Software craftsmen won’t be replaced by AI, but those who use AI will replace those who don’t.
P.S. The idea for this post originated from a random conversation over dinner with a random co-worker I’ve never met before.
July 13, 2025 AI No comments
I, mistakenly, never gave the whole AI trend that much consideration, and at one point even suggested that it might be one of those overhyped technology trends that fade away with time (like AR, Bitcoin, etc.). This post is about some of my personal experiences that made me reconsider. This post is NOT written with an LLM, though, lol.
My first ever experience actually doing something AI-related was during my studies around 2007-8. In fact, I had quite a few AI courses at the university, including building a simple NN framework and even visualizing its internal layer structure and the learning process with backpropagation. I didn’t give it much thought back then. It seemed to be quite a niche technology and just part of my studies. I could see it classify some data I fed it, and saw how a classmate used it to recognize numbers on car license plates.
After that, for quite a long time, it didn’t really show up on my radar. I guess I might have inadvertently used some tools that were utilizing ML algorithms; for example, I remember using an open-source library in a prototype project to match images. At Amazon I knew some teams that were working on AI-related things, such as brand protection and recommendations, but never really worked on anything AI myself. Google is known to have been at the forefront of using AI long before anyone else. Advertising at Google has been running ML models for a very long time, and I supported their efforts by working on an experimentation platform that allowed those teams to verify their hypotheses and slowly roll out new models to the world.
During my years at Google, AI rose in popularity. Big tech companies started to invest extremely heavily in AI (Pichai-AI meme), oftentimes pushing for efficiency and cost cutting on one end while expanding operations on the other. I think it was also the time when I started making use of GenAI a lot more. More and more tools became available; coding became somewhat easier, summarization became easier, and so on. Since I moved to META in early 2025, the AI trend has continued, and it’s clear that META is very aggressive in hiring top AI talent (media coverage). There is more and more interaction with ML at work; some of the projects my team is driving are integrations with ML platforms, etc.
My main reservation about using LLMs was that I usually felt the quality of the results did not justify the effort I put into prompting, especially given that I always had to correct the result. I believe this has changed, though.
Last week I had to write a few new classes in C++ that would evaluate some expressions from configs. Instead of adding files manually, I just talked to the AI: “hey, create me this and that and make sure the interface has this signature”, “hey, add a UT class”, “hey, update dependencies”. And it actually did a fairly good job at all of that; not perfect, but really good enough to save me time. This is when the realization came to me: this is now crossing that personal threshold I had in mind. It is now useful enough to be worth the bit of effort spent fighting with it.
On a more personal front, I recently wanted to replan some of my life goals and learning strategies, so I made heavy use of GPT, and it’s just astonishing how good it has become at reasoning, structuring things, and actually producing what I want. I’m now a paid subscriber of GPT and am trying to use it more like a true personal assistant. I did use it before for financial advice, travel planning, summarization, etc.
Last night I thought, how about I ask GPT to learn something together, so I asked: “let’s create an AI learning plan, here is my background: …, make it personalized”. An “AI refresher” was in the first week, with the suggested deliverable of building a small convolutional neural network on the CIFAR-10 data set. So… drum-roll, I asked it to build a notebook with code for all of it, and it produced a bunch of code, which I pasted into Colab, training the model and verifying its accuracy. It is just mind-boggling how, in 20 minutes or so, I can build stuff that would have taken weeks not too long ago. Plus, if there were things I didn’t understand, I could ask for clarification and it gave me really good answers.
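For the curious, a small CNN for CIFAR-10 of the kind that exercise calls for looks roughly like this. This is my own minimal sketch for illustration, not the notebook GPT actually produced:

```python
import torch
import torch.nn as nn

class SmallCIFARNet(nn.Module):
    """Tiny CNN for 32x32 RGB CIFAR-10 images, 10 output classes."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8, 10)  # logits for 10 classes

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# A dummy batch of 4 CIFAR-sized images produces 4 rows of 10 class logits:
logits = SmallCIFARNet()(torch.zeros(4, 3, 32, 32))
```

Training it is then the standard loop: `torchvision.datasets.CIFAR10`, cross-entropy loss, and an optimizer, which is exactly the kind of boilerplate the generated notebook filled in.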
One other thing: everybody knows LLMs are good at travel planning. I usually prefer to plan everything myself and just get a starting point from GPT, but this time we wanted to go camping spontaneously, so I asked the LLM: “Look up campgrounds within a 2 hour drive from Seattle that have plenty of first come, first serve spots, access to a lake, and activities including paddleboarding and biking. Create a list of 5 campgrounds with a short description.” It basically did all of the Googling for me based on that prompt, provided pointers to sources, etc. Mind-boggling.
One other thing I asked a specialized AI tool to do was to generate a 3D model to print. It did a fairly good job. That’s it – I’m giving in.
There are still a few things I want it to be better at. For example, being less of a “yes man”: contradicting what LLMs say makes them change their mind and reply “yes, you are absolutely correct, let me update the answer”. Other things are better reasoning, understanding context even better, etc. Arguably, it will be a very tricky challenge for LLMs to become like true humans, but it appears we are definitely headed in that direction.
For me personally, LLMs and AI have now clearly crossed the threshold of being not just good enough, but genuinely useful. The time and effort it takes to engage with them are now well worth the return. Whether it’s writing code faster, mapping out life goals, or planning a camping trip, the tools have become practical enough to build into daily routines.
Having finally “given in” to their usefulness, I’m also embracing AI: bringing fresh curiosity and investing time to study it deliberately. It feels like the right moment not just to use the technology, but to understand it, shape how I interact with it, and grow with it.