
On AI and Adaptability: AI is an asteroid and your software engineering job is a dinosaur

August 24, 2025 AI 2 comments

AI is the asteroid. Your job is the dinosaur. The question is: will your career evolve like mammals, or go extinct like T-Rex? I’m not saying software engineering jobs will disappear; I’m saying they will transform from their current form. Dinosaur jobs are things like writing boilerplate CRUD. New mammal jobs are things like designing AI-integrated systems. There is a lot in between that evolves.

Tech companies are now piloting AI-powered interviews where you have to build something using AI during the interview. Are you ready for such an interview? I’m definitely not ready. Would you be able to survive this change the next time you are on the lookout for a new job?

These days, with AI, being productive as a software engineer no longer means the same thing. Part of the challenge is that AI tools are all new and rapidly evolving. One day writing good prompts is the key skill; the next day it’s building AI agents to do the job for you; one model is good at this, another at that. The sheer number of options is also quite overwhelming.

I remember at one point in my career I felt I had gotten really good at using Visual Studio with ReSharper, so good that it actually felt like a significant differentiator in my speed compared to others. Then, when I had to switch to other tools and tech (frontend, Java, AWS, other IDEs, etc.), it felt unnatural and either leveled the playing field or put me at a disadvantage compared to people who already knew those tools. At the same time, the more often I had to learn new tools, the easier it was to switch the next time.

Adaptability is probably one of the best skills to work on during this rapid evolution in tech. We simply cannot afford to ignore AI; that would be the biggest career mistake you could make right now.

And to make one more point very clear: I believe that software engineering requires strong fundamental knowledge that doesn’t change: understanding how computers work and interact with each other, understanding how software runs, algorithms. There will always be a need to figure out how to translate business needs into these fundamental concepts; it is just that the translation tooling landscape is changing, and we need to get good at the new tools.

The asteroid has already hit. Your career’s survival depends on adaptability and fundamentals. Learn fast, stay curious, and don’t bet your career on yesterday’s tools. I’m writing this as much for myself as for you. I need to step up a lot.

What is a new AI tool/concept you learned last month?

(for me it was about the architecture of AI agents, incl. MCP protocol)




Is AI Redefining Software Craftsmanship?

August 2, 2025 AI, RandomThoughts No comments

Let’s debate the question: Is AI redefining what software craftsmanship is?

To answer this question, we must first define and expand on what software craftsmanship is.

Software Craftsmanship (SC) can be defined as a mindset and approach to creating software that puts emphasis on quality, elegance, adherence to best practices, and continuous skill development.

Many ideas behind SC are shaped by books like “Clean Code”, “The Pragmatic Programmer”, “Code Complete”, and more. I’ve personally read these books in the past and have considered myself sort of a Software Craftsman, mainly because I take pride in producing high-quality code.

Let’s explore the “Yes” argument with an example. In my 2012 blog post “100% code coverage – real and good!” I argued that striving for absolute test coverage is not only realistic but professionally responsible, and that it pays off in the long run. Recently, I worked on logic that needed new unit tests. AI generated over 100 tests for me, covering all the edge cases and saving me hours of work. I now treat these tests as a black-box safety harness: if I change the logic and introduce a bug, I expect one of them to fail, and if I need to refactor heavily or modify API signatures, I simply ask AI to regenerate the tests. I no longer care whether helper methods in the tests are extracted or follow perfect conventions, because that’s now a solved problem. So, yes, AI is redefining what a software craftsman does.
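To give a flavor of what such a black-box harness looks like, here is a hypothetical miniature version. The function and its tests are invented for illustration (they are not from the actual project); the point is that I only care that a behavior change makes some test fail, not how the test code itself is organized.

```python
# Hypothetical example: AI-generated tests treated as a black-box
# safety harness around a small piece of logic.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, with percent clamped to [0, 100]."""
    percent = max(0.0, min(100.0, percent))
    return round(price * (1 - percent / 100), 2)

# A few of the ~100 generated edge-case tests. Whether helpers are
# extracted or conventions are perfect no longer matters -- only that
# a bug in apply_discount makes one of these fail.
def test_basic_discount():
    assert apply_discount(100.0, 10.0) == 90.0

def test_percent_clamped_high():
    assert apply_discount(50.0, 150.0) == 0.0

def test_percent_clamped_negative():
    assert apply_discount(50.0, -5.0) == 50.0

if __name__ == "__main__":
    test_basic_discount()
    test_percent_clamped_high()
    test_percent_clamped_negative()
    print("all tests passed")
```

If the clamping logic were accidentally removed, the two clamping tests would fail immediately, which is exactly the safety-harness behavior described above.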

Let’s explore the “No” argument. A colleague of mine gave this example: in the privacy space, AI can generate some “good” code, but it might not go as far as to care whether using a raw pointer in C++ code is higher risk because of the privacy context; if you are not a software craftsman, you would simply not pay attention to that part and let it slip, similar to how I no longer care about that extracted method in unit tests. So the argument goes that AI cannot truly produce a craftsman’s level of quality. Playing a bit of devil’s advocate, I think AI will actually get good at caring about raw pointers, extracted methods, and other things like that. Perhaps we’re not replacing craftsmanship; we’re just shifting it to a higher level of abstraction.

To finish off, there was a time when people wrote in absolute binary (01110110) using absolute machine addresses, and many programmers of that era resisted symbolic approaches (like FORTRAN). Adoption by professionals was slow because, hey, that’s “not true programming.” To repurpose another popular statement:

Software craftsmen won’t be replaced by AI, but those who use AI will replace those who don’t.

P.S. The idea for this post originated from a chance conversation over dinner with a co-worker I’d never met before.




AI: From Skeptic to Power User to (Re)learner

July 13, 2025 AI No comments

I, mistakenly, never gave the entire AI trend that much consideration, and at one point I even suggested it might be one of those overhyped technology trends that fade away with time (like AR, Bitcoin, etc.). This post is about some of the personal experiences that made me reconsider. This post is NOT written with an LLM, though, lol.

University Years

My first ever experience actually doing something AI-related was during my studies around 2007–8. In fact, I had quite a few AI courses at university, including building a simple NN framework and even visualizing its internal layer structure and the learning process with backpropagation. I didn’t give it much thought back then; it seemed to be quite a niche technology and just part of my studies. I could see it classify some data I fed it, and I saw a classmate use it to recognize numbers from car license plates.

First Jobs

After that, for quite a long time, it didn’t really show up on my radar. I guess I might have inadvertently used some tools built on ML algorithms; I remember using an open-source library in a prototype project to match images. At Amazon I knew teams working on AI-related things, such as brand protection and recommendations, but I never worked on anything AI myself. Google is known to have been at the forefront of AI long before anyone else. Advertising at Google has been running ML models for a very long time, and I supported those efforts by working on an experimentation platform that let those teams verify their hypotheses and slowly roll out new models to the world.

Rising Trend

During my years at Google, AI rose in popularity. Big tech companies started investing extremely heavily in AI (Pichai-AI meme), oftentimes pushing for efficiency and cost cutting on one end while expanding operations on the other. I think that was also when I started making much more use of GenAI. More and more tools became available; coding got somewhat easier, summarization got easier, and so on. Since I moved to Meta in early 2025, the entire AI trend has continued, and it’s clear that Meta is very aggressive in hiring top AI talent (media coverage). There is more and more interaction with ML at work; some of the projects my team is driving are integrations with ML platforms, etc.

Crossing Personal Mental Threshold of Usefulness

My main reservation about using LLMs was that the quality of the results usually did not justify the effort I put into prompting, especially since I always had to correct the result. I believe this has changed, though.

Last week I had to write a few new C++ classes that would evaluate expressions from configs. Instead of adding the files manually, I just talked to the AI: “hey, create me this and that and make sure the interface has this signature”, “hey, add a UT class”, “hey, update dependencies”, and it did a fairly good job at all of it. Not perfect, but good enough to save me time. That’s when the realization came: this has crossed the personal threshold I had in mind. It is now useful enough to be worth a bit of effort fighting with it.
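The actual work was in C++, but to make “evaluate expressions from configs” concrete, here is a toy Python analogue of the kind of class I asked the AI to generate. The class name and the supported operators are hypothetical illustrations, not the real interface.

```python
import ast
import operator

# Toy analogue (Python, not the original C++) of a class that safely
# evaluates arithmetic expressions found in config values.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

class ConfigExpressionEvaluator:
    """Evaluates arithmetic expressions like '2 * (3 + 4)' from a config."""

    def evaluate(self, expression: str) -> float:
        tree = ast.parse(expression, mode="eval")
        return self._eval(tree.body)

    def _eval(self, node: ast.AST) -> float:
        # Only plain numbers and the four arithmetic operators are allowed;
        # anything else (names, calls, etc.) is rejected.
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](self._eval(node.left), self._eval(node.right))
        raise ValueError(f"Unsupported expression: {ast.dump(node)}")

evaluator = ConfigExpressionEvaluator()
print(evaluator.evaluate("2 * (3 + 4)"))  # 14
```

Walking the parsed AST instead of calling `eval` keeps config-supplied strings from executing arbitrary code, which is the sort of detail worth checking in AI-generated versions too.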

On a more personal front, I recently wanted to replan some of my life goals and learning strategies, so I made heavy use of GPT, and it’s just astonishing how good it has become at reasoning, structuring things, and actually producing what I want. I’m now a paid GPT subscriber and am trying to use it more like a true personal assistant. I had used it before for financial advice, travel planning, summarization, and so on.

Last night I thought, how about I ask GPT to learn something together, so I asked: “let’s create an AI learning plan, here is my background: …., make it personalized”. An “AI refresher” was the first week, with a suggested deliverable of building a small convolutional neural network on the CIFAR-10 dataset. So… drum roll, I asked it to build a notebook with code for all of it, and it produced a bunch of code, which I pasted into Colab, training the model and verifying its accuracy. It is just mind-boggling how in 20 minutes or so I can build something that would have taken weeks not too long ago; plus, if there were things I didn’t understand, I could ask for clarification and got really good answers.
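For a sense of what sits at the heart of such a notebook, here is the core building block of a CNN, a single 2-D convolution, sketched in plain NumPy. The generated notebook used a full framework and the real CIFAR-10 data; this toy (valid padding, stride 1, invented inputs) only shows the idea.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide the kernel over the image and sum elementwise products."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 Laplacian (edge-detection) kernel over a toy 5x5 "image".
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[0, -1, 0],
                   [-1, 4, -1],
                   [0, -1, 0]], dtype=float)
print(conv2d(image, kernel).shape)  # (3, 3)
```

On this linear-ramp image the Laplacian response is zero everywhere, a quick sanity check that the sliding-window arithmetic is right; a real CNN stacks many such convolutions and learns the kernel values via backpropagation.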

One other thing everybody knows: LLMs are good at travel planning. I usually prefer to plan everything myself and just get a starting point from GPT, but this time we wanted to go camping spontaneously, so I asked the LLM: “Look up campgrounds within a 2-hour drive from Seattle that have plenty of first-come, first-served spots, access to a lake, and activities including paddleboarding and biking. Create a list of 5 campgrounds with a short description.” It basically did all of the Googling for me based on that prompt, provided pointers to sources, etc. Mind-boggling.

One other thing: I asked a specialized AI tool to generate a 3D model to print, and it did a fairly good job. That’s it – I’m giving in.

Where I think it can still be much better

There are still a few things I want it to be better at. For example, being less of a “yes man”: contradicting what an LLM says makes it change its mind and reply “yes, you are absolutely correct, let me update the answer”. Other things: better reasoning, even better understanding of context, etc. Arguably, behaving like a true human would be a very tricky challenge for LLMs, but it appears we are definitely headed in that direction.

Conclusion

For me personally, LLMs and AI have now clearly crossed the threshold of being not just good enough—but genuinely useful. The time and effort it takes to engage with them are now well worth the return. Whether it’s writing code faster, mapping out life goals, or planning a camping trip, the tools have become practical enough to build into daily routines.

Having finally “given in” to their usefulness, I’m also embracing AI: having fresh curiosity and investing time to study it deliberately. It feels like the right moment to not just use the technology—but to understand it, shape how I interact with it, and grow with it. 

