There’s a lot of hype these days around the new inevitable - the arrival of the entirely independent AI developer. At the risk of this getting terribly outdated terribly fast – so please check the date of this post before reading – let me share a few thoughts on why I’m not holding my breath just yet.
The Promise
The industry has been looking for ways to replace expensive and “slow” software engineers with tools that never break, never get tired, don’t need time off, and never get sick - probably for the last decade already. With the rise of LLMs, the promise of such an independent system looks closer than ever.
The dream user journey for an AI developer should look like this:
- You specify what you want to get built, e.g., “I want a mobile app with registration and login, 5 screens: <..>.”
- An AI developer, like an actual human, clarifies uncertainties with you, e.g., “Do you want to use a username or email to log in?”
- You take a coffee break, and once you’re back - here’s your new $1B app, ready to roll. It might need a few more iterations to adjust things where the AI developer, like an actual human, took a wrong direction, but in principle, it’s a rinse-and-repeat of the steps above.
The Doubt
Artificial intelligence models - or machine learning, or neural networks - have been around for a very long time. Specialized models have become so good that they outperform humans in some tasks many times over. Some of those models already participate, even if invisibly, in our daily lives: facial and fingerprint recognition, traffic control, threat assessment, etc.
However great those models get, some areas resist AI-fication despite colossal amounts of effort. Take self-driving cars. Given the incredible amounts of training data and the fact that humans are pretty bad drivers on average, it seems like a no-brainer with today’s technology. We have some limited applications, yes, but we’re nowhere close to the mass adoption everyone was expecting and dreaming about 10 years ago.
The Real World
I guess the problem is… the real world. Our natural world, with its complex humans and their interactions, is far less predictable than it might look. As with self-driving cars: is that object a cat, or just a cat-like shadow on the ground? That pause a mortal human made - is it doubt about the requirements (a clarification should be issued), or did they just get distracted by a message on their phone? How do we explain aesthetics? Don’t even get me started on the cultural differences so prominent in different parts of the world.
Existing Tools
We already have plenty of tools that should remove the “developer” from some aspects of application development. Website builders have become so incredibly powerful that, in theory, one should never bother building a new one from scratch - whether that’s a news site, an e-commerce store, or just a brochure site. Enterprise systems like BPMs have also been around for decades, promising that you can re-adjust business processes on the fly without a single engineer touching the system. And yet, we still do so many things manually.
In Humans We Believe
An AI developer will have been trained on such vast amounts of data that it should know pretty much everything software engineering has to offer. This sounds (and will be!) impressive, but it’s a limitation at the same time. It can repeat what has already been done, many times more efficiently than a human. But what if you’re trying to do something new? Trying to innovate, merge areas never merged before, experiment with approaches that were considered anti-patterns a few years ago? How specific will you need to be to achieve that? Or is everyone going to be perfectly happy with similar-looking apps and websites? If research data showed that a specific layout maximizes engagement, then it’s likely that only one layout, one type of design, one interactivity model is here to remain. (Something of this kind seems to be happening already, even without AI overlords.)
Humans intrinsically understand loosely connected systems - cultural and real-life ones - and can draw on their lived experience to know what they’re building, why, and how it will be used. Aspects of physical and psychological safety, interactions, heritage, and when it’s actually appropriate to cross some of those lines (well, not always) - all with the ability to put this knowledge to use when discussing requirements and the solutions to them.
The Right Problem
And perhaps this isn’t even the right problem to solve. If a professional software engineer spends 30-40% of their working time writing code, that’s already remarkable. The rest goes to gathering and understanding requirements, reviewing, debugging, testing, deploying, refining, discussing. All those activities are not a waste - if they were, they would have been eliminated long ago. They’re all there to ensure the software being created solves the problem it is supposed to solve - and often more than what has been written down.
So, as long as somebody is ready to be incredibly precise, unambiguous, and descriptive with their requirements, an AI developer will work just fine.
But aren’t we all already writing precise enough instructions telling the computer what to execute?
Photo by Tara Winstead