I find most developers fall into one of two camps:
1. You treat your code as a means to an end to make a product for a user.
2. You treat the code itself as your craft, with the product being a vector for your craft.
The people who typically have the most negative things to say about AI fall into camp #2, where AI is automating a large part of what they considered their art while enabling people in camp #1 to iterate on their products faster.
Personally, I fall into the first camp.
No one has ever made a purchasing decision based on how good your code is.
The general public does not care about anything other than the capabilities and limitations of your product. Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.
With that said, I do have respect for people in the latter camp. But they're generally the best fit for projects where that level of craftsmanship is actually useful (think: mission-critical software, libraries the rest of us devs depend on, etc.).
I just feel like it's hard to talk about this stuff if we're not clear on which types of projects we're talking about.
nocman 22 hours ago [-]
> No one has ever made a purchasing decision based on how good your code is.
absolutely false.
> The general public does not care about anything other than the capabilities and limitations of your product.
also false.
People may not know that the reason they like your product is that the code is so good, but everyone likes software that is mostly free from bugs, performs extremely well, helps them do their work quickly, and is obviously created by people who care deeply about the quality of the product they produce (you know, the kind that actually read bug reports and fix problems quickly).
The longer your product exists the more important the quality of the code will be. This obsession so many have with "get it out the door in 5 seconds" is only going to continue the parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
You don't have to pick one camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high quality work.
latexr 21 hours ago [-]
> You don't have to pick one camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high quality work.
Exactly, thank you for putting it like that.
So far it’s been my observation that it’s only people who think like the OP who frame the situation in those terms. It’s a false dichotomy which has become a talking point. By framing it as “there are two camps, it’s just different, neither is better”, it lends legitimacy to their position.
For an exaggerated, non-comparable example meant only to illustrate the power of such framing devices, one could say: “there are people who think guns should be regulated, and there are people who like freedom”. It puts the matter into an either/or situation. It’s a strategy to frame the conversation on one’s terms.
doug_durham 19 hours ago [-]
I agree with OP's distinction. However, just because you see software as a means to an end doesn't mean you think quality and craft are unimportant. You can see the "craft"-oriented folks as being obsessed with the form of their software. A "craft"-oriented engineer might rewrite a perfectly functioning piece of software to make it what they perceive to be "easier to reason about". I consider most software rewrites to be borderline malpractice.
deltaburnt 18 hours ago [-]
I think the kind of surface level rewrites that people rag on are pretty rare, at least in my experience. Realistically code that's impossible to understand, underdocumented, and lacking in proper abstractions is also deficient code. If you've ensured that the code is "good enough", you will likely hit a bug or feature request that is hindered by the poor structure and understanding of the code.
It's totally fine to say "the code works, that area is stable, let's not mess with that code". I make those kinds of tradeoffs on a near daily basis. But let's be real, "perfectly functioning code" is an ill defined, moving target. What looks like perfectly functioning code to a sibling team or a PM, could be a massive liability to someone who actually knows the code.
But then again I'm writing OS and performance critical code. A 1 in 1 million bug is easier to ignore in a throwaway log viewer website.
tikhonj 16 hours ago [-]
The most productive teams I've seen (eg at Jane Street) rewrite things all the time, and still move faster than any "normal" teams I've seen. I remember when I interned there over a decade ago, they were already on like the seventh? version of their incremental computing framework, and were building a new build system. But they were also incredibly effective at getting things done both on a per-engineer basis and in terms of making money.
16 hours ago [-]
jimbokun 16 hours ago [-]
With bad code it’s often almost impossible to improve the functionality or correctness or performance of the code, without first rewriting parts of it.
bit-anarchist 19 hours ago [-]
That example doesn't work well. All regulations come at the cost of freedom, and every freedom comes at the cost of regulations. While it isn't a strict binary (either 100% freedom or 100% regulation), enacting regulations does interfere with freedom. So this isn't just framing; it demonstrates a relationship between the two concepts, which may become relevant down in the discussion, if it hasn't already.
layer8 18 hours ago [-]
Regulation can cause freedom to be balanced differently between parties. For example, regulating smartphone manufacturers can result in more freedom for users. It’s not true that regulation necessarily reduces freedom overall (to the extent that that can even be graded). Just like rights, freedoms aren’t absolute, and one’s freedom often impinges on someone else’s freedom.
bit-anarchist 15 hours ago [-]
The increase of freedom of the users is an indirect side effect, intentional or not, which, as you put it, can happen, or not. But a direct effect, which is guaranteed to happen, is the loss of freedom of the manufacturer. Whether that's a good thing, that's another topic.
qotgalaxy 13 hours ago [-]
The increase of freedom of the slaves is an indirect side effect. But a direct effect, which is guaranteed to happen is the loss of freedom of the slaveholder.
bit-anarchist 11 hours ago [-]
That's certainly a perspective, especially given how slavery is often regulated into law.
But I digress, there's a plentiful discussion to be had about the ethics and morality of freedom/regulations, but my point is how there is, in fact, a dichotomy between both and it isn't just framing. Which, in a sense, you just corroborated.
ttamslam 22 hours ago [-]
> People may not know that the reason they like your product is that the code is so good, but everyone likes software that is mostly free from bugs, performs extremely well, helps them do their work quickly, and is obviously created by people who care deeply about the quality of the product they produce (you know, the kind that actually read bug reports and fix problems quickly).
I would classify all of those as "capabilities and limitations of your product"
I read OPs "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc. etc.), and in that sense I agree no customer who's just using the product actually cares about that.
Another definition of "good code" is probably "code that meets the requirements without unexpected behavior", and in that sense of course end users care about good code. But you could give me two black boxes that act the same externally, one written as a single line with single-character variables, etc., and another written to be readable, and I wouldn't care so long as I wasn't expected to maintain it.
westoncb 21 hours ago [-]
>but you could give me two black boxes that act the same externally, one written as a single line with single-character variables, etc., and another written to be readable, and I wouldn't care so long as I wasn't expected to maintain it.
The reality of software products is that they are in nearly all cases developed and maintained over time, though--and whenever that's the case, the black box metaphor fails. It's an idealization that only works for single moments in time, and yet software development typically extends through the entire period during which a product has users.
> I read OPs "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc. etc.)
The above is also why these properties you've mentioned shouldn't be considered aesthetic only: the software's likelihood of having tractable bugs, manageable performance concerns, or to adapt quickly to the demands of its users and the changing ecosystem it's embedded in are all affected by matters of abstraction selection, code organization, and documentation.
arijun 21 hours ago [-]
But those aesthetics stem from that need for fewer bugs, performance, maintainability. Identifying/defining code smell comes from experience of what does and doesn’t work.
> I wouldn't care so long as I wasn't expected to maintain it.
But, if you’re the one putting out that software, of course you will have to maintain it! When your users come back with a bug or a “this flow is too slow,” you will have to wade into the innards (at least until AI can do that without mistakes).
Xirdus 21 hours ago [-]
Good abstractions translate directly into how quickly the devs can fix bugs and add new features.
myrak 20 hours ago [-]
[dead]
jimbokun 16 hours ago [-]
The single line single character program, after a certain level of complexity, will no longer have the same behavior because it cannot be maintained.
If it’s software that will never be modified, sure it doesn’t matter.
skydhash 21 hours ago [-]
But the thing is that someone has to maintain it. And while beautiful code is not the same as correct code, the first is impactful in getting the second and keeping it.
And most users are not consuming your code. They’re consuming some compiled, transpiled, or minified version of it. But they do have expectations and it’s easier to amend the product if the source code is maintainable.
mbesto 20 hours ago [-]
> obviously created by people who care deeply about the quality of the product they produce
This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.
> There is no substitute for high quality work.
You're right because there really isn't a consistent definition of what "high quality" software work looks like.
MrRadar 20 hours ago [-]
> This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.
Those first three are "enterprise" or B2B applications, where the person buying the software is almost never one of the people actually using the software. This disconnect means that the person making the buying decision cannot meaningfully judge the quality of any given piece of software they are evaluating beyond a surface level (where slick demos can paper over huge quality issues) since they do not know how it is actually used or what problems the actual users regularly encounter.
mbesto 20 hours ago [-]
Which might be true, but is totally irrelevant to the OP's comment.
MrRadar 20 hours ago [-]
Users care about quality, even if the people buying the software do not. You can't just say "well the market doesn't care about quality" when the market incentives are broken for a particular type of software. When the market incentives are aligned between users and purchasers (such as when they are the same person) quality tends to become very important for the market viability of software (see Windows in the consumer OS market, which is perceptibly losing share to MacOS and Linux following a sustained decline in quality over the last several years).
mbesto 17 hours ago [-]
> "well the market doesn't care about quality"
You literally just told me the market doesn't care about quality. I don't get what point you're trying to make?
> When the market incentives are aligned between users and purchasers (such as when they are the same person) quality tends to become very important for the market viability of software
Right, but this magical market you're talking about doesn't exist. That's my point.
tokioyoyo 17 hours ago [-]
Have you seen large consumer products’ codebases?… Companies like Google are tiniest exceptions when it comes to code gatekeeping and quality.
jimbokun 16 hours ago [-]
Funny how the biggest, most profitable software companies emphasize code quality.
deaux 45 minutes ago [-]
Riot Games around 2014(?) had the most profitable (non-mobile?) game in the world with an infamously horrific codebase, even by game standards.
tokioyoyo 16 hours ago [-]
Have you seen Facebook's code quality? Have you seen any-big Chinese corpo code? There are a lot of very profitable businesses in the world with endless amount of tech debt. But tech debt is not necessarily a big deal in most scenarios. Obviously I'm not talking about mission critical software, but for general consumer/business software, it's fine. The hard part is understanding where you can cut the costs / add debt, and that comes from requirement gathering.
pixl97 3 hours ago [-]
I'm pretty sure that Google is an advertising company that just happens to write software.
alternatex 4 hours ago [-]
At Microsoft this totally depends on the org and team. I've seen the full spectrum of quality here.
16 hours ago [-]
theshrike79 3 hours ago [-]
> You're right because there really isn't a consistent definition of what "high quality" software work looks like.
And if you can deterministically define "high quality software" with linters, analysers etc - then an AI Agent can also create high quality software within those limits.
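As a sketch of that gating idea (this assumes nothing about any particular agent or linter; it uses Python's own compile step as a stand-in for a real linter/analyser suite):

```python
def passes_quality_gate(source: str) -> bool:
    """Stand-in 'deterministic quality check': here only a syntax
    check via Python's compiler. A real gate would run linters,
    type checkers, and static analysers instead."""
    try:
        compile(source, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

def filter_candidates(candidates: list[str]) -> list[str]:
    """The loop an agent could run over its own generated code:
    only output that clears the deterministic gate is accepted."""
    return [c for c in candidates if passes_quality_gate(c)]

accepted = filter_candidates(["def f():\n    return 1\n", "def broken(:"])
print(len(accepted))  # 1 -- only the well-formed candidate survives
```

The point being: once "quality" is reduced to checks a machine can run, the same checks can steer generation.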
mbesto 2 hours ago [-]
ding. ding.
Terr_ 19 hours ago [-]
> You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product.
What about caring and being depressed because quality comes from systems rather than (just) individuals?
tkiolp4 20 hours ago [-]
SAP, Salesforce, Booking.com… all awful products. We use them because monopolies.
cdrnsf 20 hours ago [-]
I couldn't book travel at a previous company because my address included a `.`, which passed their validation. Awful, awful software. I wouldn't expect slop code to improve it.
jimbokun 16 hours ago [-]
Now imagine how much they would make if their software was good.
Google, Facebook, Apple clearly care deeply about the quality of their code. They have to because bugs, bad performance, outages, vulnerabilities have very direct and immediate costs for them. I know Amazon and Microsoft have their critics but I bet they are also better than we give them credit for.
There are factors besides software quality that affect their success. But running bad software certainly isn’t going to help.
kode-targz 4 hours ago [-]
>Google, Facebook, Apple clearly care deeply about the quality of their code.
Yea, idk about that one.
They definitely did care in the past. They had to if they wanted to get users. But they stopped caring a good while ago. Especially Microsoft. The cost that bad code brings them is lower than the cost of developing good code, because they can mostly rely on monopolies and anti-competitive practices for user retention. Their users are more like hostages than anything else.
tyingq 33 minutes ago [-]
Is Google much better? I don't see, for example, the care that used to go into the quality of organic search results.
They seem fine with the output of the current hodge-podge of the original algorithm results plus massaging by many downstream ML pipelines that run one after the other without context of how each step might affect the next.
christophilus 16 hours ago [-]
> Google, Facebook, Apple clearly care deeply about the quality of their code
Not the impression I get these days.
tau5210 9 hours ago [-]
Thank you for putting out a clear message. I completely agree.
> parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
and of course, this isn't even the worst. A lot worse can happen, such as data loss and corruption. Things that can directly affect people's lives in real life.
As a developer, these things are constantly on my mind, and I believe this is the case for people who do care about the quality.
As has been said elsewhere many times, AI producing code is not the same as say, a compiler producing machine code. It is not such a well-defined strong abstraction, hence why code quality still is highly relevant.
It is also easily forgotten that code is as much a social construct (e.g. things that have to be done in certain ways due to real life restrictions, that you wouldn't have to do in an ideal world).
Sometimes I feel very powerless though.
It feels as if some of us are talking past each other, even if we seemingly are using the same words like "quality".
Or in a way, that is what makes this more futile-- that we are using the same words and hence seemingly talking about the same thing, when we are referring to completely different things.
It is difficult to have a conversation about a problem when some of us don't even see it as a problem to begin with-- until it reaches a point when it starts affecting their lives as well, whether it be directly or indirectly.
But that takes time.
Time will tell.
delbronski 20 hours ago [-]
Garbage software that is slow as a dog has been winning. While we’ve been obsessing over our craft and arguing about what makes software beautiful, slow crappy software has taken over the world.
Quality of code is just not that important of a concept anymore for the average web developer building some saas tool. React code was always crap anyways. Unless you are building critical systems like software that powers a plane or medical equipment, then code quality just doesn’t really matter so much in the age of AI. That may be a hard pill to swallow for some.
jimbokun 16 hours ago [-]
I believe a lot if large software companies like Google, Facebook, Apple, Netflix, Microsoft have very high quality code. Past a certain level of complexity, bad code will just collapse under the weight of its low quality.
pixl97 3 hours ago [-]
> Past a certain level of complexity
I don't think complexity is the right metric. The listed companies still do write a lot of shit software, but mostly smaller things you don't see much. Anything in the 'fast path' of these companies is getting executed trillions or quadrillions of times and because of that any bugs will become shallow.
modo_mario 7 hours ago [-]
I think you can scrap Microsoft there when it comes to their pc stuff and their mainly B2B stuff (dynamics and the like)
As for the rest, other than Apple, I think page loads are an obvious metric to track, and there's always going to be a contingent of their programmers incentivized to care about code quality, because slow code running on their machines at such scale often costs them a lot of money; same with uptimes of various services. The amount they spend on wages is comparatively very low.
allajfjwbwkwja 18 hours ago [-]
There's a lot of space between web dev and medical equipment. I've never met a user that loved how often their work was lost or how long it took to implement feature requests amid all the ship-it-quick duct tape.
girvo 18 hours ago [-]
> then code quality just doesn’t really matter so much in the age of AI
Except at scale it really does, because garbage in garbage out. The crappier the code you feed the current models, the worse and more confusing the broken leaky abstractions, the more bugs the AI will generate.
layer8 18 hours ago [-]
The question is whether it has to be that way. Developers who are against slop don’t believe that the current state of software is the best possible world.
boplicity 15 hours ago [-]
> There is no substitute for high quality work.
That's where you get it wrong. The world is full of mediocre and low quality work in many, many fields. We all, in fact, depend on mediocre work in many ways.
Many, many people would prefer a solution with mediocre or even bad code than no solution at all while they wait for "high quality work" that never appears.
The magic of LLMs, especially as the technology improves, is that a truly mind-boggling number of solutions to problems will be created with thoroughly mediocre (or worse!) LLM-generated code. And the people benefiting from those solutions won't care! They'll be happy their problems are being solved.
deaux 36 minutes ago [-]
No, you see, this is impossible, as I've been taught here on HN by the wise elders over and over. As soon as a pristine codebase is tainted by even 1% of LLM-generated code, any chance at potential user value instantly disintegrates. Especially if it claims to do anything remotely novel!
But seriously, the denial is incredible to watch. Our product wouldn't exist without LLMs, and our users are vocally thankful that it does, saving them time and money and helping them reach their offline goals.
> The longer your product exists the more important the quality of the code will be. This obsession so many have with "get it out the door in 5 seconds" is only going to continue the parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
Exactly. A lot of devs optimize for whether the feature is going to take a day or an hour, but don't contemplate that it's going to be out in the wild for 10 years either way. Maybe do it well once.
ttamslam 22 hours ago [-]
> but not contemplating that it's going to be out in the wild for 10 years either way
I think there are a lot of developers working in repos where it's almost guaranteed that their code will _not_ still be there in 10 years, or 5 years, or even 1 year.
coldtea 21 hours ago [-]
>I think there are a lot of developers working in repos where it's almost guaranteed that their code will _not_ still be there in 10 years, or 5 years, or even 1 year.
And in almost all of those cases, they'd be wrong.
nostrademons 19 hours ago [-]
I think I calculated the half-life of my code from my first stint at Google (15 years ago) as 1 year. Within 1 year, half of the code I'd written was deprecated, deleted, or replaced, and it continued to decay exponentially like that throughout my 6-year tenure there.
Interestingly, I still have some code in the codebase, which I guess makes sense because I submitted about 680K LOC (note: not all hand-authored, there was a lot of output from automated tools in that) and 2^15 is 32768, so I'd expect to have about 20 lines left, which is actually surprisingly close to accurate (I didn't precisely count, but a quick glance at what I recognized suggested about 200 non-deprecated lines remain in prod). It is not at all the code that I thought would still be there 15 years later, or that I was most proud of. The most durable change appears to be renaming some attributes in a custom templating language that is now deeply embedded in the Search stack, as well as some C++ code that handles how various search options are selected and persisted between queries.
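That survival estimate can be sketched as a toy exponential-decay model, plugging in the figures from the comment (the 1-year half-life and 680K LOC are the author's numbers, not measured here):

```python
# Toy exponential-decay model of code survival,
# using the figures stated in the comment above.
half_life_years = 1        # observed half-life of the author's code
years_elapsed = 15         # time since the first stint
total_loc = 680_000        # lines submitted, incl. tool output

surviving_fraction = 0.5 ** (years_elapsed / half_life_years)  # = 1/32768
expected_remaining = total_loc * surviving_fraction

print(round(expected_remaining))  # ~21 lines expected to survive
```

Which matches the "about 20 lines" back-of-the-envelope figure, even if the observed remainder was closer to 200.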
I think this both proves and disproves the original point. Most of your code is temporary. You have no idea which parts of your code are temporary. It's probably not the parts that you wish were temporary, which will almost certainly be made permanent.
TheCoelacanth 2 hours ago [-]
Good code is easy to replace and bad code is hard to replace, so bad code is disproportionately long-lived.
benoau 21 hours ago [-]
In my experience the code will, but by year 5 nobody is left who worked on it from inception, and by year 10 nobody knows anybody who did, and during that time it reaches a stage where nobody will ever feel any sense of ownership or care about the code in its entirety again.
contextfree 21 hours ago [-]
I come into work and work on a 20 year old codebase every day, working on slowly modernizing it while preserving the good parts. In my experience, and I've been experimenting with both a lot, LLM-based tools are far worse at this than they are at starting new greenfield projects.
ryandrake 20 hours ago [-]
This conversation shows how diverse the field is!
When it comes to professional development, I've almost never worked on a codebase less than 10 years old, and it was always [either silently or overtly] understood that the software we are writing is a project that's going to effectively live forever. Or at least until the company is no longer recognizable from what it is today. It just seems wild and unbelievable to me, to go to work at a company and know that your code is going to be compiled, sent off to customers, and then nobody is ever going to touch it again. Where the product is so throwaway that you're going to work on it for about a year and then start another greenfield codebase. Yet there are companies that operate that way!
aplomb1026 19 hours ago [-]
[dead]
steveBK123 22 hours ago [-]
It's important to know which type of repo/project you are in and hire/code accordingly.
I've seen mismatch in each direction..
AlotOfReading 21 hours ago [-]
How can you possibly know which type of repo you're in ahead of time? My experience is that "temporary" code frequently becomes permanent and I've also been on the other side of those decisions 40 years later.
skydhash 21 hours ago [-]
Unless you’re producing demos for sales presentation (internally or externally), it’s always worth it to produce something good. Bad code will quickly slow you down and it will be a never ending parade of bug tickets.
steveBK123 20 hours ago [-]
indeed, being on-call cleanses many developers of slopulist habits
abelitoo 17 hours ago [-]
That depends on how quick the feedback loop is for your decisions. If it takes weeks or months to find the impact of your changes, or worse, if you're insulated somehow from those changes, you may not be pushed toward improving the quality of your code.
jimbokun 16 hours ago [-]
A company where it takes weeks and months to deploy a code change is not a company with a long term success horizon.
pixl97 3 hours ago [-]
Lolololol, sorry, I can't help but laugh a bit because some of the most entrenched companies are also the slowest moving.
gjadi 19 hours ago [-]
It depends on their sleep habit, work-life requirements and compensation when they need to be on-call.
When you get a fatter check because your code breaks, the incentives are not in favor of good code.
strogonoff 21 hours ago [-]
If a product looks pretty and seems to work great at first experience, but is really an unmaintainable mess under the hood, has an unvetted dependency graph, has a poorly thought through architecture that no one understands, perhaps is unsustainable due to a flawed business model, etc., to me it simply suffers from bad design[0], which will be felt sooner or later. If I know this—which is, admittedly, sometimes hard to know (especially in case of software products compared to physical artifacts)—I would, given alternatives, make the choice to not be a customer.
In other words, I would, when possible, absolutely make a purchasing decision based on how good the code is (or based on how good I estimate the code to be), among other things.
[0] The concept of design is often misunderstood. First, obviously, when it’s classified as “how the thing looks”; then, perhaps less obviously, when it’s classified as “how the thing works”. A classification I am arriving at is, roughly, “how the thing works over time”.
raw_anon_1111 4 hours ago [-]
This is provably false. First, in B2B software the user is not the buyer, and the buyer usually chooses software not because of how well it performs but for a lot of other reasons.
Second, shitty electron apps are pervasive.
Ekaros 21 hours ago [-]
Demos might be nice and flashy. But eventually you have to have a generally working product. Too many issues and too many annoyances, and eventually even the users of enterprise software will be heard. Especially so if there is some actual loss of money or data that is not corrected very fast.
In the end, software is a means to an end. And if you do not reach that end because the software is crap, it will be replaced, hopefully by someone else.
tokioyoyo 17 hours ago [-]
The argument is “it’s not crap if it works and does what it’s supposed to do relatively well, and it doesn’t matter how it’s written under the hood”.
Aeolun 16 hours ago [-]
> There is no substitute for high quality work.
Have you taken a look at the world in the past… I dunno, at least several decades. That ceased to be true somewhere around the time I was in high school, maybe before.
ozgrakkurt 9 hours ago [-]
It is a spectrum and the perspectives of people change based on where they are on the spectrum. It is genuinely hard to keep up with things and also keep my emotions in check to stay objective
jimbokun 16 hours ago [-]
I would add to that: for software that has scaled to the point that it needs to run in a distributed fashion, code quality will show up very quickly in dollars and cents.
High latencies, outages, memory leaks, security vulnerabilities, will be seen in your AWS bill or whatever hardware or service you deploy your software to. If your code isn’t clear enough to understand what it’s really doing, you have no chance at preventing or addressing the above.
EFreethought 21 hours ago [-]
>> No one has ever made a purchasing decision based on how good your code is.
> absolutely false.
Actually, you are both correct.
Nobody makes a purchasing decision based on code quality.
But they may later regret a purchasing decision based on code quality.
singpolyma3 17 hours ago [-]
There's a big gap though between bug free, performance, etc and craft
oxag3n 19 hours ago [-]
I treat the code I produce as my craft and I appreciate I can afford it and enjoy the output.
I know engineers who aren't that lucky and struggle in "enterprise" software development, where slop was a feature for decades: the people making purchasing decisions won't use the software (their low-paid employees will), and software monstrosities need a hell of a lot of support, which sometimes brings more revenue than the original purchase.
fooker 21 hours ago [-]
> The longer your product exists the more important the quality of the code will be
Having worked on many, many old and important code bases: the code quality is absolutely trash.
_the_inflator 10 hours ago [-]
Is it really? I disagree strongly on all points, but know where you are coming from by an intuitive sense. I am a dev since the 80th and know how to make 64kb do wonders. It just took us years to find out.
You present no proof only touchy feely it must be so otherwise pseudo proof like software longevity is on the line.
Please first define software longevity quality in detailed terms: what is it exactly and how to you assess it regarsing quality and quantity?
Doom is, judged by me and by its versatility, a masterpiece. However, we all know (and the Black Book is quite open about it) that the code itself arguably doesn't hit modern standards, and there is an infamous WTF hardcoded value used to speed up the code. So what? It inspired a whole generation. Second Reality? A mindbending demo that accelerated quite a few Finnish developer careers, sadly forgotten after being considered for decades to be for the PC demo scene what Doom was for the underground gamer scene. A nice match anyway.
Is Windows a masterpiece or not? Judging by its UX/UI definitely not, by its versatility and transposable potential I would rate it a masterpiece.
Thousands of developers work on codebases whose chipsets and compiler settings change quite significantly - so there are ups and downs even in domains where a simple deadline or a requirement change suddenly turns technological prowess into useless trash.
And the most heretical question ever: what if your so-called number-one-quality software product is fooling you and could be done way better? You didn't even consider that option, which makes your point shaky to say the least.
paxys 22 hours ago [-]
And yet somehow the shittiest buggiest software ends up being the most popular.
Look through the list of top apps in mobile app stores, most used desktop apps, websites, SaaS, and all other popular/profitable software in general and tell me where you see users rewarding quality over features and speed of execution.
applfanboysbgon 21 hours ago [-]
You have it backwards. Excellent software becomes popular, and then becomes enshittified later once it already has users. Often there is a monopoly/network effect that allows them to degrade the quality of their software once they already have users, because the value in their offering becomes tied to how many people are using it, so even a technically superior newcomer won't be able to displace it (eg. Youtube is dogshit now but all of the content creators are there, and all of the viewers are there, so content creators won't create content for a better platform with no viewers and viewers won't visit a better platform with no content).
If your goal is to break into the market with software that is dogshit from day 1, you're just going to be one of millions of people failing at their get-rich-quick scheme.
thfuran 22 hours ago [-]
I don’t think this search will really reveal speed of execution and feature set rewarded over quality either.
jimbokun 16 hours ago [-]
Have you read the source for those products?
m0llusk 15 hours ago [-]
That is easy. Those top apps in mobile stores are constructed in part with compilers which are under great pressure from developers to generate good code as quickly as possible. There is often bloat at the level of the development environment or some critical library, but core developer tools are a big part of the software space and are made and sold with very different criteria from the most popular apps vended from walled gardens.
NoMoreNicksLeft 20 hours ago [-]
The history of technology is filled with examples where, between two competing analogous products, the inferior one wins. It does not matter whether it is only slightly inferior or extraordinarily so; both win out. It's often difficult to come up with counter-examples. Why is this? Economic pressure. "Inferior" costs less. Sometimes the savings are passed on to the customer, and they choose the inferior product. Other times the greedy corporate types keep all of it (and win simply because they outmarket the competitor). It does not matter.
If there are people who, on principle, demand the superior product then those people simply aren't numerous enough to matter in the long run. I might be one of those people myself, I think.
saltyoldman 15 hours ago [-]
You're responding quite harshly to a pretty reasonable take. I think he means "for the most part" for those statements.
I mean, Jira is probably one of the most-purchased software packages in the world, used specifically by developers who care about their craft - you don't want to look at the code, trust me.
dzhiurgis 14 hours ago [-]
> The longer your product exists the more important the quality of the code will be. This obsession so many have with "get it out the door in 5 seconds" is only going to continue the parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
For now. We only call it slop when we notice it. The majority of AI text, music, images, videos and code is indistinguishable, and you use it every day.
This whole "treat your code as craft" reminds me of the organic farming, zero waste, etc. movements. Cute movements that only a minority of people care about.
Peritract 7 hours ago [-]
> Majority of AI text, music, images, videos and code is indistinguishable and you use it every day.
I really don't think this is true. If it was, we'd be able to point to countless examples of things assumed not to be AI that actually were, but there's a dearth of such examples.
zer00eyz 17 hours ago [-]
> People may not know that the reason they like your product is because the code is so good, but everyone likes software that is mostly free from bugs, performs extremely well, helps them do their work quickly
The assumption that people CARE about your product is the most Silicon Valley, Hacker News, forgot-what-the-world-outside-of-tech-looks-like thing ever.
People CARE about their software as much as they CARE about their bank, or a new finance product... People getting excited over software is more of a historic footnote than anything real people think about in 2026.
The vast majority of Software is one of two things:
A) a tool
B) a source of entertainment
As a tool it either needs to provide value or it's something that is shoved on you by work.
The user experience of your average mobile game today is fucking awful. People put up with a massive amount of garbage for a trickle of fun. So much of the web looks like a mid-90s Hong Kong back alleyway --- blinking ads, videos screaming at you, and someone trying to steal your wallet. And the majority of things people are forced to use for work... well... Show me someone who is excited about their ERP or CMS or network drive... Show me someone who thinks that anything to do with Salesforce is something to be excited over.
> The general public does not care about anything other than the capabilities and limitations of your product.
A segment of our industry is screaming about the security of open claw. People don't care (because we have made a mockery of security as well) - they are using it as a tool that will deliver them a solution. It strips away all the arcana that made people think we were wizards and writes the damn spells for them. It's a dumpster fire, and people are thrilled about it and what it's delivering them. And that's software not made by you or I.
paganel 8 hours ago [-]
> People dont care (because we have also made a mockery of security as well) -
They will start caring when a security bug on one of their phone's apps (or on the phone's software itself) empties their savings accounts. At that point the law officials/the Government might also start to care.
yabutlivnWoods 20 hours ago [-]
If code is craft and minimalism is hip then why ruby, and python, and go and... when it's electrical state in machines?
That's the minimalism that's been lost.
That's why I find the group 2 arguments disingenuous. Emotional appeal to conservatism, which conveniently also props up their career.
Why all those parsers and package systems when what's really needed is dials min-max geometric functions from grand theft auto geometry to tax returns?
Optimization can be (and will be) engineered into the machine through power regulation.
There's way too many appeals to nostalgia emanating from the high tech crowd. Laundering economic anxiety through appeals to conservatism.
Give me an etch a sketch to shape the geometry of. Not another syntax art parser.
iamcalledrob 22 hours ago [-]
Sloppy technical design ends up manifesting in bugs, experiential jank, and instability.
There are some types of software (e.g. websites especially) where a bit of jank is generally acceptable. Sessions are relatively short, and your users can reload the webpage if things stop working. The technical rigor of these codebases tends to be poor, but it's generally fine.
Then there's software which is very sensitive to issues (e.g. a multi-player game server, a driver, or anything that's highly concurrent). The technical rigor here needs to be very high, because a single mistake can be devastating. This type of software attracts people who want to take pride in their code, because the quality really does matter.
I think these people are feeling threatened by LLMs. Not so much because an LLM is going to outperform them, but because an LLM will (currently) make poor technical design decisions that will eventually add up to the ruin of high-rigor software.
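The kind of single devastating mistake at stake in high-rigor concurrent code can be sketched with a classic check-then-act race. This is a hypothetical toy example (the bank-balance scenario is invented, and the `Barrier` is there only to make the unlucky thread interleaving deterministic for demonstration):

```python
import threading

# Buggy version: the balance check and the withdrawal are two separate
# steps, so two threads can both pass the check before either acts.
balance = 100
both_past_check = threading.Barrier(2)

def racy_withdraw(amount):
    global balance
    if balance >= amount:          # check
        both_past_check.wait()     # simulate an unlucky thread switch here
        balance -= amount          # act

threads = [threading.Thread(target=racy_withdraw, args=(100,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # -100: the account was overdrawn

# Fixed version: check and act become one atomic step under a lock.
safe_balance = 100
lock = threading.Lock()

def safe_withdraw(amount):
    global safe_balance
    with lock:
        if safe_balance >= amount:
            safe_balance -= amount

threads = [threading.Thread(target=safe_withdraw, args=(100,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(safe_balance)  # 0: exactly one withdrawal succeeded
```

In real code the window between check and act is a few nanoseconds wide, so the bug survives testing and surfaces only under production load, which is why this class of software punishes sloppy design so hard.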
Benjammer 22 hours ago [-]
> the quality really does matter.
If this level of quality/rigor does matter for something like a game, do you think the market will enforce this? If low rigor leads to a poor product, won't it sell less than a good product in this market? Shouldn't the market just naturally weed out the AI slop over time, assuming it's true that "quality really does matter"?
Or were you thinking about "matter" in some other sense than business/product success?
iamcalledrob 21 hours ago [-]
Yes, I think the market will enforce this. A bit. Eventually. But the time horizon is long, and crummy software with a strong business moat can out-compete great software.
Look at Windows. It's objectively not been a good product for a long time. Its usage is almost entirely down to its moat.
tabwidth 18 hours ago [-]
How long does that take though? Technical debt from sloppy code doesn't show up in the product until way later. By the time users notice, the team is already three features deep and can't back out.
tokioyoyo 17 hours ago [-]
All these arguments somehow disregard that we've all been adding technical debt left and right, every other day, to every single codebase in existence. Humans also write sloppy code.
mhitza 5 hours ago [-]
Same as when writing notes by hand: the information is internalized. When you pass the thinking to the LLM, you become the copilot who doesn't even know the code and the minute decisions made. Good luck re-architecting a bad decision in such a design, and prepare your double d6 for a full functionality reroll.
FridgeSeal 20 hours ago [-]
A lot of software is forced upon people against their will, and purchased by people who will never use it.
This obscures things in favour of the "quality/performance doesn't matter" argument.
I am, for example, forced to use a variety of microslop and zoom products. They are unequivocally garbage. Given the option, I would not use them. However, my employer has saddled us with them for reasons, and we must now deal with it.
bloppe 22 hours ago [-]
Yes, both the article and GP are making that exact point about it mattering from a customer's perspective.
22 hours ago [-]
SpicyLemonZest 22 hours ago [-]
Even if you're confident you can stop your own company from shipping terrible products, I worry the trend is broad enough and hard enough to audit that the market will enforce it by pulling back on all purchases of such software. If gamers learn that new multiplayer games are just always laggy these days, or CTOs learn that new databases are always less reliable, it's not so easy to convince them that your product is different than the rest.
theossuary 22 hours ago [-]
Yes, there's every reason to believe the market will weed out the AI slop. The problem is, just like with stocks, the market can stay irrational longer than you can stay solvent. While we all wait for executives to learn that code rigor matters, we still have bills to pay. After a year when they start trying to hire people to clean up their mess, we'll be the ones having to shovel a whole new level of shit; and the choice will be between that and starving.
As someone who also falls into camp one, and absolutely loves that we have thinking computers now, I can also recognize that we're angling towards a world of hurt over the next few years while a bunch of people in power have to learn hard lessons we'll all suffer for.
Darmani 22 hours ago [-]
No-one comes out of the womb caring about code quality. People learn to care about the craft precisely because internal quality -- cohesion, modularity, robustness -- leads to external quality (correctness, speed, evolvability).
People who care about code quality are not artists who want to paint on the company's dime. They are people who care about shipping a product deeply enough to make sure that doing so is a pleasant experience both for themselves and their colleagues, and also have the maturity to do a little bit more thinking today, so that next week they can make better decisions without thinking, so that they don't get called at 4 AM the night after launch for some emergency debugging of an issue that really should have been impossible if it was properly designed.
> No one has ever made a purchasing decision based on how good your code is.
Usually they don't get to see the internals of the product, but they can make inferences based on its externals. You've heard plenty of products called a "vibe-coded piece of crap" this year, even if they're not open source.
But also, this is just not true. Code quality is a factor in lots of purchasing decisions.
When buying open source products, having your own team check out the repo is incredibly common. If there are glaring signs in the first 5 minutes that it was hacked together, your chances of getting the sale have gone way down. In the largest deals, inspecting the source code can even be part of due diligence.
It was for an investment decision rather than for a purchase, but I've been personally hired to do some "emergency API design" so a company can show that it both has the thing being designed, and that their design is good.
mkehrt 19 hours ago [-]
> People who care about code quality are not artists who want to paint on the company's dime. They are people who care about shipping a product deeply enough to make sure that doing so is a pleasant experience both for themselves and their colleagues, and also have the maturity to do a little bit more thinking today, so that next week they can make better decisions without thinking, so that they don't get called at 4 AM the night after launch for some emergency debugging of an issue that really should have been impossible if it was properly designed.
Speak for yourself. This is exactly the GPs point. Some people care more about the craft of code than the output. I personally find writing good code to be what motivates me. Obviously its a spectrum; shipping is good too. But it's not why I get up in the morning.
Darmani 19 hours ago [-]
Okay, I admit I went too hard on that one to fight against the OP. I too get extreme pleasure from condensing a 300-line function into 30 lines of simplicity, even when practical considerations (it has a clean interface and no one's even needed to open this file in 2 years) would dictate that elegance there is not helpful.
doug_durham 18 hours ago [-]
Code quality is a side-effect of caring. The most important part of product design is caring at all levels. However, it's caring about the external details that is the most important. Coding language is largely a function of the population of good coders in your area. Code evolvability is almost entirely subjective.
If it's easy to add new features without creating bugs, the code is evolvable. Else, it's not. Does not seem very subjective.
ambicapter 22 hours ago [-]
This is like when people decided that everyone was either "introvert" or "extrovert" and then everyone started making decisions about how to live their life based on this extremely reductive dichotomy.
There are products that are made better when the code itself is better. I would argue that the vast majority of products are expected to be reliable, so it would make sense that reliable code makes for better product. That's not being a code craftsman, it's being a good product designer and depending on your industry, sometimes even being a good businessman. Or, again, depending on your industry, not being callous about destroying people's lives in the various ways that bad code can.
renewiltord 22 hours ago [-]
I’m an introvert. I make sure that all my “welcome to the company” presentations are in green. I am also an extrovert in that I add more green than required.
clan 22 hours ago [-]
I respect your opinion and especially your honesty.
And at the same time I hope that you will some day be forced to maintain a project written by someone else with that mindset. Cruel, yes. But unfortunately schadenfreude is a real thing - I must be honest too.
I have gotten too old for "ship now, ask questions later" projects.
jstanley 22 hours ago [-]
I'm in camp 1 too. I've maintained projects developed with that mindset. It's fine! Your job is to make the thing work, not take on its quality as part of your personal identity.
If it's harder to work with, it's harder to work with, it's not the end of the world. At least it exists, which it probably wouldn't have if developed with "camp 2" tendencies.
I think camp 2 would rather see one beautiful thing than ten useful things.
couchand 22 hours ago [-]
I think camp 1 would rather see ten useless things than one useful thing.
theshrike79 3 hours ago [-]
The customer pays me to make it work, not to make a thing that doesn't work and is over budget - but pretty.
I optimise for "make it work", that's what the deal says.
If there's extra time, I might go to step two which is "make it pretty". Meaning that I go through the code and see that it's all good and proper if we need to add features later on.
bdangubic 5 hours ago [-]
100% not what Camp 1 is or does. Their #1 goal is to make it work; it is their #1 priority. So, quite the opposite: Camp 2 will spin and make 100 "useful" (not) abstractions with the slickest imaginable code, doing things that make you go "OMFG, how on Earth did you come up with this, insane", while during that development Camp 1 shipped 37 new features for its customers.
d0mine 12 hours ago [-]
I don't get how camp 1 can ship more than one version (do they jump teams/companies each time?). If your code is an immovable mess, then how do you add features and fix bugs in time?
22 hours ago [-]
ytoawwhra92 13 hours ago [-]
> I think camp 2 would rather see one beautiful thing than ten useful things.
Unashamedly, I would, but this is a false dilemma. We can have ten beautiful and useful things.
The thing that drives the camp 2 folk crazy is that often it would have taken no extra effort (or perhaps even less effort overall) to make a good version of the thing, but the people who made the thing simply couldn't be bothered.
The attitude you're describing here has led directly to our world being full of profoundly disappointing objects that proliferate because they meet a minimum bar of usefulness.
People don't like the minimum bar. They'll take it if it's the only thing on offer, but they like better things.
ambicapter 22 hours ago [-]
> At least it exists, which it probably wouldn't have if developed with "camp 2" tendencies.
Ah yes, if you aren't shitting code out the door as fast as possible, you're probably not shipping anything at all.
Nevermark 22 hours ago [-]
That isn't a fair reading.
jimbokun 16 hours ago [-]
Seems fair to me, responding to someone mocking people for caring about their craft.
jplusequalt 21 hours ago [-]
Neither is the original assertion. There are thousands of examples of exceptionally well crafted code bases that are used by many. I would posit the Linux kernel as an example, which is arguably the most used piece of software in the world.
Nevermark 19 hours ago [-]
> [...] one beautiful thing than ten useful things
They didn't say beautiful/crafted things were not necessary.
They were critiquing viewpoints that all code needs to be.
Even if we (for humorous purposes) took their 1 in 10 ratio as a deadly serious cap on crafting, 10% of projects being "exceptionally well crafted code" would be a wonderful world. I would take 1% high craft to 99% useful! (Not disjointly of course.)
globular-toast 9 hours ago [-]
You are not your job. Take pride in your work. Be kind to others. This is the true path.
Imustaskforhelp 22 hours ago [-]
I think I fall into camp 1.5 (neither camp 1 nor camp 2): I can see value in prototyping (with AI) and I sometimes make quick scripts when I need them, but long term I would like to grow with an idea and build something genuinely nice from those prototypes, even manually writing the code. Personally, I've found AI codebases a hassle to manage, with many bugs, especially in important places (@iamcalledrob's message here sums it up brilliantly as well).
> I think camp 2 would rather see one beautiful thing than ten useful things.
Both beautiful and useful are subjective (imo). Steve Jobs adding calligraphy to computer fonts could've been considered a thing of beauty, derived from his personal relation to calligraphy, but it is also a really useful thing.
It's my personal opinion that some of the most valuable innovations are both useful and beautiful (elegant).
Of course, there are rough hacks sometimes, but those are beautiful in their own way as well. Once again, both beauty and usefulness are subjective.
(If you measure usefulness by profit earned, through a purely capitalistic lens, what happens is that you might do layoffs and degrade customer service to hit that measure, which ultimately reduces the usefulness. Profit is a very lousy measure of usefulness, in my opinion. We all need profit, but doing everything solely for profit also feels a bit greedy to me.)
xienze 17 hours ago [-]
> If it's harder to work with, it's harder to work with, it's not the end of the world.
Yeah it just takes longer and makes you miserable in the process. No biggie!
tokioyoyo 17 hours ago [-]
We will still work ~8ish hours that day, and time will pass anyways.
ytoawwhra92 14 hours ago [-]
People with this attitude never end up maintaining anything. When projects get to that phase the work becomes too difficult and boring for them, so they decide that they've "set the team up for long-term success" and move on to a new opportunity.
bloppe 22 hours ago [-]
I mostly agree with this. Part of the confusion with the discourse around AI is the fact that "software engineering" can refer to tons of different things. A Next.js app is pretty different from a Kubernetes operator, which is pretty different from a compiler, etc.
I've worked on a project that went over the complexity cliff before LLM coding even existed. It can get pretty hairy when you already have well-established customers with long-term use-cases that absolutely cannot be broken, but their use-cases are supported by a Gordian Knot of tech debt that practically cannot be improved without breaking something. It's not about a single bug that an LLM (or human) might introduce. It's about a complete breakdown in velocity and/or reliability, but the product is very mature and still makes money; so abandoning it and starting over is not considered realistic. Eager uptake of tech debt helped fuel the product's rise to popularity, but ultimately turned it into a dead end. It's a tough balancing act. I think a lot of LLM-generated platforms will fall eventually into this trap, but it will take many years.
9rx 1 hours ago [-]
> It can get pretty hairy when you already have well-established customers with long-term use-cases that absolutely cannot be broken
LLMs are often poor at writing tests that provide useful information to human readers and poor at writing tests that can survive project evolution. To be fair, humans are also poor at these tasks if done in hindsight, after all the information you normally want to capture in tests has been forgotten. That boat has been missed for the legacy code no matter how you slice it. But LLMs are quite good at writing tests that lock in existing functionality in the rawest way. It seems like LLM-generation is actually the best hope of saving such a project?
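Locking in existing functionality in the rawest way is usually called a characterization (or "golden master") test: you record what the legacy code actually does today and assert exactly that, without judging whether it is correct. A minimal sketch, with `legacy_price` as an invented stand-in for a real legacy tangle:

```python
# Hypothetical legacy function -- imagine tangled logic nobody fully
# understands, including a surprising hardcoded rule.
def legacy_price(quantity, tier):
    price = quantity * 9.99
    if tier == "gold":
        price *= 0.9
    if quantity > 100:
        price -= 5  # odd hardcoded discount -- lock it in anyway
    return round(price, 2)

# These expected values were captured by *running* the function,
# not derived from any spec. That is the whole point: they pin down
# current behavior so refactoring (or an LLM rewrite) can't silently
# change it.
golden = {
    (1, "basic"): 9.99,
    (10, "gold"): 89.91,
    (101, "basic"): 1003.99,
    (101, "gold"): 903.09,
}

for (qty, tier), expected in golden.items():
    assert legacy_price(qty, tier) == expected
print("existing behavior locked in")
```

Such tests carry little design insight for human readers, but they are exactly the raw behavioral snapshot that makes touching legacy code survivable, which is why generating them in bulk plays to an LLM's strengths.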
davnicwil 20 hours ago [-]
I keep seeing this idea repeated, but I don't accept the dichotomy between those who care about 'crafting code' and those who care about 'building products' as though they are opposite points on a spectrum.
To me, the entire point of crafting good code is building a product with care in the detail. They're inseparable.
I don't think I've ever in my life met someone who cared a lot about code and technology who didn't also care immensely about detail, and design, and craft in what they were building. The two are different expressions of the same quality in a person, from what I've seen.
carlosjobim 4 hours ago [-]
Millions of companies work out of spreadsheets, which are incredibly low-quality and inefficient code by programming standards (not Excel itself, but the spreadsheets built in it).
Would those companies be better off just using pen and paper? Because "craft code" programmers don't have time for this, and not all companies can pay for bespoke software. Well, maybe now they can, with AI "slop".
Likewise, should people who don't have the skills or means to cook delicious and nutritious meals just starve without food? Or is it okay that they eat something which isn't perfect?
mhitza 5 hours ago [-]
This is an often repeated reductionist classification, which I've yet to match with my real life experience.
Vibe coding will fill in more "new feature" checkboxes, faster, but the level of quality averages out to something often mediocre or worse (like all my OSS projects in which I experiment, because such projects are the training data). It skimps liberally on maintainability, accessibility, security and privacy considerations.
Code is a liability; I want to have less of it, at a higher abstraction level (for which natural language isn't a good fit, due to inherent ambiguity). For products, simplicity and user utility are how I approach the problem when given wiggle room.
We are on HN, so there's bound to be many startup people that only need to bang out features to lure in users and then pass on that pile further onto someone else, when they cash out.
What I have seen however, are mid-managers+ that haven't coded in a decade or so, and now with LLMs they feel that they deliver equal quality results, whereas they have been so long out of the game and haven't picked up the modern skills on how to maintain and build applications.
theshrike79 3 hours ago [-]
Sometimes a mediocre solution is better than no solution, especially if the mediocre solution is done in days.
Yes, there are risks (the Lethal Trifecta and all that), but AI-assisted development by non-programmers isn't that much worse than letting the same people build complex macro/function/VBA setups in shared Excel sheets.
I'm running out of fingers counting the 100% vibe-coded applications we've built internally that save double-digit percentages of people's day-to-day work time. All created by the people who use them, to fix a very specific workflow they'd had to do by hand over hours. Now it's the click of a button on a bespoke application they made.
alecbz 21 hours ago [-]
Craft, in coding or anything else, exists for a reason. It can bleed over into vain frivolity, but craft helps keep the quality of things high.
Craft often inspires a quasi-religious adherence to fight the ever-present temptation to just cut this one corner here real quick, because is anything really going to go wrong? The problems that come from ignoring craft are often very far-removed from the decisions that cause them, and because of this craft instills a sense of always doing the right thing all the time.
This can definitely go too far, but I think it's a complete misunderstanding to think that craft exists for reasons other than ensuring you produce high-quality products for users. Adherents to craft will often end up caring about the code as end-goal, but that's because this ends up producing better products, in aggregate.
qaid 21 hours ago [-]
Professionally, I've always been in camp #2. The quality of your code at least partially represents you in the eyes of your peers. I imagine this is rapidly changing, but the fact will always remain that readable code that you can reason about is objectively better.
For personal projects, I've been in both camps:
For scripts and one-offs, always #1. Same for prototypes where I'm usually focused on understanding the domain and the shape of the product. I happily trade code quality for time when it's simple, throwaway, or not important.
But for developing a product to release, you want to be able to jump back in even if it's years later.
That said, I'm struggling with this with my newest product. Wavering between the two camps. Enforcing quality takes time that can be spent on more features...
devin 51 minutes ago [-]
I do both 1 and 2, because 1 is frequently served by some attention to 2 at times. This notion that it’s one or the other is unserious.
raincole 1 hours ago [-]
> The general public does not care about anything other than the capabilities and limitations of your product.
The developers don't care about that either. If developers cared, the whole npm ecosystem wouldn't exist.
yoyohello13 16 hours ago [-]
It seems pretty clear to me that the AI coding revolution has not led to better software. In fact, software seems more brittle than ever now.
Code quality is absolutely important. It’s just not a quality that’s easily visible to a layman. I can definitely feel the difference as a user when a program has been crafted with care.
bayindirh 9 hours ago [-]
> No one has ever made a purchasing decision based on how good your code is.
As stated by others, this is very false. Most if not all of the software I use is selected by its disk/memory footprint and performance. Having a small disk/memory footprint and good performance at the same time is a good indicator of code quality.
Moreover, after using computers for more than three decades, you get a feeling about the performance of a particular software suite. So an inefficient piece of code makes itself known in a loud way if you look the right way.
One of my favorite applications, Obsidian, generally performs well, but when you hit it just the right way (e.g. add a couple of PDFs and enable previewing), you can feel how sluggish it becomes.
Having a suite of well-written applications with a high performance/footprint ratio also allows me to do more with fewer resources and in less time. So, good code quality matters. It also almost guarantees the software suite will be maintained for longer.
Incidentally, I'm also in camp #2 and write my code with the same attention to detail. I have also written code that squeezed all the performance out of its systems, approaching the theoretical IPC limits of the processor it ran on.
theshrike79 3 hours ago [-]
I've built dozens of services and applications for companies.
ZERO times has anyone even mentioned disk/memory footprint. Performance maaybe, but no hard limits were defined in any contracts. And even those were "these things have to be processed within 24 hours because the law says so", not microsecond precision.
Even Obsidian is 440MB. It's a markdown editor with a built-in javascript scripting system. There's no reason for it to be almost half a gigabyte. Zero people have checked the directory size and went "nah, too big, won't use it".
bayindirh 2 hours ago [-]
> Zero people have checked the directory size and went "nah, too big, won't use it".
Nope, at least one. I have refused to install software just because it's too big. I resisted using Obsidian for exactly these reasons until I failed to find a credible alternative.
Well, there's ZimWiki, but it wasn't working on macOS well enough to use that daily.
asutekku 2 hours ago [-]
> Most if not all software I use is selected by its disk/memory footprint and performance.
If that were true, Electron apps would not exist and everything would be native software. But alas, most modern products, even before vibe-coding, are horrible performance-wise.
Of course it depends on the context, but consumer facing products have been awful in terms of performance for a while now.
Nevermark 22 hours ago [-]
> it's hard to talk about this stuff if we're not clear on which types of projects we're talking about.
It mystifies me when people don't intuit this.
For any suitably sized project, there are parts where elegance and friction removal are far more important than others. By an order or two of magnitude.
I have shipped beautifully honed, highly crafted code. Right alongside jank that was debugged into the "Well, it seems to work now" and "Don't touch anything behind this in-project API" category.
There are very good reasons and situations for both approaches, even in one project.
farmeroy 22 hours ago [-]
I think developers fall into two camps:
1. you care about shipping working, tested code that solves a specific business/user problem
2. you care about closing tickets that were assigned to you
singpolyma3 16 hours ago [-]
Some of us also care about craft and code quality instead
leptons 17 hours ago [-]
I think developers fall in to way more than 2 "camps".
roland35 23 hours ago [-]
That's true, but I think there is a gray area in between. As things scale up in one way or another, having high quality is important for both #1 and #2. It's hard to extend software that was designed poorly.
The question where experience comes in is knowing when quality is and isn't worth the time. I can create all sorts of cool software I couldn't before, because now I can quickly pump out "good enough" Android apps or React front ends! (Not trying to denigrate front end devs, it's just a skill I don't have)
tikhonj 16 hours ago [-]
> No one has ever made a purchasing decision based on how good your code is.
This is very much a "it's not the fall that kills you, it's the sudden stop at the end" sort of thing. (Same with the other variant I've heard, which is something like "no company has gone out of business because of tech debt".)
Code is as much a tool for developing and expressing conceptual models as it is for making computers do things. So not only does code quality have proximate impacts on engineering productivity and reliability, but, done well, it also improves the holistic design of the system you're building. You get better tools, faster, by putting some thought and care into your codebase and, especially, your core abstractions. Teams with good code move faster even in the short term and produce better tools and products.
Of course, it's not just a matter of code; you also need a culture that gives engineers the agency to make real, long-term decisions about what you're building (not just how) which, unfortunately, is rare to find in the modern tech industry :/ The dominant "high-output management" paradigm where code is seen as a virtually fungible "output" to be "delivered" loses the higher-order advantages of good code and good conceptual design, and leaves us with something much closer to the trade-off you describe. But there are other ways of approaching technical work that don't make this trade-off at all!
DesaiAshu 17 hours ago [-]
We crossed a threshold in the past couple months where I'm not sure I'd agree that "libraries other devs depend on" is the right cutoff for LLMs. In many cases, LLMs will write more concise, faster, and more accurate code. Not by one-shotting, but through a deeper search of the possibility space - not dissimilar to how Stockfish and other pre-LLM search algorithms worked
Modern harnesses are systems built with LLMs as one of many building blocks (incl. regex, test suites, linters). If it can be measured and verified, there's a good chance LLMs will optimize it
This is not a new concept. Humans stopped writing "artful" assembly many years ago, because Lattner and others made it much more efficient to rely on LLVM than to hand-optimize assembly
It's also been demonstrated in other domains within Google (4x4 matmul, silicon photonics, protein folding)
Interface heavy apps are not purely about objective function, they are about feel, comfort, usability - those apps will benefit heavily from humans. But subcomponents of these apps (eg. an algorithm to route packets efficiently) can often be better solved (somewhat objectively) by LLM-based solvers or other forms of RL.
However, writing assembly for the sake of art sounds rather interesting in 2026. Many of my favorite musicians and DJs are driving a resurgence in vinyl to help balance the computed future - and I think that's a great thing
nostrebored 11 hours ago [-]
what types of problems are you seeing this for? i'm wondering if it's just a difference in domains, but this is not our experience at all. llms will write something, but the odds that it is efficient for any problem with deep complexity has been basically zero.
if it really were fuzzing and finding different candidate spaces, then I'd expect it to be good at things like dynamic programming, where I've only seen it fail. usually i see it get stuck in a bad solution and just thrashes around in that minima. these are problems where we can construct a verifiable test space, and it will eventually wind up with a solution, but one that is thousands of lines long and uses no structure of the problem space
xigoi 2 hours ago [-]
You are assuming that people only write software to sell it to someone. Most software I write is either for myself or for an academic project, and in both cases, code quality definitely matters.
Waterluvian 20 hours ago [-]
It’s much more complex. Part of your value as an engineer is having a good feel for balancing the trade offs on your code.
Code is usually a liability. A means to an end. But is your code going to run for a minute, a month, a year, or longer? How often will it change? How likely are you going to have to add unforeseen features? Etc. Etc. Etc.
allknowingfrog 22 hours ago [-]
I think "make a product" is the important point of disagreement here. AI can generate code that users are willing to pay for, but for how long? The debate is around the long-term impact of these short-term gains. Code _is_ a means to an end, but well-engineered code is a more reliable means than what AI currently generates. These are ends of a spectrum and we're all on it somewhere.
You ever notice how everyone who drives slower than you is a moron and everyone who drives faster than you is a maniac? Your two camps have a similar bias.
zozbot234 19 hours ago [-]
> Code _is_ a means to an end, but well-engineered code is a more reliable means than what AI currently generates.
AI can help you make well-engineered code, but you have to ask for it because it's not what it will do by default. Prompt it with "Figure out how this crappy piece of code really works and document it in depth. Propose viable refactorings that could improve code quality" and it will do a much better job than the usual vibe-coded result.
ryandrake 20 hours ago [-]
Trying to describe craftsmanship always brings me back to the Steve Jobs quote:
“When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through.”
mikkupikku 20 hours ago [-]
Steve Jobs didn't really know anything about cabinetry, because using plywood / MDF in places where it won't be seen but which would benefit from dimensional stability is absolutely common, and there's no reason it shouldn't be.
ryandrake 20 hours ago [-]
You might be missing the point of the quote. He's not talking about whether it is commonly done. He's talking about the mentality that justifies sacrificing the aesthetic and artistry, in favor of things like "dimensional stability". He is giving a reason why it shouldn't be done, and people can disagree about whether the reason is a good one.
mikkupikku 19 hours ago [-]
He's saying that carpenters who make nice things wouldn't use a material like that, but they objectively do and this doesn't sacrifice the aesthetic or artistry.
He specifically calls out the use of the material in a place it wouldn't be seen as something that the carpenter making beautiful cabinets wouldn't do, but that's actually something they absolutely do. He's trying to argue some other point by way of this analogy, but his analogy is wrong because he doesn't know anything about it. It would be like if I was trying to argue my point by way of car analogy but the example I chose was to say that car enthusiasts insist on genuine OEM parts, I'm only exposing myself as somebody who doesn't understand car people.
tehnub 17 hours ago [-]
There are plenty of furniture makers that use all hardwood construction. It's just that they charge like $10K USD for a cabinet
ryandrake 12 hours ago [-]
He's saying "carpenters who don't do X are craftsmen". And you're saying "But other carpenters do X." That's true, but those other carpenters are not craftsmen.
slopinthebag 13 hours ago [-]
> He's saying that carpenters who make nice things wouldn't use a material like that
If they do, then they aren't making truly great things. It's as simple as that really.
If you cut corners, even if you think nobody will notice, some people will, and your product will never be truly great. Steve understood that.
throw310822 10 hours ago [-]
There is also type 3, the worst kind: the developer who enjoys writing code to "get things done", no matter what the requirements are and the quality of the code. Produces tons of spaghetti code that barely works, without much thought about architecture or quality, and duly accompanied by tons of unit tests that test the exact implementation, bugs and all.
steveBK123 22 hours ago [-]
I think type1 vs type2 dev requirements are also dependent on lifecycle / scale of your project, not just that its library / framework / mission critical software.
If you aren't sure whether your idea is even gonna work, whether you have PMF, or whether the company will be around next year.. then yeah.. speed over quality all day long.
On the other hand, I've never done the startup thing myself, and tend to work on software projects with 10-20 year lifecycles, where code velocity maximalism leads to outages, excess compute cost and reputational issues.. and good code matters again.
Re: "No one has ever made a purchasing decision based on how good your code is."
Sonos very much could go out of business for agreeing with this line. I can tell you lots of people stopped buying their products because of how bad their code quality became with the big app change debacle. Lost over a decade of built up good will.
Apple is going through this lately with the last couple major OS releases across platforms and whatever is going on with their AI. This despite having incredible hardware.
roncesvalles 10 hours ago [-]
This is a false dichotomy. Good code creates good products.
>No one has ever made a purchasing decision based on how good your code is.
Disagree. Users tend to be very sensitive to quality, in software more than anything else. (Of course there isn't always perfect transmission from "user" to "purchaser" but most likely for an upstart product, your user will be your customer. You're not Oracle or Microsoft yet.)
In fact, quality matters now especially more than before because the barrier to entry has reduced. If you have a great idea, but executed it poorly, it attracts second movers like a magnet. Quality is intimidating. Quality is moat.
Good software engineers (your camp 2) can anticipate the ways in which bad code results in a poor quality product, or one that is difficult to debug and evolve. That is the crux of good code. Good code is not craftsmanship in the same sense as making a beautiful painting or making colorful notes.
cortesoft 12 hours ago [-]
It's funny because I feel like I am both. Sometimes I am in it for the code itself, and the process of writing it. I see how the code will perfectly fit what I want to do, and I am so excited to make that happen. I want to make that thing.
Other times, I have something specific I want to accomplish, but I dread the amount of time it will take to make it happen.
Now, it is never that I don't know HOW to make it happen, it is that I know how, and I know how many steps it is and how many components there are to build to even get the simplest version running and I just dread it. I want the thing, but I don't want to spend the time to make the thing.
I have had so much fun recently making so many things that I have never gotten around to over the years, because I just couldn't justify the time.
I also have the time to tell the AI to add all the nice to haves, and handle all the edge cases that weren't worth the time before, etc.
I am having a blast. I still stop to write the fun bits when I want to, though. It is great because I only have to code the bits I want, that are fun.
voidfunc 19 hours ago [-]
It's the end of an era where the plucky code crafter gets to have a seat at the table of production. Those skills are going to become less and less useful going forward. The industry is going to stop hiring those types.
The future of software looks a lot more like factory production lines with a small group of architect-tier engineers working on a design with product management and then feeding it into the factory for prototyping and production.
If you're not an experienced late senior or principal engineer at your career stage by now there is basically no future for you in this industry. Lower end roles will continue to be reduced. People who can build and maintain the factory and understand its outputs are going to be the remaining high-value software talent.
marstall 7 hours ago [-]
| iterate on their product faster.
i've run into a situation where it really slowed iteration down, because i wasn't able at some point to explain in english what I wanted it to do and had to go into the code, which, lo and behold, I didn't understand. Ended up scrapping everything that had been generated and starting over by hand!
lelanthran 2 hours ago [-]
> I find most developers fall into one of two camps:
> 1. You treat your code as a means to an end to make a product for a user.
> 2. You treat the code itself as your craft, with the product being a vector for your craft.
Among the vocal devs, maybe. Most devs choose a trade-off between #1 and #2, leaning heavily towards #2.
And the reason is, very few people actually want the output of their labour to be poor, no matter how superficially good it looks.
I find, like the poster below me said, that the people presenting the false dichotomy you present are desperate to legitimise their production of lovecraftian code horrors.
It's a trick, a verbal one usually, that people who espouse woo and who know that they are BSing, use to sort of "borrow" legitimacy from a field that is already respectable. Like... ghost-believers referring to themselves as occult scientists. They throw in the word "scientist" in there to borrow the legitimacy and respectability of actual scientists[1].
Throwing in "user delight" or "useful to the user" into their arguments for vibe-coding is their way of borrowing the respectability of actual developers, who had always been developing for an actual user, and who cared about their user enough to target that specific use-case.
The folks in #1 are simply borrowing what they can from the respectable practitioners to paper over the fact that all they care about is themselves, not actual users.
The clear majority of them are hoping to hit a jackpot; the borrowed terms, phrases and words are simply a poor attempt to cover up their naked greed.
---------------------------
[1] There's probably a joke in there somewhere about "software engineers" :-)
k__ 9 hours ago [-]
"No one has ever made a purchasing decision based on how good your code is."
Because to determine good code, you need to see it and I'd presume most open source code is free.
konschubert 9 hours ago [-]
I fall strictly into camp 1, but I disagree on code quality.
Code quality makes the difference between a janky system that works most of the time and a rock solid system that is an enjoyment to use.
QA can only apply duct tape. If your state management isn’t clean, the UX will suck. If your functions aren’t clean, you will keep chasing bugs.
Luckily, AI is capable of writing good code. Today, that still requires some amount of hand holding, but it’s getting better.
gmerc 15 hours ago [-]
Is this like some kind of disorder? I'm asking because, as a healthcare tech professional, I'm reasonably sure you don't want me to adopt a "this product is for entertainment only" copilot and software of bad code quality. People have gotten radiation overdoses from those.
So I’m confused. I really am, help me understand your world view.
rainmaking 7 hours ago [-]
That's a fascinating way to look at it.
I've always found myself at the clean code end of the spectrum (which to me also means simple and flexible), because it makes it easier to be flexible for customer needs. So I like good code, but it's a means to an end.
est 9 hours ago [-]
> No one has ever made a purchasing decision based on how good your code is.
But many ppl will refuse a purchase because the product breaks randomly
theshrike79 3 hours ago [-]
I've used programs for 30 years. They have broken down randomly, despite being artisanally programmed by actual humans.
Being created by a human doesn't imbue any specific amount of reliability to a piece of software.
chunkyguy 4 hours ago [-]
> You treat your code as a means to an end to make a product for a user.
This sounds more like a product owner not a developer
amelius 21 hours ago [-]
> With that said, I do have respect for people in the latter camp.
Well, you certainly should. Those people made AI based coding a possibility in the first place.
pm 20 hours ago [-]
It is possible to exist in both camps. The quality of the process affects the quality of the product, and the quality of your thought affects the quality of the process. It's a cycle of continual learning, and from that perspective, thought, process and product are indivisible.
Treating code as a means to an end doesn't guarantee success for your product anymore than treating code as a craft.
Rapzid 16 hours ago [-]
It's not possible for two-camp believers to conceive of two-camp dwellers. That would be tantamount to a third, potentially superior camp.
The two-camp construct is a tool to establish the believer as a member of the supreme one camp group; apart from the lesser campers. Their entire identity and self worth is built around one-camp membership.
_the_inflator 10 hours ago [-]
The dude is right and I strongly side with him. However, what is missing is that I oscillate between states, like the TV megahertz that powered the first commercial Pang games, and this opens the space for more grey in between: altered states like "Ok, will fix later. Let's put a #TODO and ship it."
So instead of criticizing the gentleman for his dichotomy, I feel like adding more states to it to complete the picture. And I mean it; this is not making fun of someone. It means I tamed myself into stopping premature optimization, knowing the gigahertz won't care about me saving a cycle, and that I might even hinder masterfully crafted compilers from optimizing even more.
3. Partly awesome, partly not so much, but it doesn't need to be awesome.
4. I myself have to understand this masterpiece of human thinking six months and more from now, and after a 20-hour stint I marvel at the result, so better comment before committing the code into oblivion.
5. Embarrassing, but people are delighted.
6. This made headlines years ago, but some code doesn’t age well.
7. OMG, the OOP might look right, but a new paradigm rushes me into a refactoring frenzy and makes the code look better without breaking any features!
8. I used a tool to check for bottlenecks and it runs well, but looks like crap. So what do I do?
9. Is loop unrolling still a thing or not? Do compilers have headaches just like I do? Do they really care, or do they simply follow orders or adjust to the target platform and settings?
solid_fuel 20 hours ago [-]
It's easy to write off critics of this slop development as just "caring about the wrong thing", but that is couched in incorrect assumptions. This is the unfortunately common mistake of confusing taking responsibility with some sort of "caring about the code" in an artistic sense. I can certainly appreciate the artistry of well-written code, but I care about having a solid and maintainable code-base because I am accountable for what the code I write does.
Perhaps this is an antiquated concept which has fallen out of favor in silicon valley, but code doesn't just run in an imaginary world where there are no consequences and everything is fun all the time. You are responsible for the product you sell. If you sell a photo app that has a security bug, you are responsible for your customers' nude photos being leaked. If you vibe-code a forum and store passwords in plaintext, you are responsible for the inevitable breach and harm.
The "general public" might not care, but that is only because the market is governed by imperfect information. Ultimately the public are the ones that get hurt by defective products.
agentultra 4 hours ago [-]
True. I’ve never bought a piece of commercial software and wanted to inspect the source first before making that decision.
But I will demand my money back or sue you if your crappy code leaks my personal information, destroys my property, performs worse than advertised, or otherwise harms me in some way.
There was sloppy code before LLMs. It’s what they were trained on. And it’s why they generate slop.
All that code that was rushed out to meet an arbitrary deadline made up by the sales team, written by junior and lazy senior developers, pushed by the, “code doesn’t matter,” folks. Code written by the enterprise architecture astronauts with a class hierarchy deeper than the Mariana Trench. A few years down the line you get bloated, slow, hard to maintain spaghetti piles of dung. Windows rendering text that stutter when you scroll them. Virtual keyboards that take seconds to pop up. Browser tabs that take more available memory than was available to send astronauts to the moon and back.
When humans write it, you generally have a few people on a team who are concerned with these things. They try to rein in the slop-generating, "always be shipping" people. You need a mix of both. Because each line of code is a liability as much as it's a new feature.
Rapzid 16 hours ago [-]
I find there are two types of people.
People who think developers fall into one of two camps.
And people worth listening to.
smusamashah 18 hours ago [-]
I don't think 1 and 2 are that clear cut. We recently started using Codex at work. I never thought it would be able to do even close to what it has been doing for me in our legacy code base.
But I am not a fan of the code it writes most of the time. I want my code to read and behave a certain way. I cannot submit that code, even if it works, if I can't explain it or just don't like it.
I then iterate over that code myself or ask AI until it has the shape I agree with.
For my personal side projects I don't care as much what the code looks like, as long as it works correctly and is easily modifiable. But for work, it remains my responsibility no matter which tool was used.
aryehof 10 hours ago [-]
I feel like you're missing a third alternative:
You (are required to) treat your code as having to fulfill both functional requirements and declared non-functional requirements, including measures of maintainability, reliability, performance, and security and (regulatory/legal) compliance.
Swizec 22 hours ago [-]
> The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.
I am in both camps. Always have been.
Code janitors are about to be in high demand. We’ve always been pretty popular with leadership and it’s gonna get even more important.
Treat code design and architecture as the thing that lets your slop cannons (90% of engineers even pre-AI) move fast without breaking things
My output is org velocity.
joshmarlow 22 hours ago [-]
> Treat code design and architecture as the thing that lets your slop cannons (90% of engineers even pre-AI) move fast without breaking things
I'm currently of the opinion that humans should be laser focused on the data model. If you've got the right data model, the code is simpler. If you've got the relevant logical objects and events in the database with the right expressivity, you have a lot of optionality for pivoting as the architecture evolves.
It's about that solid foundation - and of course lots of tests on the other side.
Swizec 20 hours ago [-]
> I'm currently of the opinion that humans should be laser focused on the data model
yes. good programmers talk about data structures, bad programmers talk about code
JambalayaJimbo 22 hours ago [-]
How do you even converge on the right data model without refining code? Elegant code and elegant data model are the exact same thing!
lmm 12 hours ago [-]
"Show me your code and conceal your data structures, and I shall continue to be mystified. Show me your data structures, and I won't usually need your code; it'll be obvious."
Lots of people try to make their code better by revising their procedures and don't even think about changing their data model, and generally fail. You might not be able to change your data model without changing your code, but they're different activities.
theshrike79 3 hours ago [-]
Or not using the data model properly, zero foreign keys in databases, no triggers checking column contents etc.
"We'll do it on the app level".
sigh
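The database-level checks mentioned above can be sketched with Python's stdlib sqlite3 (the `users`/`orders` schema here is hypothetical, purely for illustration): a foreign key plus a column `CHECK` reject bad data no matter which app wrote it, which is the whole point of not doing it "on the app level".

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),  -- foreign key, not app-level lookup
        qty INTEGER NOT NULL CHECK (qty > 0)            -- column check, not app-level validation
    )
""")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
conn.execute("INSERT INTO orders VALUES (1, 1, 2)")  # valid: user 1 exists, qty > 0

# An orphaned order is rejected by the database itself, regardless of the caller:
try:
    conn.execute("INSERT INTO orders VALUES (2, 99, 1)")  # no user 99
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Any client that talks to this database gets the same guarantee, which is exactly what app-level checks cannot promise once a second code path starts writing rows.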
bitwize 22 hours ago [-]
It's called "systems analysis". Programmers are generally pretty terrible at it because it requires holistic, big-picture thinking. But it used to take up the bulk of the design activity for a new enterprise system.
skydhash 15 hours ago [-]
And the result was usually a complex system that no one needed and no one could maintain. Successful refinement happens when you build something from the ground up, refining it as you're adding features, not locking yourself in the ivory tower drawing UML diagrams and drafting requirements. Doing and thinking go together, not separately.
jcgl 8 hours ago [-]
Yeah, the field of software engineering has come a long way since then. But just because previous implementations of the analysis phase were flawed doesn't mean that the phase itself was flawed.
benatkin 22 hours ago [-]
I agree and I like how you describe it. The phrase from Django, "perfectionists with deadlines", also resonates with me.
acedTrex 22 hours ago [-]
> My output is org velocity.
Amen. Slow and steady, and the feature flywheel just keeps getting faster.
seamossfet 22 hours ago [-]
>slop cannons
I am stealing that phrase haha
ppqqrr 20 hours ago [-]
i've learned in two years of unemployed hacking that there is a camp 3: you treat code like an evolving life form, requiring both ingenious introspective craft and utilitarian grit to survive and thrive. a product is a trajectory of software in time, and software is a trajectory of code in time; this is no different from how our genetic code determines our physical existence, which then shapes our lives. there is no spectrum between code vs product; if you want to remain relevant in software design, you must see them as layers of a larger whole (and there are many, many more layers once you go beyond the binary of code VS product).
CodeMage 22 hours ago [-]
> No one has ever made a purchasing decision based on how good your code is.
There are two reasons for this. One is that the people who make purchasing decisions are often not the people who suffer from your bad code. If the user is not the customer, then your software can be shitty to the point of being a constant headache, because the user is powerless to replace it.
The other reason is that there's no such thing as "free market" anymore. We've been sold the idea that "if someone does it better, then they'll win", but that's a fragile idea that needs constant protection from bad actors. The last time that protection was enacted was when the DOJ went against Microsoft.
> Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.
Any semblance of accountability for that has been diluted so much that it's not worth mentioning. A bug someone wrote into some cloud service can end up causing huge real-world damage in people's lives, but those people are so far removed from the suits that made the important decisions that they're powerless to change anything and won't ever see that damage redressed in any way.
So yeah, I'm in camp #2 and I'm bitter about AI, because it's just accelerating and exacerbating the enshittification.
Someone on HN wrote recently that everyone who's foaming at the mouth about how AI helps us ship faster is forgetting that velocity is a vector -- it's not just about how fast you're going, but also in what direction.
I'd go further and say that I'm not even convinced we're moving that much faster. We're just cranking out the code faster, but if we actually had to review that code properly and make all the necessary fixes, I'm pretty sure we would end up with a net loss of velocity.
yakattak 22 hours ago [-]
> No one has ever made a purchasing decision based on how good your code is.
If you have buggy software, people don’t use it if there are alternatives. They don’t care about the code but hard to maintain, buggy code will eventually translate to users trying other products.
theshrike79 3 hours ago [-]
Good code can also be buggy, bad code can be buggy.
The thing here was that if you have two boxes that take the same input and produce the same output at the same speed, do you care what the insides look like?
What if one is delivered in 4 days and the other in 30 days and costs more? Which one will you pick?
SwtCyber 4 hours ago [-]
Most real-world work sits somewhere in the middle
giancarlostoro 20 hours ago [-]
> The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.
I'm weird: I'm part of camp 2, but I think AI can be used to really craft some interesting things. While I appreciate camp 1, camp 2 is what produces better codebases that are easier to maintain. I and others have realized that the best practices for humans are also the best practices for getting AI models to fit your code.
fasterik 20 hours ago [-]
I think this is a false dichotomy. If you're passionate about your craft, you will make a higher quality product. The real division is between those who measure the success of a project in:
- revenue/man-hour, features shipped/man-hour, etc.
- ms response time, GB/s throughput, number of bugs actually shipped to customers, etc.
People in the second camp use AI, but it's a lot more limited and targeted. And yes, you can always cut corners and ship software faster, but it's not going to be higher quality by any objective metric.
culi 19 hours ago [-]
Personally I fall into the first camp and have quite a lot of criticisms of AI-usage. The parts of my job that were the easiest are now done by AI and the parts of my job that were the worst have exploded and are most of what I do now.
Code quality isn't just a fetish. It has real implications for security and the final product.
I've also found that unmaintainable codebases aren't just hard to maintain for humans. LLMs seem to struggle with them just as much
sornaensis 19 hours ago [-]
It all depends on the tools. AI will surely give a competitive advantage to people working with better languages and tooling, right? Because they can tell the AI to write code and tests in a way that quashes bugs before they can even occur.
And then they can ship those products much faster than before, because human hours aren't being eaten up writing out all of these abstractions and tests.
The better tooling will let the AI iterate faster and catch errors earlier in the loop.
Right?
singpolyma3 17 hours ago [-]
Can I be both? When I wish, I can take great pride in the craft of a particular piece of code.
But also I know when to put up and make the damn thing work.
peacebeard 20 hours ago [-]
I think this is a false dichotomy. Maybe there is some theoretical developer who cares about their craft only due to platonic idealism, but most developers who care about their craft want their code to be correct, fast, maintainable, usable, etc. in ways that do indeed benefit its users. At worst, misalignment in priorities can come into play, but it's much more subtle than developers either caring or not caring about craft.
9rx 2 hours ago [-]
> The people who typically have the most negative things to say about AI fall into camp #2
I think more often they simply picked the wrong programming language as a target. In my experience, AI is especially bad at writing Typescript/Javascript, which happens to overlap with the most widely used language. I have negative things to say about AI too when developing in that ecosystem. If I only ever used AI in that ecosystem, I'd probably tell you it is useless along with the rest of them.
But my daily work sees me working in more than one language and when I am in some other language environments I have no reservations about AI whatsoever. AI vs good code is no longer even at odds with each other. In those certain languages, the models write good, stable, production-ready code pretty much all the time. It is really quite amazing.
dirkc 21 hours ago [-]
I view every single line of code as a liability; the best solution is the one where you can avoid writing any code at all. Does that put me into group 1 or group 2?
coldtea 21 hours ago [-]
>No one has ever made a purchasing decision based on how good your code is.
That, however, is what makes for stable systems, deeply knowledgeable engineers, and structurally building the basis for the future.
If all you care about is getting money for your product slop, it's no different from late-night-marketed crap, or fast fashion...
mamami 19 hours ago [-]
This type of thinking is exactly how you end up with 50MB webpages and is core to the general sloppification of software
shinycode 19 hours ago [-]
That’s nonsense. Every piece of software people pay for or use as their professional tool should be carefully crafted. Would you buy a house/car/anything of value knowing that the people who built it don’t care about their craft, as long as you got it in your hands and paid for it? Or maybe you produce something cheap and worthless
jmyeet 21 hours ago [-]
I'm going to re-characterize your categorization:
1. The people who don't understand (nor care) about the risks and complexity of what they're delivering; and
2. The people that do.
Widespread AI usage is going to be a security nightmare of prompt injection and leaking credentials and PII.
> No one has ever made a purchasing decision based on how good your code is.
This just isn't true. There's a whole process in purchasing software, buying a company or signing a large contract called "due diligence". Due diligence means to varying degree checking how secure the product is, the company's processes, any security risks, responsiveness to bugfixes, CVEs and so on.
AI is going to absolutely fail any kind of due diligence.
There's a little thing called the halting problem, which in this context basically means there's no way to guarantee that the AI will be restricted from doing anything you don't want it to do. An amusing example was an Air Canada chatbot that hallucinated a refund policy that a court said it had to honor [1].
How confident are we going to be that AIs won't leak customer information, steal money from customers and so on? I'm not confident at all.
I'm in neither camp. I write code to make our product work, which we can then sell as a unit to the end user.
If I do a bad job, I get a bunch of bug reports, I get called out for writing bugs, etc. We've been pushed to use AI, and it's hurt more than it's helped with our code base.
yobbo 21 hours ago [-]
> No one has ever made a purchasing decision based on how good your code is.
I routinely close tabs when I sense that low-quality code is wasting time and resources, including e-commerce sites. Amazon randomly cancelled my account so I will never shop from them. I try to only buy computers and electronics with confirmed good drivers. Etc.
throwatdem12311 21 hours ago [-]
> No one has ever made a purchasing decision based on how good your code is
Because the ones that sell crappy code don’t sell to people that can tell the difference.
You think I’d pay for Jira or Confluence if it wasn’t foisted upon me by a manager that has got it in with the Atlassian sales rep?
I don’t even need to see Atlassian’s source code to know it’s sh*t.
mememememememo 20 hours ago [-]
Yes, great, so we sell shit. No one buys a ticket because of how safe the 737 Max plane is, or buys a post office franchise based on how good the software Fujitsu sold the post office is, but fuck, let's take some pride in ourselves and try to ship quality work.
quantummagic 22 hours ago [-]
This is a very useful insight. It nicely identifies part of the reason for the stark bifurcation of opinion on AI. Unfortunately, many of the comments below it are emotional and dismissive, pointing out its explanatory limitations, rather than considering its useful, probative value.
solid_fuel 20 hours ago [-]
I find most home inspectors fall into one of two camps:
1. You treat the house as a means to an end to make a living space for a person.
2. You treat the building construction itself as your craft, with the house being a vector for your craft.
The people who typically have the most negative things to say about buildings fall into camp #2 where cheap unskilled labor is streamlining a large part of what they considered their art while enabling people in group #1 to iterate on their developments faster.
Personally, I fall into the first camp.
No one has ever made a purchasing decision based on how good the pipes inside the walls are.
The general public does not care about anything other than the square footage and color of your house. Sure, if you mess up and one of the houses collapses then that'll manifest as an outcome that impacts the home owner negatively.
With that said, I do have respect for people in the latter camp. But they're generally best fit for homes where that level of craftsmanship is actually useful (think: mansions, bridges, roads, things I use, etc).
I just feel like it's hard to talk about this stuff if we're not clear on which types of construction we're talking about.
Gigachad 20 hours ago [-]
The general public does not know how to identify or care about the pipes in the walls. They do care when those pipes bust and cause tens of thousands of dollars of damage. That's why they hire someone with a keen eye to act on their behalf.
solid_fuel 19 hours ago [-]
The general public does not know how to identify or care about good code. They do care when their data gets leaked or their computer gets hacked or their phone gets ransomware. That’s why they hire software engineers, who are supposed to care about the quality of the code they ship.
Gigachad 13 hours ago [-]
Agreed. No one needs to care about obsessively neat and orderly code, but there comes a point where very real-world issues surface that users actually care about, contrary to some of the opinions here. Rushing out vibeslop at a rapid pace will eventually have consequences.
tehnub 17 hours ago [-]
Brilliant
>Sure, if you mess up and one of the houses collapses then that'll manifest as an outcome that impacts the home owner negatively.
lol
bakugo 21 hours ago [-]
"I have an opinion and everyone on the planet agrees with me, if you disagree, you don't matter" is not a useful insight, and is, in fact, far more emotional and dismissive than any of the replies to it.
quantummagic 21 hours ago [-]
That's a horribly broken misrepresentation of what was said in the original post. If that's what you took away from it, you're not reading carefully or critically.
emp17344 19 hours ago [-]
That is, in fact, how it comes across. You’re labeling perceived opponents as “emotional” and “dismissive”.
bakugo 13 hours ago [-]
No, that's exactly what the original post said.
> No one has ever made a purchasing decision based on how good your code is.
This has since been edited (I suspect OP later realized how ridiculous of a claim it was) but it's an objectively false statement that blindly projects OP's stance on software quality onto the entire population.
People have, in fact, stopped using software because it was too slow or buggy. I've done it, many others have done it. To give you a real example, I'm a fan of JetBrains IDEs functionality-wise, but I've been seriously considering moving away from them due to how bloated and unoptimized they've become in recent years - when your IDE feels more sluggish than one built on top of Electron, something has gone seriously wrong. I don't have to actually read the code itself to know it's bad code, I can feel it by simply using the software.
Just because you don't care doesn't mean others don't. Modern society was not built by people who didn't care about their craft, nor was it built by designing everything with the lowest common denominator in mind.
antonvs 6 hours ago [-]
> It nicely identifies part of the reason for the stark bifurcation of opinion on AI.
It might do that, if it had any basis in reality. Why do you believe it does?
coffeefirst 21 hours ago [-]
This is absolutely false. The purpose of craft is to make a good product.
I don’t care what kind of steel you used to design my car, but I care a great deal that it was designed well, is safe, and doesn’t break down all the time.
Craft isn’t a fussy thing.
theredsix 20 hours ago [-]
With AI you actually don't need to choose anymore. Well laid out abstractions actually make AI generate code faster and more accurately. Spending the time in camp 2 to design well and then using AI like camp 1 gives you the best of both worlds.
Shorel 5 hours ago [-]
People make these kinds of decisions every day, every second.
Shit bloated code is one of the reasons Epic Launcher is extremely behind in market share when compared to Steam.
Sure, they ship their product fast. They can iterate faster than Valve. They also add technical debt with each iteration.
Also: we are almost all using a Chrome derived browser instead of Firefox, old IE, old Opera, because of performance and quality. They just won the internet because of the quality of their code. Besides that, all browsers let you browse the internet.
When people can choose, they choose quality most of the time.
meheleventyone 22 hours ago [-]
You realize you’re essentially building a false dichotomy? I work in video games, where code really is a means to an end, but I still see that authorship is important even if that code is uuuuugly, because it is the expression of the game itself. From that perspective I’m worried about neither craft nor product, but about my ability to express myself through code as the game behaves. Although if you really must have only two categories, I’d be in camp one.
As such AI is a net negative as it would be in writing a novel or making any other kind of art.
tpoacher 8 hours ago [-]
"Made with Slop" is the equivalent of "Made in China" (in the olden derogatory sense).
Just because one falls in the "Cha bu duo" camp and (potentially) looks down on the "Kaizen" types doesn't mean the two products are equivalent.
That also doesn't mean that slop / cha bu duo / made in china products are bad, mind you. They have their place, and occasionally a Kaizen approach would be detrimental to getting "something" done quick and dirty that will likely work ok anyway. The danger is in believing that just because they're "ok" this means they are equivalent (or at least largely overlap) with the more refined products, which is demonstrably false and can be a dangerous attitude to have.
JambalayaJimbo 22 hours ago [-]
You need both to be a great software engineer. The "means to an end" people will happily slop out PRs and let the "craft" people worry about it.
Thanemate 20 hours ago [-]
How about the type of developer who comes up with statistics and made-up "camps", as if enjoying the craft itself makes you, by necessity of the false premise, unable to enjoy the fact that what you made is useful enough that people choose your product because you are obsessed with doing good work?
John Carmack has talked about it in a podcast a few years ago, and he's the closest popular programmer that I can think of who was simply obsessed with milking every tiny ounce of GPU performance, yet none of his effort would matter if Doom and Quake weren't fun games.
rkagerer 10 hours ago [-]
The best of us have a little bit of both 1 and 2.
squidsoup 20 hours ago [-]
false dichotomy - you need to care about both.
pron 21 hours ago [-]
I generally fall into the first camp, too, but the code that AI produces is problematic because it's code that will stop working in an unrecoverable way after some number of changes. That's what happened in the Anthropic C compiler experiment (they ended up with a codebase that wasn't working and couldn't be fixed), and that's what happens once every 3-5 changes I see Codex making in my own codebase. I think, if I had let that code in, the project would have been destroyed in another 10 or so changes, in the sense that it would be impossible to fix a bug without creating another. We're not talking style or elegance here. We're talking ticking time bombs.
I think that the real two camps here are those who haven't carefully - and I mean really carefully - reviewed the code the agents write and haven't put their process under some real stress test vs those who have. Obviously, people who don't look for the time bombs naturally think everything is fine. That's how time bombs work.
I can make this more concrete. The program wants to depend on some invariant, say that a particular list is always sorted, and the code maintains it by always inserting elements in the right place in the list. Other code that needs to search for an element depends on that invariant. Then it turns out that under some conditions - due to concurrency, say - an element is inserted in the wrong place and the list isn't sorted, so one of the places that tries to find an element in the list fails to find it. At that point, it's a coin toss of whether the agent will fix the insertion or the search. If it fixes the search, the bug is still there for all the other consumers of the list, but the testing didn't catch that. Then what happens is that, with further changes, depending on their scope, you find that some new code depends on the intended invariant and some doesn't. After several such splits and several failed invariants, the program ends up in a place that nothing can be done to fix a bug. If the project is "done" before that happens - you're in luck; if not, you're in deep, deep trouble. But right up until that point, unless you very carefully review the code (because the agents are really good at making code seem reasonable under cursory scrutiny), you think everything is fine. Unless you go looking for cracks, every building seems stable until some catastrophic failure, and AI-generated code is full of cracks that are just waiting for the right weight distribution to break open and collapse.
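That failure mode can be sketched in a few lines (hypothetical names; a sorted "events" list using Python's bisect, which requires the sorted-order invariant to hold):

```python
import bisect

# Invariant: `events` is kept sorted, so lookups can use binary search.
events = []

def add_event(ts):
    bisect.insort(events, ts)  # correct insert: maintains the invariant

def add_event_buggy(ts):
    events.append(ts)          # some code path violates the invariant

def has_event(ts):
    # Relies on the invariant: binary search only works on sorted data.
    i = bisect.bisect_left(events, ts)
    return i < len(events) and events[i] == ts

add_event(10)
add_event(30)
add_event_buggy(20)            # events is now [10, 30, 20]

print(has_event(20))           # False: element present, search misses it

# The "coin toss" fix is to patch the *search* to a linear scan,
# which makes the test pass while leaving the broken invariant in
# place for every other consumer of the list:
def has_event_patched(ts):
    return ts in events

print(has_event_patched(20))   # True
```

Patching `has_event` instead of `add_event_buggy` is exactly the wrong choice the comment describes: the symptom disappears, the invariant stays broken, and every future consumer that assumes sorted order inherits the time bomb.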
So it sounds to me that the people you think are in the first camp not only just care how the building is built as long as it doesn't collapse, but also believe that if it hasn't collapsed yet it must be stable. The first part is, indeed, a matter of perspective, but the second part is just wrong (not just in principle but also when you actually see the AI's full-of-cracks code).
zozbot234 19 hours ago [-]
> The program wants to depend on some invariant, say that a particular list is always sorted, and the code maintains it by always inserting elements in the right place in the list.
Invariants must be documented as part of defining the data or program module, and ideally they should be restated at any place they're being relied upon. If you fail to do so, that's a major failure of modularity and it's completely foreseeable that you'll have trouble evolving that code.
pron 19 hours ago [-]
Right, except even when the invariants are documented agents get into trouble. Virtually every week I see the agent write strange code with multiple paths. It knows that the invariant _should_ hold, but it still writes a workaround for cases it doesn't. Something I see even more frequently is where the agent knows a certain exception shouldn't occur, but it does, so half the time it will choose to investigate and half the time it says, oh well, and catches the exception. In fact, it's worse. Sometimes it catches exceptions that shouldn't occur proactively as part of its "success at all costs" drive, and all these contingency plans it builds into the code make it very hard (even for the agent) to figure out why things go wrong.
Most importantly, this isn't hypothetical. We see that agents write programs that after some number of changes just collapse because they don't converge. They don't transition well between layers of abstractions, so they build contingencies into multiple layers, and the result is that after some time the codebase is just broken beyond repair and no changes can be made without breaking something (and because of all the contingencies, reproducing the breakage can be hard). This is why agents don't succeed in building even something as simple as a workable C compiler even with a full spec and thousands of human-written tests.
If the agents could code well, no one would be complaining. People complain because agent code becomes structurally unsound over time, and then it's only a matter of time until it collapses. Every fix and change you make without super careful supervision has a high chance of weakening the structure.
zozbot234 18 hours ago [-]
Agents don't really know the whole codebase when they're writing the code, their context is way too tiny for that; and trying to grow context numbers doesn't really work well (most of it gets ignored). So they're always working piece-meal and these failures are entirely expected unless the codebase is rigorously built for modularity and the agent is told to work "in the small" and keep to the existing constraints.
pron 16 hours ago [-]
> Agents don't really know the whole codebase when they're writing the code
Neither do people, yet people manage to write software that they can evolve over a long time, and agents have yet to do that. I think it's because people can move back and forth between levels of abstraction, and they know when it's best to do it, but agents seem to have a really hard time doing that.
On the other hand, agents are very good at debugging complex bugs that span many parts of the codebase, and they manage to do that even with their context limitations. So I don't think it's about context. They're just not smart enough to write stable code yet.
zozbot234 5 hours ago [-]
> Neither do people, yet people manage to write software that they can evolve over a long time
You need a specific methodology to do that, one that separates "programming in the large" (the interaction across program modules) from "programming in the small" within a single, completely surveyable module. In an agentic context, "surveyable" code realistically has to imply a manageable size relative to the agent's context. If the abstraction boundaries across modules leak in a major way (including due to undocumented or casually broken invariants) that's a bit of a disaster, especially wrt. evolvability.
pron 5 hours ago [-]
Agents just can't currently do that well. When you run into a problem when evolving the code to add a new feature or fix a bug, you need to decide whether the change belongs in the architecture or should be done locally. Agents are about as good as a random choice in picking the right answer, and there's typically only one right answer. They simply don't have the judgment. Sometimes you get the wrong choice in one session and the right choice in another.
But this happens at all levels because there are many more than just two abstraction levels. E.g. do I change a subroutine's signature or do I change the callsite? Agents get it wrong. A lot.
Another thing they just don't get (because they're so focused on task success) is that it's very often better to let things go wrong in a way that could inform changes rather than get things to "work" in a way that hides the problem. One of the reasons agent code needs to be reviewed even more carefully than human code is that they're really good at hiding issues with potentially catastrophic consequences.
zozbot234 4 hours ago [-]
> Agents are about as good as a random choice in picking the right answer, and there's typically only one right answer.
That's realistically because they aren't even trying to answer that question by thinking sensibly about the code. Working in a limited context with anything they do leaves them guessing and trying the first thing that might work. That's why they generally do a bit better when you explicitly ask them to reverse engineer/document a design of some existing codebase: that's a problem that at least involves an explicit requirement to comprehensively survey the code, figure out what part matters, etc. They can't be expected to do that as a default. It's not even a limitation of existing models, it's quite inherent to how they're architected.
pron 4 hours ago [-]
Yes, and I think there's a fundamental problem here. The big reason the "AI thought leadership" claims that AI should do well at coding is that there are mechanical success metrics like tests. Except that's not true. The tests cover the behaviour, not the structure. It's like constructing a building where the only tests are whether floorplans match the design. It makes catastrophic structural issues easy to hide. The building looks right, and it might even withstand some load, but later, when you want to make changes, you move a cupboard or a curtain rod only to have the structure collapse because that element ended up being load-bearing.
It's funny, but one of the lessons I've learnt working with agents is just how much design matters in software, and that it isn't just a matter of craftsmanship pride. When you see the codebase implode after the tenth new feature and realise it has to be scrapped because neither human nor AI can salvage it, the importance of design becomes palpable. Before agents it was hard to see because few people write code like that (just as no one would think to make a curtain rod load-bearing when building a structure).
And let's not forget that the models hallucinate. Just now I was discussing architecture with Codex, and what it says sounds plausible, but it's wrong in subtle and important ways.
zozbot234 4 hours ago [-]
> The big reason the "AI thought leadership" claim that AI should do well at coding is because there are mechanical success metrics like tests.
I mean, if you properly define "do well" as getting a first draft of something interesting that might or might not be a step towards a solution, that's not completely wrong. A pass/fail test is verified feedback of a sort, that the AI can then do quick iteration on. It's just very wrong to expect that you can get away with only checking for passing tests and not even loosely survey what the AI generated (which is invariably what people do when they submit a bunch of vibe-coded pull requests that are 10k lines each or more, and call that a "gain" in productivity).
pron 1 hours ago [-]
It's not completely wrong if you're interested in a throwaway codebase. It is completely wrong if what you want is a codebase you'll evolve over years. Agents are nowhere close to offering that (yet) unless a human is watching them like a hawk (closer than you'd watch another human programmer, because human programmers don't make such dangerous mistakes as frequently, and when they do make them, they don't hide them as well).
skydhash 20 hours ago [-]
It can be especially bad if the architecture is layered, with each layer having its own invariants. In a music player, for example, you may have the concept of a queue in the domain layer, but in the UI layer you may have additional constraints that do not relate to it. Then the agent decides to fix a bug in the UI layer because the description is of a UI bug, while it’s in fact a queue bug
mikkupikku 20 hours ago [-]
Shit like this is why you really have to read the plans instead of blindly accepting them. The bots are naturally lazy and will take short cuts whenever they think you won't notice.
yurimo 12 hours ago [-]
> No one has ever made a purchasing decision based on how good your code is.
When I make a purchasing decision, I expect the payment to go through quickly and correctly, and for whatever I purchase to arrive in a reasonable time. All of this rests on the reputation of the software being solid. If a user hears a whiff of a purchase not being executed correctly, or of money or goods going somewhere else, that is a death sentence for your company.
Industry is now pushing for an agentic web where agents can do this on your behalf. But if we have slop foundations and then add unstable models that can hallucinate and make mistakes on top of them, it's just a recipe for catastrophe. I think relegating 2) into the category of only mission-critical software escapes the reality of how much reliability goes into the everyday services people use.
tclancy 22 hours ago [-]
Now you done it! Yeah, one of the difficult things is being able to see both sides. At the end of the day, I happen to write code because that's how I can best accomplish the things I need to do with the minimum of effort. While I do take pride in elegance and quality of code, it is always a means to an end. When I start gold plating, I try to remind myself of the adage I learned in a marketing class: No one ever needed a drill, they needed the ability to make holes.
It is strange, but not really upsetting to me, that I am not particularly anal about the code Claude is generating for me anymore but that could also be a function of how low stakes the projects are or the fact nothing has exploded yet.
joe_the_user 21 hours ago [-]
I was just using an app that competes with airbnb. That the app's code was extraordinarily unreliable became a significant factor in my interactions with others on the app; in particular, I gradually realized I couldn't be sure messages were delivered or that data was up to date.
That influenced some unfortunate interactions with people and meant that no one could be held to their agreements since you never knew if they received the agreements.
So, well, code quality kind of matters. But I suppose you're still right in a sense - currently people buy and use complete crap.
somewhereoutth 21 hours ago [-]
3. You use the act of writing code to think about a given problem, and by doing so not only produce better code but also gain a deeper understanding of the problem itself - in combination, a better product all around.
logicchains 22 hours ago [-]
It's perfectly possible to write very clean code with AI, it just takes a lot more time and prompting.
Gigachad 20 hours ago [-]
Easier to just write it yourself.
camgunz 10 hours ago [-]
This binary is old and tired. It's possible to want to ship things and think LLMs make that harder. You don't engage with TFA at all, which is exploring this exact point: that the more slop that gets into prod the harder it will be to add features and fix bugs. If you like LLMs for coding and care about shipping long-term, this should be something you have an answer for.
uduni 10 hours ago [-]
Slop becomes impossible to maintain, and eventually product velocity slows down. Maybe it's ok for an ultra simple todo app, but for most apps code quality absolutely matters... Users expect snappy UX and all the new bells and whistles.
Why did WhatsApp grow so big while thousands of previous chat apps didn't? Code quality (scalability).
snthpy 11 hours ago [-]
Well put. I agree.
ggregoire 21 hours ago [-]
3. Coding is fun, prompting not so much
otabdeveloper4 8 hours ago [-]
Are you seriously claiming that technical debt doesn't exist?
hu3 8 hours ago [-]
I read it more like:
Tech debt is over-indexed by many, and most money doesn't care as long as it works reasonably well.
otabdeveloper4 4 hours ago [-]
It's called "debt" because you will be forced to pay it off eventually.
In other words, it "works reasonably well" until it doesn't. That point might break your business. Many such cases.
paganel 9 hours ago [-]
It's not so much about art and craft as about knowing that less is more, and that adding complexity to a thing (almost any thing) will only bring greater trouble down the road. We've had to learn this as an industry for how long now? I'd say 70 years, give or take (I've only been a professional programmer for about 20 years, if that counts), so a new methodology coming in and claiming that there is indeed a silver bullet might not be that well-received.
ramsaybolton 17 hours ago [-]
Okay tell this to game devs
keybored 19 hours ago [-]
This is a Venus v.s. Mars developer trope at this point.
> The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.
That’s fine for people to argue those things.
My criticisms of AI are mainly
1. The principle of the GenAI approach
2. Political
The first point is about how stupid the GenAI approach is (I could link to the arguments). But I have left the door open for pure results, i.e. demonstrating that it (despite my belief) works in practice. So this is not about craftsmanship.
I’ve previously commented that I would respect a more principled approach even though it takes my craft.[1]
> Personally, I fall into the first camp.
Of course you do. Because...
> No one has ever made a purchasing decision based on how good your code is.
In these dichotomies the author typically puts himself forward as the pragmatist and the other side as the ones who care about things that are just irrelevant to the purchasing decision or whatever.
But the AI haters have made real arguments against AI, against the people behind AI, and so on. It’s not a matter of vibes. So maybe respond to those arguments? We don’t need another armchair lesson in psychological inclinations.
Be a pragmatist for all I care. But beware of the bloodless pragmatist who only sees what is, essentially, instant product gratification and not what comes after, or from the sides, or from below.
> No one has ever made a purchasing decision based on how good your code is. The general public does not care about anything other than the capabilities and limitations of your product.
The capabilities and limitations of your product are defined in part by how good the code is. If you write a buggy mess (whether you write it yourself or vibe code it), people aren't going to tolerate that unless your software has no competitors doing better. People very much do care about the results that good code provides, even if they don't care about the code as an end in itself.
efromvt 18 hours ago [-]
I think this is exactly the point though (maybe more of the link than of this comment) - a sufficiently good product by all external quality metrics is fine even if the code is written on one line in a giant file or some other monstrosity. As long as one black box behaves the same way as another in all dimensions, they are competitive. You can argue that internal details often point to an external deficiency, but if they don’t, then there is no competitive pressure.
imiric 22 hours ago [-]
While creating good software is as much of an art as it is a science, this is not why the craft is important. It is because people who pay attention to detail and put care into their work undoubtedly create better products. This is true in all industries, not just in IT.
The question is how much the market values this, and how much it should.
For one-off scripts and software built for personal use, it doesn't matter. Go nuts. Move fast and break things.
But the quality requirement scales proportionally with how many people use and rely on the software. And not just users, but developers. Subjective properties like maintainability become very important if more than one developer needs to work on the codebase. This is true even for LLMs, which can often make a larger mess if the existing code is not in good shape.
To be clear, I don't think LLMs inevitably produce poor quality software. They can certainly be steered in a good direction. But that also requires an expert at the wheel to provide good guidance, which IME often takes as much, if not more, work than doing it by hand.
So all this talk about these new tools replacing the craft of programming is overblown. What they're doing, and will continue to do unless some fundamental breakthrough is reached, is make the creation of poor quality software very accessible. This is not the fault of the tools, but of the humans who use them. And this should concern everyone.
packetlost 23 hours ago [-]
I agree on the software dev camps.
> The general public does not care about anything other than the capabilities and limitations of your product.
It's absolutely asinine to say the general public doesn't care about the quality and experience of using software. People care enough that Microsoft's Windows director sent out a very tail-between-legs apology letter due to the backlash.
It's as it always has been: balancing quality and features is... well, a balance, and it matters.
seamossfet 22 hours ago [-]
The public doesn't care about the code itself, they absolutely care about the quality and experience of using the software.
But you can have an extremely well designed product that functions flawlessly from the perspective of the user while under the hood it's all spaghetti code.
My point was that consuming software as a user of the product can be quite different from the experience of writing that software.
Facebook is a great example of this, there's some gnarly old spaghetti code under the hood just from the years of legacy code but those are largely invisible to the user and their experience of the product.
I'd just be careful to separate code elegance from product experience, since they are different. Related? Yeah, sure. But they're not the same thing.
blackbear_ 22 hours ago [-]
There are other players in the game: the business and the market.
Good code makes it easier for the business to move fast and stay ahead of the competition while reducing expenses for doing so.
WarmWash 21 hours ago [-]
That's true, but Excel '98 would still cover probably 80% of users' use cases.
A lot, and I mean a lot, of software work is trying to justify its existence by constantly playing and toying with a product that worked for everyone in version 1.0, whether it be to justify a job or justify charging customers $$ per month to "keep current".
packetlost 22 hours ago [-]
That's fair!
> Facebook is a great example of this, there's some gnarly old spaghetti code under the hood just from the years of legacy code but those are largely invisible to the user and their experience of the product.
I'm sure that's the case in basically everything, it sorta doesn't matter (until it does) if it's cordoned off into a corner that doesn't change and nominally works from the outside perspective.
But those cases are usually isolated, if they aren't it usually quickly becomes noticeable to the user in one way or another, and I think that's where these new tools give the illusion of faster velocity.
If it's truly all spaghetti underneath, the ability to make changes nosedives.
almostdeadguy 22 hours ago [-]
I have yet to meet anyone whose problem with AI is that the code is not aesthetically pleasing, but that would actually be an indicator to me that people are using these things responsibly.
My own two cents: there's an inherent tension with assistants and agents as productivity tools. The more you "let them rip", the higher the potential productivity benefits. And the less you will understand the outputs, or even if they built the "correct thing", which in many cases is something you can only crystalize an understanding about by doing the thing.
So I'm happy for all the people who don't care about code quality in terms of its aesthetic properties and are really enjoying the AI era; that's great. But if your workload is not shifting from write-heavy to read-heavy, you will inevitably be responsible for a major outage or quality issue. Moreover, anyone like this should ask why anyone should feel the need to employ you for your services in the future, since your job amounts to "telling the LLM what to do and accepting its output uncritically".
seamossfet 22 hours ago [-]
>But if your workload is not shifting from write-heavy to read-heavy, you inevitably will be responsible for a major outage or quality issue.
I think that's actually a good way to look at it. I use AI to help produce code in my day to day, but I'm still taking quite a while to produce features and a lot of it is because of that. I'm spending most of my time reading code, adjusting specs, and general design work even if I'm not writing code myself.
There's no free lunch here, the workflow is just different.
slopinthebag 22 hours ago [-]
Facebook.com is a monstrosity, though, and their mobile apps are slow and often broken as well. And the younger generations are using other networks; Facebook is in trouble.
haolez 18 hours ago [-]
Option 1 is a PM.
UltraSane 13 hours ago [-]
Code quality is directly related to how easy the code is to modify without creating bugs, which customers absolutely DO care about. I've read about an EMR product that was such spaghetti code that every bug fix was likely to create 2 new bugs. I suspect this is a major reason Siri hasn't had any major new features in forever.
m0llusk 15 hours ago [-]
There is a huge space of possible software solutions. A very large amount of software that has real value involves using basic methodologies to build interfaces to databases and enable some known business logic. Writing this kind of software is very different compared to making highly general developer tools that need to have finely honed abstractions. LLMs are especially good at whipping up basic apps, but can actually interfere with the process of coming up with and implementing new concepts that operate differently from existing tools. In this respect I find that working with LLMs is very similar to working with graduate students who have energy, drive, and patience, but are also prone to making bad decisions and doing work that ends up being thrown away.
rolandhvar 17 hours ago [-]
I mean just look at UV.
Did that guy make it because Rust, and because he's passionate about that sort of thing? Probably.
But it's fucking fast. So did he sell out to OpenAI? Of course he did.
And thusly, both camps.
DetroitThrow 18 hours ago [-]
>No one has ever made a purchasing decision based on how good your code is.
I got my company to switch from GitHub to GitLab after repeated outages. I've always moved companies away from using GCP or Azure because of their reliability problems.
This is a really funny comment.
Pxtl 20 hours ago [-]
As developers we have a unique advantage over everybody else dealing with the way AIgen is revolutionizing careers:
Everybody else dealing with AIgen is suffering the AI spitting out the end product. It's as if we asked AI to generate the compiled binary instead of the source.
Artists can't get AIgen to make human-reviewed changes to a .psd file or an .svg, it poops out a fully formed .png. It usurps the entire process instead of collaborating with the artist. Same for musicians.
But since our work is done in text and there's a massive publicly accessible corpus of that text, it can collaborate with us on the design in a way that others don't get.
In software, the "power of plain text" has given us a unique advantage over other kinds of creative work. Which is good, because AIgen tends to be clumsy and needs guidance. Why give up that advantage?
slopinthebag 22 hours ago [-]
This is just cope to avoid feeling any shame for shipping slop to users.
throwaway613746 18 hours ago [-]
[dead]
ModernMech 22 hours ago [-]
> You treat your code as a means to an end to make a product for a user.
It isn’t that, though; the “end” here is making money, not building products for users. Typically, people who are making products for users care about the craft.
If the means-to-end people could type words into a box and get money out the other side, they would prefer to deal with that than products or users.
Thats why ai slop is so prevalent — the people putting it out there don’t care about the quality of their output or how it’s used by people, as long as it juices their favorite metrics - views, likes, subscribes, ad revenue whatever. Products and users are not in scope.
seamossfet 22 hours ago [-]
Yeah, I'm not trying to defend slop.
I don't think all means-to-end people are just in it for money, I'll use the anecdote of myself. My team is working on a CAD for drug discovery and the goal isn't to just siphon money from people, the goal is legitimately to improve computational modeling of drug interactions with targets.
With that in mind, I care about the quality of the code insofar as it lets me achieve that goal. If I vibe coded a bunch of incoherent garbage into the platform, it would help me ship faster but it would undermine my goal of building this tool since it wouldn't produce reliable or useful models.
I do think there's a huge problem with a subset of means-to-end people just cranking out slop, but it's not fair to categorize everyone in that camp this way ya'know?
threethirtytwo 15 hours ago [-]
I fall into camp 2, but I’m different: I cannot delude myself away from reality. What you will find is that people in camp 2 are lying to themselves. They are denying an impending reality where AI slop becomes even better than the code they write themselves.
Also, it’s more than an art or a craft. It’s identity. Many people hold their coding skill as an identity; they honed it over many years and it put them in the ranking they are in today. This kind of destruction of identity by AI is what causes people to deny reality.
This stuff also covers your job, even if you don’t hold coding as an identity it is still responsible for many people’s livelihoods. Like ai is convenient right now but what happens when it gets even more convenient? What happens to your job and your life especially if software was all you did for over a decade?
I’m in camp 2 and I can’t lie to myself about what’s happening. I’ve embraced AI and I now vibe code daily even though I was originally an artistic functional programmer. This ability comes at a high cost. I’m able to do it because I hold zero identity. I don’t identify with anything and I don’t put too much pride into anything I do, or at least if I put pride into something I’m always conscious of severing the entire thing at a moment’s notice.
Lying to oneself is a powerful ability but it becomes a liability when society goes through an intense paradigm shift. This is what is happening now.
waterTanuki 17 hours ago [-]
> No one has ever made a purchasing decision based on how good your code is.
RollerCoaster Tycoon.
> The general public does not care about anything other than the capabilities and limitations of your product. Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.
People care how fast you're able to ship updates, new features, and bugfixes. If you're working with a pile of vibe-coded spaghetti slop it's going to take longer to deliver these.
globular-toast 11 hours ago [-]
Ah, that'll be why Toyota is struggling. They spent too much effort making their cars reliable and not enough time making them look good.
Oh wait, they're the biggest car builder in the world.
dbshapco 18 hours ago [-]
I feel like any argument that begins by asserting a dichotomy is almost certain to be circular and will proceed as if the dichotomy were a fact rather than an unproven hypothesis.
I don't believe there is a dichotomy, or even a spectrum of developers, but a complex landscape. Of course, that is also a bald assertion, but it is a weaker claim, and no less valid than the original assertion.
That said, independent of assertions about developer classification, in my experience there is a clear connection between the quality of the software and the quality of the product, and I've often seen evidence of poor quality software compromising the product and user experience. Poor quality leaks out. Remember BSOD? Maybe not.
I've become hesitant to unleash coding agents simply because the code base ends up looking like the victim of drive-by coding, littered with curious lambda functions, poor encapsulation, etc. The only thing I use coding agents for is exploratory and throwaway code, like one off scripts. I love coding agents for all the ancillary work, I protect the critical path like mamma bear her cubs.
Coding agents make all the second order work easier so I have more bandwidth to focus on the critical parts. Again, software is a landscape, but at least for my work I can't abdicate parts to a coding agent and "works" is an inadequate standard. I need bullet-proof and unfailingly correct.
Token generation definitely produces a certain stream-of-consciousness, Kerouac-as-programmer style. As long as I don't ever have to maintain or modify the code myself, and am not concerned about cost control (especially in cloud environments where I am billed by compute cycles), I am fine with quick and dirty and done. I sigh when I see what should be a six line change in my head balloon to 300 lines of generated code, revert, and write the six lines myself. It would take longer to write the prompt to get the coding agent to fix it than to fix it myself. It would grind away for several minutes and burn up an astonishing number of tokens for simple fixes.
Anything linguistic the coding agents do well. Want to rename a variable in 300 different source files? I mean, it is overkill to be running a 200B parameter model to avoid writing the sed script I might write otherwise, but who am I to turn my nose up at my work being subsidized by investors? I don't think that economic model will go on forever.
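The rename the comment describes is the kind of thing a short script handles fine without a 200B-parameter model. A rough Python sketch of that whole-word rename across a source tree (the identifier names and directory are purely illustrative, not from any real codebase):

```python
import os
import re

def rename_identifier(root, old, new):
    """Rewrite every whole-word occurrence of `old` in files under `root`."""
    pattern = re.compile(r"\b%s\b" % re.escape(old))
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path) as f:
                text = f.read()
            updated = pattern.sub(new, text)
            if updated != text:  # only rewrite files that actually changed
                with open(path, "w") as f:
                    f.write(updated)
```

The `\b` word boundaries matter: they keep a rename of `old_name` from mangling a longer identifier like `old_name_long`.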
Any higher abstraction is being cargo-culted from language. This is where LLMs are weakest, because they don't understand abstraction or encapsulation, only the artifacts as expressed in language.
Outside of exploratory and throwaway code, I use inline prompting to precisely target and scope changes, and then identify the cleanup and refactoring required to bring the code to acceptable quality. Although I do a lot of cleanup by hand as well. Rather than tell the coding agent that a lambda function wrapping a one liner that is used in one place in the code is dumb, I'll just remove the lambda myself. The coding agent can't adopt and generalize lessons from code review comments the way a human software engineer can -- I am forced to burn tokens every single time to get it to dial back its insane love affair with lambda functions. Again, not a big deal while costs remain subsidized.
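For the record, the lambda smell described above is trivial to show (these names are made up for illustration, not taken from the actual code):

```python
rows = [3, 1, 2]

# What the agent tends to emit: a named lambda used exactly once.
double = lambda x: x * 2
doubled = [double(x) for x in rows]

# The cleanup: just inline the one-liner.
doubled = [x * 2 for x in rows]
print(doubled)  # prints [6, 2, 4]
```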
Operations and maintenance overhead in the type of software I've written through my career dominates over programming cost. Telecom, aerospace, e-commerce, etc. Systems are long lived. Outages are expensive. Regulatory compliance is a large factor. I've worked in shops with 70% cost overhead in operations. A $50K a month cloud compute bill can be reduced to $15K. There's usually some low hanging fruit and poor quality software doesn't account for all of this, but it is a significant fraction. Like a poorly written termination condition in a container that essentially was a busy wait burning thousands of dollars a month doing nothing (true story).
I am currently writing a trading system, and can't afford to hallucinate a bunch of bad trades. Like the developer landscape, the software landscape is complex and not uniform. So I will concede there are probably many types of software outside of my own experience that can be implemented largely by coding agents. Low consequence. Marginal operational overhead.
I might assert that coding agents forte is autogenerating technical debt, but then I am just being a wag. Less waggishly I would say use of coding agents is subject to engineering judgement, like any tool. Who is going to read that headline or give it a billion dollar valuation?
BoorishBears 19 hours ago [-]
I think some people are misunderstanding your point.
Yes, some people left to their own devices would take twice as long to ship a product half as buggy only to find out the team that shipped early has taken a massive lead on distribution and now half the product needs to be reworked to catch up.
And some people left to their own devices will also ship a buggy mess way too early to a massive number of people and end up with zero traction or validation out of it, because the bugs weren't letting users properly experience the core experience.
So we've established no one is entirely right, no one is entirely wrong; it's yin/yang, and really both sides should ideally exist in each developer in a dynamic balance that changes based on the situation.
-
But there's also a 3rd camp that's the intersection of these: You want to make products that are so good or so advanced *, that embracing the craft aspect of coding is inherent to actually achieving the goal.
That's a frontend where the actual product is well outside typical CRUD app forms + dashboard and you start getting into advanced WebGL work, or complex non-standard UI state that most LLMs start to choke on.
Or needing to do things quicker than the "default" (not even naive) approach allows for UX reasons. I ran into this using Needleman-Wunsch to identify UI elements on return visits to a site without an LLM request adding latency: to me that's the "crafty" part of engineering serving an actual user need. It's a completely different experience getting near instant feedback vs the default today of making another LLM request.
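For anyone unfamiliar, Needleman-Wunsch is the classic dynamic-programming global alignment algorithm. A minimal scoring sketch (the token sequences below are hypothetical DOM paths for illustration, not the actual product's data or scoring weights):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score of sequences a and b."""
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        dp[i][0] = dp[i - 1][0] + gap  # align a[:i] against nothing
    for j in range(1, len(b) + 1):
        dp[0][j] = dp[0][j - 1] + gap  # align b[:j] against nothing
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,  # substitute/match
                           dp[i - 1][j] + gap,      # gap in b
                           dp[i][j - 1] + gap)      # gap in a
    return dp[len(a)][len(b)]

# e.g. comparing two hypothetical DOM paths token by token:
print(nw_score(["div", "nav", "button"], ["div", "button"]))  # prints 1
```

Because the table fill is O(len(a) * len(b)) of pure arithmetic, it runs in microseconds on short token sequences, which is what makes the no-LLM-latency approach viable.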
And it's this 3rd camp's feedback on LLM development that people in the 1st camp wrongly dismiss as coming from the 2nd, craft-maxxed group. For some use cases, slop is actually terminal.
Intentionally contrived example, but if you're building a Linear competitor and you vibecode a CRDT setup that works well enough, but has some core decisions that mean it'll never be fast enough to feel instant and frontend tricks are hiding that, but now users are moving faster than the data and creating conflicts with their own actions and...
You backed yourself into a wall that you don't discover until it's too late. It's only hypervigilance and strong taste/opinion at every layer of building that kind of product that works.
LLMs struggle with that kind of work right now and what's worrying is, the biggest flaw (a low floor in terms of output quality) doesn't seem to be improving. Opus 4.6 will still try to dynamically import random statements mid function. GPT 5.3 tried to satisfy a typechecker by writing a BFS across an untyped object instead of just updating the type definitions.
RL seems to be driving the floor lower actually as the failure modes become more and more unpredictable compared to even GPT 3.5 which would not even be "creative enough" to do some of these things. It feels like we need a bigger breakthrough than we've seen in the last 1-2 years to actually get to the point where it can do that "Type 3" work.
* good/advanced to enable product-led growth, not good/advanced for the sake of it
throwaway613746 19 hours ago [-]
[dead]
fatata123 10 hours ago [-]
[dead]
eboy 16 hours ago [-]
[dead]
Animats 23 hours ago [-]
Meanwhile, the complexity of the average piece of software is drastically increasing. ... The stats suggest that devs are shipping more code with coding agents. The consequences may already be visible: analysis of vendor status pages [3] shows outages have steadily increased since 2022, suggesting software is becoming more brittle.
We've already seen a large-scale AWS outage because of this. It could get much worse. In a few years, we could have major infrastructure outages that the AI can't fix, and no human left understands the code.
AI coders, as currently implemented, don't have a design-level representation of what they're doing other than the prompt history and the code itself. That inherently leads to complexity growth. This isn't fundamental to AI. It's just a property of the way AI-driven coding is done now.
Is anybody working on useful design representations as intermediate forms used in AI-driven coding projects?
"The mending apparatus is itself in need of mending" - "The Machine Stops", by E.M. Forster, 1909.
switchbak 17 hours ago [-]
I think we're heading for a real crisis here. We've got an imperfect system of constraints and bottlenecks, and we've just eliminated one of the main bottlenecks - the speed at which we can add new code. This just puts so much more strain on the rest of the system, I think the industry is going to have a quick lesson on the non-linear costs of software complexity.
I'm glad to see that the author of the article is putting an emphasis on simplicity here, especially given the nature of their business. Those that fully embrace the "code doesn't matter" approach are in for a world of hurt.
Long-term, I expect there will be more tooling and model advancements to help us in this regard - and there will certainly be a big economic incentive for that soon. But in the meantime it feels like a dam has been breached and we're just waiting for the real effects to become manifest.
Cthulhu_ 8 hours ago [-]
I was curious about the claim about those vendor status pages, wondering if there's postmortems that actually single out AI. The source cited as [3] is a Reddit post with a poorly cropped chart, and it doesn't include any data from before 2022: https://www.reddit.com/r/sysadmin/comments/1o15s25/comment/n...
I'm not saying it's wrong, because I haven't actually looked for alternative sources, just that the source isn't great.
> AI coders, as currently implemented, don't have a design-level representation of what they're doing other than the prompt history and the code itself.
That new design-level representation will be code.
It will need to be code, because prompts, while dense, are not nearly deterministic enough.
It will need to be much higher level code, because current code, while deterministic, is not nearly dense enough.
brandensilva 15 hours ago [-]
Even the prompt history is notoriously weak given how little Claude Code and some of the others display to give developers confidence in the process.
There needs to be more design representation indeed.
tdeck 11 hours ago [-]
Claude Code displays plenty in my opinion, if you make it ask you for approval before each code change. You can read the code as it's being built up and understand if it's going in a bad direction before it does that and then piles on more and more slop.
The trouble is people don't want to bother reviewing the changes.
LoganDark 9 hours ago [-]
Claude Code used to stream the thinking process in verbose mode. Now that has been replaced with "transcript mode" which doesn't actually give much more information and also doesn't stream anything. They also recently removed (in certain situations) the counter of how many tokens the model's generated in its response in progress, so the only way to tell if it's stuck is to wait 10 minutes and then retry.
Sure, I can read the diffs as they're generated (and I do). But proper transparency goes further than that, and it's being stripped away.
9dev 18 hours ago [-]
While I also view this development critically, why do you assume AI will be unable to fix the issues eventually?
yoyohello13 16 hours ago [-]
Whether they can or not in the future is kind of irrelevant. The fact is that right now they are not able to, but many are using them as if they are.
mycall 3 hours ago [-]
I tend to disagree. When guiding AI through many rounds of code review, it can self-correct if shown where general issues exist. It does take practice to use the language of the model, i.e. "drift" instead of "issues". Human in the loop is good enough to produce useful and accurate code today.
shimman 2 hours ago [-]
If you can actually do this, please sell your services. You would become a multi-millionaire overnight: if you can provide a workflow that doesn't result in mass hallucinations or incorrect suggestions, you're able to do something no other LLM company can.
The more common use case is that these tools struggle immensely on anything outside the happy path.
advael 11 hours ago [-]
At this point I can't help but conclude that to a certain class of writers, "economic incentives" are a form of benevolent god who will surely, inevitably make things better in directions they care about, and those who believe this also believe the overwhelming and constant evidence to the contrary of their position is just a temporary anomaly
Whatever the hell economics was supposed to do, right now it seems to be causing every industry to produce worse products, lay off more people, and concentrate wealth in an aristocratic subset of the population, and this has been going on for the better part of my entire lifetime. If we're to reverse this trend, we need to stop pretending that economics is a natural force and remember that it is a complex system made of policy decisions that can in fact be the wrong ones
cardanome 1 hours ago [-]
It also assumes free market competition which the AI space is already only in a very limited degree.
The whole business strategy for those companies is to be the one big monopolists that is left standing. That is why they are bleeding money offering token prices well beyond market rate so that they can grow.
Plus they can always lobby the state to ban foreign competition for security reasons.
nicbou 8 hours ago [-]
Economics is a natural force, but it does not specifically optimize for customer satisfaction.
SwtCyber 4 hours ago [-]
Whatever system you design (market, regulated, hybrid), it still ends up creating incentives that shape behavior.
rzmmm 11 hours ago [-]
And who knows what the economic incentives will be in a couple of years. Maybe the focus shifts towards one-shotting complex million-SLOC codebases which are an order of magnitude sloppier than the current outputs.
reese_john 23 hours ago [-]
Why build each new airplane with the care and precision of a Rolls-Royce? In the early 1970s, Kelly Johnson and I [Ben Rich] had dinner in Los Angeles with the great Soviet aerodynamicist Alexander Tupolev, designer of their backfire Bear bomber. 'You Americans build airplanes like a Rolex watch,' he told us. 'Knock it off the night table and it stops ticking. We build airplanes like a cheap alarm clock. But knock it off the table and still it wakes you up.'...The Soviets, he explained, built brute-force machines that could withstand awful weather and primitive landing fields. Everything was ruthlessly sacrificed to cut costs, including pilot safety.
We don't need to be ruthless to save costs, but why build the luxury model when the Chevy would do just as well? Build it right the first time, but don't build it to last forever. - Ben Rich in Skunk Works
GuB-42 15 hours ago [-]
Soviet engineering wasn't sloppy. It was designed for robustness, loose tolerances, and simplicity. It was well thought out design. Just as much thought went into the cheap alarm clock as went into the Rolex watch, maybe even more so; the engineers just had different requirements.
It takes a lot of work to make cheap, low precision parts work together reliably. The Rolex has it easy, all the parts are precisely built at a great cost and everything fits perfectly. With the cheap alarm clock, you don't know what you will get, so you have to account for every possible defect, because you won't get anything better with your budget and the clock still needs to give you an idea about what time it is.
The parallel in software would be defensive programming, fault tolerance, etc... Ironically, that's common practices in critical software, and it is the most expensive kind of software to develop, the opposite of slop.
movpasd 5 hours ago [-]
There's a narrative that gets passed around in physics circles about how the Soviets were better at finding creative and analytical solutions than Americans, because of the relative scarcity of computing versus intellectual labour resources.
It would make sense to me that a parallel mechanism could apply to Soviet engineering. If material and technologically advanced capital are scarce, but engineers are abundant, you would naturally spend more time doing proper engineering, which means figuring out how to squeeze the most out of what you have available.
est 7 hours ago [-]
> Soviet engineering wasn't sloppy. It was designed for robustness, loose tolerances and simplicity
> Everything was ruthlessly sacrificed to cut costs, including pilot safety.
If we translate this analogy back to AI driven software development, what would be the equivalent of "pilot safety"?...
kusokurae 2 hours ago [-]
Be very careful who sources your pacemaker.
imiric 21 hours ago [-]
That's an interesting story, but not a great analogy for software.
If a technology to build airplanes quickly and cheaply existed and was made available to everyone, even to people with no aeronautical engineering experience, flying would be a much scarier ordeal than it already is.
There are good reasons for the strict safety and maintenance standards of the aviation industry. We've seen what can happen if they're not followed.
The fact that the software industry doesn't have similar guardrails is not something to celebrate. Unleashing technology that allows anyone to create software without understanding or even caring about good development practices and conventions is fundamentally a bad idea.
kalaksi 20 hours ago [-]
And then everyone disagrees what counts as luxury in software.
dang 21 hours ago [-]
The authors updated their title so I've updated it here too. Previous title was "Good code will still win" - but it was leading to too much superficial discussion based entirely on the phrase "good code" in the title. It's amazing how titles do that!
(Confession: "good code will still win" was my suggestion- IIRC they originally had "Is AI slop the future?". You win some you lose some.)
aledevv 8 hours ago [-]
> good code will win
I don't fully agree with this optimistic view. Unfortunately, for now, coding agents produce code that, if not further optimized upon "human" request, often has more complexity than necessary.
It's true that this requires more computational effort for the agents themselves to debug or modify it, but it's also true that the computational cost is negligible compared to the benefit of having features working quickly.
In other words: agents quickly generate hyper-complex and unoptimized code. And the speed of delivery provides more immediate benefits than the costs resulting from bad code.
On the other hand, it's also true that the "careful eye" of an experienced developer can optimize and improve the output in a few simple iterations.
So overall (and unfortunately) "bad code", if it works immediately, can win against (or alongside) good code.
Fraterkes 20 hours ago [-]
Never considered that the mods of this site are literally discussing with the people heading yc companies how to game their hn-titles for better interaction. How naive I am.
dang 19 hours ago [-]
We discuss these things all the time with people who email us whether they head yc companies or not.
dangisahomo 18 hours ago [-]
[dead]
everdrive 3 hours ago [-]
Maybe not necessarily, but it'll be difficult to avoid. We're in a period where people are constantly creating and constantly changing software. Such rapid change really precludes the possibility of excellence. Very few people want to say "let's not add features, it would conflict with our ability to maintain quality." It's not that no one does this, but those who do are in the minority.
All the change and shuffle feels like an inevitable consequence of so much communication and competition between companies, and cultures and such. Gone are the days where a software product can remain stagnant. Someone else will build something that does a bit more, or if nothing else, does something new, and it will take people's attention away.
Everyone is stuck trying to keep up with trends, even if those trends don't make any sense.
simianwords 23 hours ago [-]
People are not emotionally ready to accept that certain layers of abstraction don’t need as much care and effort if they can be automated.
We are at the point where a single class can be dirty but the API of the classes should be clean. There’s no point reviewing the internals of a class anymore. I’m more or less sure that they would work as intended.
Next step is the microservice itself. The API of that microservice should be clean, but the internals may be whatever. We are about 10% of the way there.
yodsanklai 1 hours ago [-]
> the API of the classes should be clean
That's an issue I have with Claude, actually. I find it very good at breaking abstractions to get the job done. This is what I'd call slop (more so than the class internals).
yobbo 21 hours ago [-]
"The only reason people disagree with me is because they are emotionally deficient."
andersonpico 13 hours ago [-]
It goes from "LLMs can do everything as well as or better than a human" to "quality doesn't really matter" really fast.
qsera 9 hours ago [-]
So maybe a charitable interpretation could be that quality doesn't matter because LLMs can deal with whatever complexity comes with the reduction in quality...
danielbln 10 hours ago [-]
It also went real fast from "GPT hallucinated a library, literally useless" to "this agent has created this entire service up to spec, no notes".
qsera 9 hours ago [-]
It seems that we are getting bitten by the law that says things that can be measured trump things that cannot.
How fast it was to create an initial version of a piece of software can be easily measured.
But how efficient it is, how easy it is to make changes to it, how easy it is to debug, how easy it is to extend in the direction the domain requires... all of these cannot be easily measured or quantified, but they are 10 times more important than that initial creation time. For software that has to run and be maintained for decades, delivering value all that time, it does not really matter whether the initial version was created in 5 minutes or 1 month, as long as the 5-minute version does not contribute negatively to all those non-measurable, non-marketable traits of the software.
It is like how camera marketing was mostly about the megapixel count instead of something vastly more important like low-light performance, dynamic range, or fast autofocus, because the lowest common denominator of the market couldn't grasp the relevance and wouldn't act on it. So it was all about megapixels. At least that didn't have many negative consequences, unlike the marketing around AI...
danielbln 9 hours ago [-]
I said nothing about speed, I said to spec. Speed is a welcome side effect.
alt187 19 hours ago [-]
As opposed to me, who is perfectly rational.
dcchambers 23 hours ago [-]
Does performance not matter?
What if your AI uses an O(n) algorithm in a function when an O(log n) implementation exists? The output would still be "correct"
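To make that concrete, here's a hypothetical sketch (not from the comment): both functions give identical answers on a sorted list, so a test suite that only checks correctness can't tell them apart, yet one scans the whole list while the other does a binary search.

```python
import bisect

def contains_linear(sorted_items, target):
    # O(n): walks the list item by item in the worst case
    for item in sorted_items:
        if item == target:
            return True
        if item > target:  # early exit is possible because the input is sorted
            return False
    return False

def contains_binary(sorted_items, target):
    # O(log n): binary search via the standard library
    i = bisect.bisect_left(sorted_items, target)
    return i < len(sorted_items) and sorted_items[i] == target

data = list(range(0, 1_000_000, 2))  # even numbers only
assert contains_linear(data, 999_998) == contains_binary(data, 999_998) == True
assert contains_linear(data, 3) == contains_binary(data, 3) == False
```

Same outputs, wildly different scaling — exactly the kind of difference that "it works" never surfaces.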
keeda 20 hours ago [-]
> Does performance not matter?
No, unfortunately. In a past life, in response to an uptime crisis, I drove a multi-quarter company-wide initiative to optimize performance and efficiency, and we still did not manage to change the company culture regarding performance.
If it does not move any metrics that execs care about, it doesn't matter.
The industry adage has been "engineer time is much more expensive than machine time," which has been used to excuse way too much bloated and non-performant code shipped to production. However, I think AI can actually change things for the better. Firstly, IME it tends to generate algorithmically efficient code by default, and generally only fails to do so if it lacks the necessary context (e.g. not knowing that an input is sorted).
More importantly though, now engineer time is machine time. There is now very little excuse to avoid extensive refactoring to do things "the right way."
NitpickLawyer 22 hours ago [-]
> Does performance not matter?
Performance can be a direct target in a feedback loop and optimised away. That's the easy part. Taking an idea and poof-ing a working implementation is the hard part.
simianwords 22 hours ago [-]
Also, most performance optimizations exist at the microservice-architecture level, or at the DB and IO level.
paxys 21 hours ago [-]
As it stands today the average engineer is much more likely to ship an unoptimized algorithm than an AI.
inetknght 16 hours ago [-]
If it's not tested, it's not Engineered.
Test what you care about. If you care about performance, then test your performance. Otherwise performance doesn't matter.
simianwords 23 hours ago [-]
In most cases, no. The bottleneck is usually IO.
pagecalm 23 hours ago [-]
Agreed on the economics side. Clean code saves you time and money whether a human or AI wrote it. That part doesn't change.
But I don't think the models are going to get there on their own. AI will generate a working mess all day long if you let it. The pressure to write good code has to come from the developer actually reviewing what comes out and pushing back. The incentive is there but it only matters if someone acts on it.
arnitdo 12 hours ago [-]
> AI will generate a working mess all day long if you let it. The pressure to write good code has to come from the developer actually reviewing what comes out and pushing back
You are reinventing the wheel with yet another form of reinforcement learning. I don't use any form of LLM assistance for coding, but if I have to continually tell it what to do, how to do it, what not to do, and what assumptions to make, I would rather stimulate my neurons more by doing the damn thing myself.
The narrative of "Yeah it will do everything, provided you tell it how to do everything!" seems baseless, personally. Even if you emulate the smartest human possible, can you emulate an idiot?
danielbln 10 hours ago [-]
You don't have to continually tell it, you tell it once, persist it as convention and move on with your life.
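In practice that usually means a persistent instructions file the agent reads on every run (e.g. a CLAUDE.md or AGENTS.md in the repo root; the contents below are a hypothetical sketch, not anyone's actual conventions):

```markdown
# Project conventions (agent reads this on every run)

- Prefer small, single-purpose functions; keep files under ~300 lines.
- Never add a dependency without asking first.
- Every new public function gets a docstring and at least one test.
- Match the existing error-handling style; don't introduce a new one.
- Run the linter and the test suite before declaring a task done.
```

Written once, it applies to every subsequent session without re-prompting.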
antdke 21 hours ago [-]
Yup - In the end, it’s still just a tool that adheres to the steering (or lack thereof) of the user.
zie1ony 59 minutes ago [-]
Bytecode generated by compilers in many cases is far from optimal, but we don't care anymore.
socalgal2 22 hours ago [-]
When has this ever been true?
Did the best processor win? No, x86 is trash.
Did the best computer language win? No (not that you can pick a best).
The same is true pretty much everywhere else outside computers, with rare exceptions.
stephc_int13 21 hours ago [-]
The economic angle is not as clear cut as the authors seem to think.
There is an abundance of mediocre and even awful code in products that are not failing because of it.
The worst thing about poorly designed software architecture is that it tends to freeze and accumulate more and more technical debt. This is not always a competitive issue, and with enough money you can maintain pretty much any codebases.
wreath 21 hours ago [-]
Even with enough money, you may not be able to attract/keep talented engineers who are willing to put up with such a work environment (the codebase itself, and probably the culture that led to its state) and who want to ship well built/designed software but are slowed down by the mess.
pixl97 2 hours ago [-]
This completely depends on the current economy.
When you work with F500s, you constantly see code and culture that are absolute balls and that I would never work for directly. And yet the roles are always filled. And when the economy gets bad, they get decent engineers.
I call it the fast food quality theory of economics. When the economy is good, low pay jobs tend to have low quality employees and it shows in their products. When the economy gets bad higher quality employees end up downgrading because of layoffs and the quality of these low tier jobs improves.
woeirua 19 hours ago [-]
The most successful software in a field is typically NOT the best software. The authors of the article live in a world that does not exist. Clean code lost, many years ago.
mrbombastic 18 hours ago [-]
these economic incentives for good code would also apply to code before llms no? And we have had plenty of shit code that stayed shit for a long time. I find this idea that economic incentives will necessarily drive the outcomes you desire to be akin to a religious belief for some people.
recursivecaveat 17 hours ago [-]
Yeah you could say the same thing about teleporting to Paris, or infinitely flavorful bananas. Just because the market would reward something doesn't mean it will come to be. There may be too many tradeoffs, or technical limits, or just that the future is 200 years away so you won't be around for it anyways.
socketcluster 14 hours ago [-]
I've been talking about 'complexity' for years, but business people just didn't get it. Now that software development has become almost entirely about complexity management, I'm hoping they will finally understand what software engineering actually is. I hope they will finally start valuing engineers who can reduce complexity.
I was always into software architecture and dreamed of becoming a software architect, but by the time I finished university, the position was on its way out.
jumploops 21 hours ago [-]
“John Ousterhout [..] argues that good code is:
- Simple and easy to understand
- Easy to modify”
In my career at fast-moving startups (scaling seed to series C), I’ve come to the same conclusion:
> Simple is robust
I’m sure my former teams were sick of me saying it, but I’ve found myself repeating this mantra to the LLMs.
Agentic tools will happily build anything you want, the key is knowing what you want!
jfreds 20 hours ago [-]
My issue with this is that a simple design can set you up for failure if you don’t foresee and account for future requirements.
Every abstraction adds some complexity. So maybe the PoC skips all abstractions. Then we need to add a variant to something. Well, a single if/else is simpler than an abstract base class with two concrete implementations. Adding the 3rd as another if clause is simpler than refactoring all of them to an ABC structure. And so on.
“Simple” is relative. Investing in a little complexity now can save your ass later. Weighing this decision takes skill and experience
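To make the tradeoff concrete, a hypothetical sketch (names and numbers invented): the if/else version is shorter for two variants, while the abstract-base-class version pays upfront complexity so that a third variant is a pure extension rather than an edit.

```python
from abc import ABC, abstractmethod

# Variant 1: the simplest thing that works for two cases.
def shipping_cost_simple(method: str, weight_kg: float) -> float:
    if method == "ground":
        return 5.0 + 0.5 * weight_kg
    else:  # "air"
        return 12.0 + 1.5 * weight_kg

# Variant 2: upfront abstraction. Adding a new method means adding a
# new subclass, with no edits to existing, already-tested code.
class ShippingMethod(ABC):
    @abstractmethod
    def cost(self, weight_kg: float) -> float: ...

class Ground(ShippingMethod):
    def cost(self, weight_kg: float) -> float:
        return 5.0 + 0.5 * weight_kg

class Air(ShippingMethod):
    def cost(self, weight_kg: float) -> float:
        return 12.0 + 1.5 * weight_kg

# Both variants agree while there are only two cases.
assert shipping_cost_simple("ground", 10) == Ground().cost(10) == 10.0
assert shipping_cost_simple("air", 10) == Air().cost(10) == 27.0
```

Which version is "simple" depends on how many variants you end up needing — which is exactly the judgment call the comment describes.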
miningape 16 hours ago [-]
I think what matters more than the abstract class vs if statement dichotomy, is how well something maps the problem domain/data structures and flows.
Sure maybe its fast to write that simple if statement, but if it doesn't capture the deeper problem you'll just keep running head first into edge cases - whereas if you're modelling the problem in a good way it comes as a natural extension/interaction in the code with very little tweaking _and_ it covers all edge cases in a clean way.
blanched 16 hours ago [-]
I’m aware I’m about to be “that guy”, but I really like how Rich Hickey’s “Simple Made Easy” clarifies simplicity here. In that model, what you’re describing is easy, not simple.
mememememememo 20 hours ago [-]
Yes. Which is why "I generated X lines of code" "I used a billion tokens this month" sound stupid to me.
Like I used 100 gallons of petrol this month and 10 kilos of rabbit feed!
julenx 9 hours ago [-]
The same people who pursue economic incentives are the ones I hear talking about lines of code produced per developer as a useful metric. I sense a worrying trend toward "more is better" with respect to output, when the north star IMHO should be to make something only as complex as necessary, and as simple as possible. The best code is no code at all.
sph 19 hours ago [-]
People use stupid metrics like those because more useful ones, like "productivity" or "robustness" are pretty much impossible to objectively measure.
mememememememo 5 hours ago [-]
And because the other easy one, revenue, is not so impressive.
SwtCyber 4 hours ago [-]
I'm not convinced "token efficiency" maps cleanly to "good code". You can have very compact code that's extremely hard to understand or modify
ramsaybolton 17 hours ago [-]
What I would like to add is that coding in the flow state is underestimated. When your brain just clicks with every change and variable it's just different AND more efficient than doing with AI.
satvikpendem 14 hours ago [-]
You can have a flow state prompting AI too, which I feel is not talked about enough.
maxFlow 15 hours ago [-]
Coding in the flow state has always been appreciated, it's just been surpassed by a new technology.
Your brain can still "just click" with agentic coding. But it will have to be at a higher level of abstraction. Perhaps the "click" feels different, and will take some adjusting to.
xnx 24 hours ago [-]
If "good code" == "useful code", then yes.
People forget that good engineering isn't "the strongest bridge", but the cheapest bridge that just barely won't fail under the expected conditions.
gizmo686 22 hours ago [-]
Engineers don't build the cheapest bridge that just barely won't fail. They build the cheapest bridge that satisfies thousands of pages of regulatory requirements maintained and enforced by dozens of different government entities. Those regulations range from safety, to aesthetic, to environmental, to economic, to arcane.
Left to their own devices, engineers would build the cheapest bridge they could sell that hopefully won't collapse. And no care for the impact on any stakeholder other than the one paying them.
CodeMage 22 hours ago [-]
> Left to their own devices, engineers would build the cheapest bridge they could sell that hopefully won't collapse.
I don't know any real (i.e. non-software) engineers, but I would love to ask them whether what you said is true. For years now, I've been convinced that we should've stuck with calling ourselves "software developers", rather than trying to crib the respectability of engineering without understanding what makes that discipline respectable.
Our toxic little industry would benefit a lot from looking at other fields, like medicine, and taking steps to become more responsible for the outcomes of our work.
chuckadams 20 hours ago [-]
Civil engineers are licensed and carry insurance. When software developers have similar requirements, then I'll call them engineers. In some fields like avionics, the certification regime is a good proxy for licensing -- I think we could extend the "engineer" title to those developers too.
Such a world still has room for unlicensed developers too -- I'd certainly be among them.
CodeMage 18 hours ago [-]
> Such a world still has room for unlicensed developers too -- I'd certainly be among them.
Sign me up. When I started programming as a 7 year old kid, it wasn't because I dreamed of spending my days on endless meetings and documents. But hey, 40 years later, I'm working as a senior "engineer" and with that comes a heavy emphasis on project management.
Sure, you're expected to know how to solve interesting technical challenges, but that's more of a nice-to-have. It's nowhere near important as being able to make a project look successful despite the fact that the middle management convinced the senior "leadership" to do that project out of sheer ambition and without bringing on board the people who actually talk to the users, so now you're stuck without clear requirements, without a clear way to measure success, and with accumulating tech debt gumming up the works while your boss works with various "stakeholders" to "pivot" over and over so he doesn't have to go to the senior leadership to explain why we're delaying launch again.
And what I'm describing is one of the best places I've ever worked at across more than 25 years of my professional career. Hell, I'm lucky that senior "engineer" is what they call a "terminal" position here, i.e. I'm allowed to settle in it without having to work towards a promotion. From what I've been told, there are places where you have to get to be a staff engineer or they'll eventually let you go.
I don't know about anyone else, but I find the whole situation fucking insane.
SchemaLoad 17 hours ago [-]
I don't think that's true. Engineers would largely want to build the best bridge costs be damned. But they would end up undercut by anyone who cuts corners resulting in the only companies getting contracts are the ones who cut the most corners. Even if no one wants to build bridges that collapse, it would be impossible without some counter forces of laws and accountability.
siriusastrebe 23 hours ago [-]
What would happen if we made bridges to last as long as possible, to withstand natural disasters and require minimal maintenance?
What if we built things that are meant to last? Would the world be better for it?
GarnetFloride 23 hours ago [-]
Look up Roman concrete.
There are 2000 year old bridges and aqueducts still in use.
We only recently figured out how to reproduce Roman concrete.
We’d have more but a lot were blown up during WWII.
bombela 23 hours ago [-]
There is nothing special about Roman concrete compared to modern concrete. Modern concrete is much better.
The difference is that the Romans didn't have rebar, so they built gravity-stable structures: heavy and costly as fuck.
A modern steel-and-concrete structure is much lighter and much cheaper to produce.
It does mean a modern structure doesn't last as long, but also the Roman stuff we see is what survived the test of time, not what crumbled.
throw-qqqqq 22 hours ago [-]
> There is nothing special about roman concrete compared to moderns concrete. Modern concrete is much better
Roman concrete is special because it is much more self-healing than modern concrete, and thus more durable.
However, that comes at the cost of being much less strong, set much slower and require rare ingredients. Roman concrete also doesn’t play nice with steel reinforcement.
I think you are incorrect. Compared to modern concrete, Roman concrete was more poorly cured at the time of pouring. So when it began to weather and crack, the uncured concrete would mix with water and cure; thus it was somewhat self-healing.
Modern concrete is more uniform in mix, and thus it doesn't leave uncured portions.
darkwater 22 hours ago [-]
We have modern architecture crumbling already less than 100 years after it has been built. I know engineering is about tradeoffs but we should also acknowledge that, as a society, we are so much used to put direct economic cost as the main and sometimes only metric.
bluGill 20 hours ago [-]
You would be very unhappy if you had to live in a house as it was built 100 years ago. Back then electric lights were rare, and even if you had them, the wiring wasn't up to running modern life. My house is only 50 years old, it shows signs of the major remodel 30 years ago, and there are still a lot of things a newer house would do differently that I sometimes miss.
GarnetFloride 14 hours ago [-]
I've lived in a 100-year-old house and in a brand-new house; they both had issues. They also both had advantages.
Oddly, the older house had the better-designed kitchen. Our lives change over time, and our housing has to adjust to that too.
recursive 23 hours ago [-]
Devil's advocate here. Maybe we'd all forget how to build bridges in the next thousand years, after bridging all the bridg-able spans.
DeathArrow 23 hours ago [-]
What if instead of one bridge we build three, so more people can cross the river?
siriusastrebe 23 hours ago [-]
And if your one bridge survived as long as, or longer than three bridges?
pixl97 23 hours ago [-]
Then you still have traffic issues and no one is happy.
fiedzia 23 hours ago [-]
> What if we built things that are meant to last? Would the world be better for it?
You'd have a better bridge, at the expense of other things, like hospitals or roads. If people choose good-enough bridges, that shows there is something else they value more.
siriusastrebe 23 hours ago [-]
Once the good-enough bridge deteriorates and we have to spend more money maintaining or replacing it, don't we end up spending the same? Only now we're left with a crappy bridge.
cm11 22 hours ago [-]
Certainly, "enough" is doing a lot of work and things get blurry, but I think "good enough" is meant to capture some of that. Over building is also a problem. It isn't strictly true that building longer lived things is cheaper over time either, it obviously depends on the specific things getting compared. And if you go 100 years rather than 25 years, you'll have fewer chances to adjust and optimize for changes to the context, new technology, changing goals, or more efficient (including cost saving) methods.
Obviously, there's a way to do both poorly too. We can make expensive things that don't last. I think a large chunk of gripes about things that don't last are really about (1) not getting the upside of the tradeoff, cheaper (in both senses) more flexible solutions, and (2) just getting bad quality period.
Ekaros 21 hours ago [-]
It might very well be that building and maintaining a bridge for 100 years costs three or four times as much as building and maintaining one that lasts 50 years. If demolition costs aren't comparable to the cost of the bridge itself, then in the long run replacing the bridge every 50 years is cheaper.
On the whole it is an entirely reasonable optimization problem: what is the best lifespan of a single bridge over the desired total lifespan?
SchemaLoad 17 hours ago [-]
"Good enough" bridges still last 50+ years. We could design a bridge to last 200 years but we won't even know if the design we have today will even be needed in 200 years. Maybe by then we all use trains in underground tunnels.
pixl97 23 hours ago [-]
Depends how much the infrastructure and needs around it changes.
nisegami 23 hours ago [-]
But we also got roads and hospitals.
blast 23 hours ago [-]
> the cheapest bridge that just barely won't fail
That can't be right? What about safety factors
StevenWaterman 23 hours ago [-]
Safety factors exist because without them, bridges fall down
pklausler 23 hours ago [-]
The free market ensures that bridges stay up, because the bridge-makers don't want to get sued by people who have died in bridge collapses.
tredre3 20 hours ago [-]
That is definitely not the free market at play. It's legislative body at play.
Engineers (real ones, not software) face consequences when their work falls apart prematurely. Doubly so when it kills someone. They lose their job, their license, and they can never work in the field again.
That's why it's rare for buildings to collapse. But software collapsing is just another Monday. At best the software firm will get fined when they kill someone, but the ICs will never be held responsible.
quentindanjou 23 hours ago [-]
This only works when the barrier of entry to sue is low enough to be done and when the law is applied impartially without corruption with sanctions meaningful enough , potentially company-ending, to discourage them.
At the moment you remove one of these factors, free market becomes dangerous for the people living in it.
tdeck 11 hours ago [-]
I'm going to assume this is Poe's Law at work?
irishcoffee 23 hours ago [-]
That isn't how safety factors work... The person you're responding to is correct. I encourage you to look it up!
StevenWaterman 20 hours ago [-]
Safety factors account for uncertainty: uncertainty in the quality of materials, of workmanship, of unaccounted-for sources of error, and in whether the maximum load in the spec will actually be respected.
Without a safety factor, that uncertainty means that, some of the time, some of your bridges will fall down.
nzeid 22 hours ago [-]
I'd describe that as passable engineering.
Good engineering is building the strongest bridge within budget and time.
bitwize 22 hours ago [-]
Um, ackshually, real civil/structural engineers—at least, those in the global north—design bridges, roads, and buildings with huge tolerances (multiple times the expected loads) because unexpected shit happens and you don't want to suffer catastrophic failure when conditions are just outside of your typical use case and have a Tacoma Narrows Bridge type situation on your hands.
kube-system 21 hours ago [-]
We might be arguing semantics, but safety margins aren't considered 'overbuilding' but part of the bare minimum requirements for a bridge to stand. They aren't there "just in case" they are there because it is known for a fact that bridges in the real world will experience degradation and overloading.
If you build a bridge that is rated to carry 100k lbs of weight, and you build it to hold 100k lbs, you didn't build it to barely meet spec -- you under built it -- because overloading is a known condition that does happen to bridges.
chromacity 20 hours ago [-]
> I want to argue that AI models will write good code because of economic incentives.
The economic incentives on the internet by and large favor the production of slop. A significant proportion of the text-based web was content-farmed even before LLMs - and with the advent of LLMs, you now have slop-results for almost every search query imaginable, including some incredibly niche topics. We've seen the same trend with video: even before gen AI, online video consumption devolved toward carefully-engineered, staged short-form bait (TikTok, YT Shorts, etc). In the same vein, the bulk of the world's email traffic is phishing and spam.
None of this removed the incentive to produce high-quality websites, authentic and in-depth videos, and so on. But in practice, it made such content rare and made it harder for high-quality products to thrive. So yeah, I'm pretty sure that good software will survive in the LLM era. But I'm also absolutely certain that most app stores will be overrun by slop, most games on Steam will be slop, etc.
kusokurae 2 hours ago [-]
Greptile literally spams PRs with overwhelming verbiage slop and often actively dangerous recommendations. I am tired of sales/propaganda masquerading as insight from vested interests.
convexly 16 hours ago [-]
People stop caring whether something is good or not when there's so much of everything now. Why spend time evaluating when you can just move on to the next thing? That's not really an AI problem, but AI has definitely made it worse.
vb-8448 24 hours ago [-]
Good code wasn't winning even before the ai slop era!
The pattern was always: ship fast, fix/document later, but when "later" comes "don't touch what is working".
To date nothing changed yet, I bet it won't change even in the future.
deathanatos 22 hours ago [-]
& I have thus far made a large portion of my living off of fixing bad code "later".
… but lately, the rate at which some dev with an LLM can just churn out new bad code has just shot through the roof. I can still be struggling to pick apart the last piece of slop, trying to figure out "okay, if someone with a brain had written this, what would the inputs & outputs be?" and "what is it that production actually needs and relies on, and what causes problems, and how can we get the code from point A to point B without more outages"; but in the meantime, someone has spit out 8 more modules of the same "quality".
So sure, the basic tenets haven't changed, but these days I feel like I'm drowning in outages and bugs.
briantakita 24 hours ago [-]
I was told by an exec...once a company or technology implements something and gets mindshare, the community (including companies) moves on.
Competition is essentially dead for that segment given there is always outward growth.
With that being said, AI enables smaller players to implement their visions with enough completeness to be viable. And with a hands off approach to code, the underlying technology mindshare does not matter as much.
esafak 23 hours ago [-]
If that were true first movers would always win. Hotmail came before Gmail. Yahoo came before Google. Myspace came before Facebook. Et cetera. Of course it is best to avoid competition by creating a new (sub)category but category kings can change.
20 hours ago [-]
ares623 21 hours ago [-]
The irony is that "good" code and good documentation have top priority now in most orgs. For decades the best developers have been screaming about good code and documentation but leadership couldn't give a fuck. But now that their favorite nepobaby is here, now it's the most important thing all of a sudden.
danielbln 10 hours ago [-]
What nepobaby are you talking about?
mettamage 23 hours ago [-]
I disagree, Electron showed the world that good code can be magnetic
... I'll see myself out
vb-8448 21 hours ago [-]
Is it a joke? (I'm not in the Electron/JS world and I don't get it.)
pxc 10 hours ago [-]
It's striking to me that while the article argues LLMs and agentic development tools will trend toward higher and higher quality, the "pro-AI" comments in this discussion and others tend to announce a "quality doesn't matter" camp.
taatparya 6 hours ago [-]
I believe this will inevitably happen, for exactly this reason: economics.
And property testing is going to be an important way to validate.
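As a minimal hand-rolled sketch of what that validation can look like (the function and properties here are invented for illustration; real projects would typically reach for a library like Hypothesis): instead of hand-picked examples, generate random inputs and assert invariants that must hold for every one of them.

```python
import random

def dedupe_preserving_order(items):
    # function under test: remove duplicates, keeping first occurrences
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_properties(trials=200):
    rng = random.Random(0)  # seeded for reproducibility
    for _ in range(trials):
        xs = [rng.randrange(10) for _ in range(rng.randrange(30))]
        ys = dedupe_preserving_order(xs)
        assert len(ys) == len(set(ys))          # no duplicates in the output
        assert set(ys) == set(xs)               # same set of elements as input
        assert dedupe_preserving_order(ys) == ys  # idempotent
    return True

assert check_properties()
```

The appeal for AI-generated code is that the properties encode intent independently of any particular implementation, so they keep holding (or failing loudly) as the code gets regenerated.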
HeytalePazguato 15 hours ago [-]
The camps framing misses something. After 15 years writing code I've found the people who ship the best products understand both sides. You need to care about craft enough to know when AI output is garbage, and you need to care about shipping enough to not gold-plate things that don't matter.
The slop problem isn't AI, it's people who can't tell the difference between good and bad output because they never developed the craft in the first place. AI just makes that gap more visible.
pixl97 2 hours ago [-]
>, it's people who can't tell the difference between good and bad output
So probably the vast majority of people that program....
titzer 20 hours ago [-]
> Markets will not reward slop in coding, in the long-term.
Forgive my cynical take, but we're currently experiencing a market that doesn't appear to be rewarding anything specific in the long-term, as huge sums of money are traded on a minute-to-minute, day-to-day, and week-to-week basis. There's an explosion of uncertainty in today's markets and complete collapse of long-range planning echoing at many levels in society--particularly at the highest levels of governments. So I kind of don't want to hear about markets are going to reward.
But what exactly is "good code" (presumably the opposite of slop)?
I'd say that good code is terse, robust, suits its function, yet admits just the right amount of testability, performance, and customizability for the future. Good code anticipates change well. That means that if it has one job, it does that one job well, and not twenty others. If the job is going to stay the same, the code is going to stay the same. Good systems are made from well-factored pieces of code that have proper jobs and do their proper jobs without taking on jobs they shouldn't.
I for one think that AI code is going to reflect its training. If it's trained on just a random selection of everything out there, it's probably going to be pretty mediocre and full of bugs.
bluGill 20 hours ago [-]
Markets have always rewarded popularity in the short term. In the long term, though, they have always rewarded quality.
fnoef 22 hours ago [-]
I wish it were true, but it sounds like copium. I bet garment makers or artisan woodworkers said the same when big-box cheap retail came. I bet they said "people value quality" and so on, but in the end, outside of a group of people who have principles, everyone else floods their home with stuff from H&M and crap from Temu.
So yeah, good code might win among a small group of principled people, but the majority will not care. And more importantly, management won't care. And as long as management doesn't care, you have two choices: "embrace" slop, or risk staying jobless in a tough market.
Edit: Also, good code = expensive code. In an economy where people struggle to afford a living, nobody is going to pay for good code when they can get "good enough" code for $200 a month with Claude.
POBIX 14 hours ago [-]
Handcrafted furniture costs a lot more money than mass produced furniture.
Software, on the other hand, can be free. Even before LLMs I would argue the best code was found in FOSS projects.
Nobody is going to use sloppy buggy software if a handcrafted well engineered alternative exists, and is free.
In the case of software, the group of people who have principles might be the ones funding FOSS projects, and the software itself would then be enjoyed by all. This is more or less what's already happening today.
pixl97 2 hours ago [-]
>Nobody is going to use sloppy buggy software if a handcrafted well engineered alternative exists, and is free.
Dear sir, I think you may have already got the entire software market incorrect as it already stands.
zozbot234 19 hours ago [-]
Artisanal crafts are alive and well. It turns out that some people actually prefer handmade stuff to the mass-produced kind, and there's plenty enough of them for a viable market, at least for the highest-quality producers. The real losers are those who make stuff of only barely-acceptable quality: they have no edge over what's mass produced, their middling skills lose value and they're forced to exit the sector.
pixl97 2 hours ago [-]
>and there's plenty enough of them for a viable market, at least for the highest-quality producers.
So what you're saying is "someone" can make a living doing it.
What you're not saying is "you" can make a living doing it.
osti 15 hours ago [-]
That is true, but the revenue of artisanal goods is probably only a very low percentage of the overall market, which would imply a lot of software engineers would have to exit the field. Which is what we here don't want to see.
mattmanser 21 hours ago [-]
For a lot of companies their entire income entirely depends on their uptime.
It might be fine if your HR software isn't approving holiday requests, but if your checkout breaks and there's no human who can pick apart the mess, you lose your entire income for a week, and that might be the end of the business.
pizzly 21 hours ago [-]
The current generation of models doesn't write clean code by itself, but future ones will. The problem in my view is extremely similar to agentic/vibe coding. Instead of optimizing for results you can optimize for clean code. The demand is there: clean code will lead to fewer bugs, faster-running code, and fewer tokens used (thus less cost) when understanding the code from a fresh session. It makes sense that the first generation of vibe coding focused on results first and not clean code. Am I missing something?
jaredcwhite 3 hours ago [-]
Good code will win? What kind of delusion is that? The web has been overrun by low effort React & Tailwind monstrosities, and agentic services have only accelerated that trend. I don't have any confidence that market forces will somehow corral development into a place where "good code" wins.
Tiberium 22 hours ago [-]
Electron won even in the pre-LLM era, I sure wonder why.
clawfund 19 hours ago [-]
The token efficiency argument only holds for teams paying per API call. Cursor, Copilot, and most tools developers actually use are flat subscription. On a flat monthly fee, there's no economic pressure toward brevity in generated code.
foltik 19 hours ago [-]
Apparently there’s no economic pressure toward brevity in LLM generated slop comments either.
And no, they’re not flat subscriptions. Use more tokens and your quota is gone faster.
gpm 18 hours ago [-]
Not true with github copilot. Cost is per prompt no matter how many tokens the prompt uses. Which can vary by 2 or 3 orders of magnitude...
DanHulton 19 hours ago [-]
We've been here before. Outsourcing of coding was really big for a while, until the reality of that situation caught up with those who practiced it - if you were saving a bundle on outsourcing your coding work, you were only saving money _now._ Down the line, you'd have to pay extra for someone competent to re-implement the work with an eye to quality.
(Sure, there were good outsourcing shops, but you didn't tend to save too much with them, since they knew they were good and charged appropriately.)
"Slop" ai-generated code is the same tradeoff as cheap outsourcing shops. You move quicker and cheaper now, but there will come a day when code quality will dip low enough that it will be difficult enough to make new changes that a refocus on quality becomes not just worthwhile, but financially required as well.
(And you may argue that you're using ai-generated code, but are maintaining a high code quality, and so for you this day will never come and you might be right! But you're the "good outsourcing shop", and you're not "saving" nearly as much time or money as those just sloppin' it up these days, so you're not really the issue, I'd argue.)
pixl97 2 hours ago [-]
>Outsourcing of coding was really big for a while,
I can promise you outsourcing of coding is still huge.
This said, companies have changed it up a bit: instead of hiring an outsourcing shop, they'll set up their own branch in LCOL countries.
India, Portugal, a few different countries in eastern Europe are all rather large software producing countries for US companies.
t43562 20 hours ago [-]
i.e. no matter what, the answer is always AI. If it isn't good now, it will be so .... AI. Don't forget to take your soma pills if anything isn't perfect.
beloch 21 hours ago [-]
"AI will write good code because it is economically advantageous to do so. Per our definition of good code, good code is easy to understand and modify from the reduced complexity."
---------
This doesn't necessarily follow. Yes, there might be economic pressure for AI to produce "good" code, but that doesn't necessarily mean efforts to make this so will succeed. LLMs might never become proficient at producing "good" code for the same reasons that LLMs perform poorly when trained on their own output. A heuristic prediction of what "good" code for a given solution looks like is likely always going to be less "good" than code produced by skilled and deliberate human design.
Just as there is a place for fast and dirty human code, there will be a place for slop code. Likely the same sort of place. However, we may still need humans to produce "good" code that AI can be trained on as well as for solutions that actually need to be "good". AI might not be able to do that for us anytime soon, no matter what the economic imperatives are.
Ericson2314 21 hours ago [-]
The economic force is that LLMs themselves are worse at maintaining slop than good code.
Everything fundamental that makes good code easier for humans to maintain also makes it easier for LLMs to maintain. Full stop.
woeirua 19 hours ago [-]
None of this is true. Pretty much all JavaScript code is slop because the language is god awful. It’s so bad that we spent the last 20 years trying to code around the severe limitations of the language itself. Despite everyone knowing that JS sucks, no one has been able to displace it. Slop wins. Typically because of first mover advantage.
icedchai 18 hours ago [-]
30 years. I remember when Netscape 2.0 was released.
skillissue33 18 hours ago [-]
Everyone who sucks at JavaScript says this. Same with CSS lol
woeirua 18 hours ago [-]
The language itself is bad. Even the creator admitted it.
NateEag 11 hours ago [-]
Crockford was not wrong that there are beautiful corners in JS.
It's at its best for functional programming.
If you don't know that style, then JS will be very unpleasant to use.
skybrian 21 hours ago [-]
I think for this to work you need some kind of complexity budget. AI's are good at optimizing but you need to give them the right goals.
ezekg 22 hours ago [-]
The background pattern really makes it hard to read, just fyi. I'd make the content have a white bg if you absolutely must use the pattern.
personality1 22 hours ago [-]
I wish I could write beautiful good code, every part of me wants it, but I'm forced to deliver as fast as I can.
jillesvangurp 20 hours ago [-]
Getting AI tools to produce better code is not that hard; if you know how to do things right yourself. Basically, all you need to do is ask it to. Ask it to follow SOLID principles. Ask it to stick to guard rails. Ask it to eliminate code duplication, write tests, harden code bases, etc. It will do it. Most people just don't know what to ask for. Or even that they should be asking for it. Or how to ask for it. A lot of people getting messy results need to look in the mirror.
I'm using AI for coding just like everybody else. More or less exclusively since a few months. It's sometimes frustrating to get things done the right way but mostly I get the job done. I've been coding since the nineties. So, I know how to do things right and what doing it wrong looks like. If I catch my AI coding tools doing it wrong, I tell it to fix it and then adjust skills and guard rails to prevent it going off the rails.
AI tools actually seem to self correct when used in a nice code base. If there are tests, they'll just write more tests without needing to be prompted. If there is documentation, that gets updated along with the code. When you start with a vibe coded mess it can escalate quickly unless you make it clean up the mess. Sometimes the tests it adds are a bit meh and you have to tell it off by "add some tests for the non happy path cases, make sure to cover all possible exceptions, etc.". You can actually ask for a critical code review and then tell it "fix all of that". Sometimes it's as simple as that.
keeda 20 hours ago [-]
I'm optimistic that AI will actually increase the proportion of good code in the future.
1. IME AI tends to produce good code "in the small." That is, within a function or a file, I've encountered very little sloppy code from AI. Design and architecture is (still) where it quickly tends to go off the rails and needs a heavy hand. However, the bulk of the actual code will tend to be higher quality.
2. Code is now very cheap. And more tests actually results in better results from AI. There is now very little excuse to avoid extensive refactoring to do things "the right way." Especially since there will be a strong incentive to have clean code, because as TFA indicates...
3. Complex, messy code will directly increase token costs. Not just in grokking the codebase, but in the tokens wasted on failed attempts rooted in over-complicated code. Finally, tech debt has a concrete $$$ amount. What can get measured can get fixed, and nothing is easier to measure (or convince execs about!) than $$$.
Right now tokens are extremely cheap because they're heavily subsidized, but when token costs inevitably start ramping up, slop will automatically become less economically viable.
miningape 16 hours ago [-]
This is my take too - LLMs aren't an excuse to lower our standards, they're the reason to raise them.
Put simply LLMs perform better on better code.
muskstinks 23 hours ago [-]
... for now.
And just to be clear: AI continues to progress. There are already rumors about the next Anthropic model coming out, and we are now in the phase of the biggest centralized reinforcement loop that has ever existed: everyone using AI for writing and giving it feedback.
We are, thanks to LLMs, now able to codify humans, and while it's not clear how fast this is going, I no longer believe that my skills are unique.
A small hobby application cost me $11 over the weekend and took me 3h to 'build', while I would probably have needed 2-3 days for it.
And we are still limited by resources and normal human progress. Claude teams are still experimental, for example. Things like gastown or orchestrator architectures/structures are not that established and consume quite a lot of tokens.
We have not even had time yet to build optimized models. Claude Code still understands A LOT of languages (human languages and programming languages).
I don't think anyone really cares about code quality. I do, but I'm a software engineer. Everyone around me doesn't. Business doesn't. Even fellow co-workers don't care, or don't understand good code.
Even a stupid thing like the GTA 5 Online startup bug wasn't found for ages: there was some algorithmic complexity in loading a config file that made startup take forever, until someone outside Rockstar found it and Rockstar fixed it.
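That class of bug is "accidentally quadratic" parsing. A rough Python sketch of the same failure mode (this is not Rockstar's actual code, just an analogy): re-copying or re-scanning the remaining buffer on every token turns a linear job into O(n²), which is invisible on small inputs and catastrophic on big ones.

```python
def parse_rescanning(data: str) -> list[str]:
    """Split on commas, but re-copy the remaining tail each iteration.

    Each slice is O(remaining) work per token -- O(n^2) overall,
    analogous to a parser that re-scans the whole buffer every pass.
    """
    items, pos = [], 0
    while pos < len(data):
        rest = data[pos:]            # the hidden full-tail copy/scan
        end = rest.find(",")
        if end == -1:
            items.append(rest)
            break
        items.append(rest[:end])
        pos += end + 1
    return items

def parse_single_pass(data: str) -> list[str]:
    """Same result for comma-separated tokens, in one linear pass."""
    return data.split(",")

# Identical output on normal input; only the cost curve differs.
assert parse_rescanning("a,bb,ccc") == parse_single_pass("a,bb,ccc")
```

Both functions are correct on a test config a few kilobytes long, which is exactly why the slow one can ship and sit unnoticed until real-world data grows.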
We also have plenty of code where it doesn't matter as long as it works. Offline apps, scripts, research scripts, etc.
rbbydotdev 23 hours ago [-]
The wrinkle here is what exactly “win” means
dzonga 18 hours ago [-]
> lines of code per developer grew.
when you're making (crafting) software, if the lines are going up for equivalent functionality, it means you're cooking up bullshit.
the whole premise of software arts (engineering) - is you do MORE with LESS.
engineering is not science, and neither is art. creativity is needed, rules of thumb are to be followed.
sublinear 22 hours ago [-]
> economic forces will drive AI models toward generating good, simpler, code because it will be cheaper overall
Economic forces are completely irrelevant to the code quality of AI.
> I believe that economic incentives will start to take effect and AI models will be forced to generate good code to stay competitive amongst software developers and companies
Wherever AI succeeds, it will be because a dev is spending time on a process that requires a lot of babysitting. That time is about the same as writing it by hand. Language models reduce the need to manually type something because that's what they are designed to do, but it doesn't mean faster or better code.
AI is a rubber duck that can talk back. It's also a natural language search tool. It's training wheels for devs to learn how to plan better and write half-decent code. What we have is an accessibility tool being sold as anything and everything else because investors completely misunderstand how software development works and are still in denial about it.
Code quality starts and ends with business needs being met, not technical capability. There is no way to provide that to AI as "context" or automate it away. AI is the wrong tool when those needs can be met by ideas already familiar to an experienced developer. They can write that stuff in their sleep (or while sitting in the meetings) and quickly move on.
eagerpace 18 hours ago [-]
Slop code is like an early neural net. Path of least resistance. Except we can see it and compare it to how it's replacing something we've traditionally done, instead of a neural net being opaque. As they write more code, the path and concern for how it gets there will approach zero.
0xC0FFEEE 5 hours ago [-]
Though not attributed, I still appreciate the Terry Davis quote about simplicity vs complexity.
1970-01-01 19 hours ago [-]
Here's the crux of it: if you want fast and cheap today, you have to choose AI. There is no cheaper, faster option. The article completely fails to mention this very damning fact about the state of the art. Slop is the future if you need it now and don't want to spend money.
ahussain 22 hours ago [-]
My prediction is that we'll start to see a whole new layer of abstraction to help us write high quality code with LLMs - meaning new programming languages, new toolchains, stricter typechecking, in-built feedback loops etc.
The slop we're seeing today comes primarily from the fact that LLMs are writing code with tools meant for human users.
Ancalagon 21 hours ago [-]
Maybe the models get better on the code side but I thought slop referred to any AI generated text or imagery? It’s hard to see how most of the internet’s written words won’t be slop, especially when there’s no binding compiler contract like in code.
tehjoker 16 hours ago [-]
I don’t know if this prediction is wrong, it might well be right, but the basis for this prediction is “market forces” without (a) an analysis showing the advantage for sets of market participants or (b) fundamental scientific reasoning why the code will improve to that degree. Without those two things it’s just wishful thinking.
shevy-java 18 hours ago [-]
It seems the big corporations committed to it already.
Microslop is the future.
vicchenai 19 hours ago [-]
the distinction between slop and good AI-assisted code really comes down to who's reviewing it. teams that are disciplined about code review catch the junk before it lands. teams that let AI output fly straight to prod are gonna have a bad time eventually. it's less about the AI and more about engineering culture around it
ramsaybolton 17 hours ago [-]
What I would like to add
cineticdaffodil 17 hours ago [-]
Slop is a chance for everyone with the old culture still alive to rapidly take over endeavours that have been captured by utopists who are deprecating that company.
yshamrei 23 hours ago [-]
good code does not earn money =)
RcouF1uZ4gsC 23 hours ago [-]
The existence and ubiquity of bash scripts make me doubt this.
sergiotapia 20 hours ago [-]
The slop debt will always come to collect.
A certain big PaaS I won't name here has had lots of clusterfucks in the last 3 months. The CEO is extremely bought into AI and "code not mattering anymore". He's also constantly talking about the meteoric growth because Claude and other AI providers are using railway as default suggestions.
The toll has come to collect and now a lot of real production users are looking at alternatives.
The reality is the market is rewarding slop and "velocity now". There will come a time where it will reward quality again.
darenr 21 hours ago [-]
Everyone's talking about AI, but let's posit that today's coding models are as good as an SDE on the performance/experience distribution, maybe in the lower quartile. Can we also posit that this will improve, and that over time the coding models will equal and then better the median software engineer? It's not like SDEs aren't also churning out poor quality code: "it worked for me", "what tests?", "O(what?)", etc. We've all worked with them.
The difference is that over the years, while tooling and process have dramatically improved, SDEs have not improved much; junior engineers still make the same mistakes. The assumption (not yet proven, but the whole bubble is based on this) is that models will continue to improve, eventually leaving behind human SDEs (or people in other domains: lawyers, doctors, etc.). If this happens, these arguments I keep seeing on HN about AI slop will all be moot.
Assuming AI continues to improve, the cost and speed of software development will dramatically drop. I saw a comment yesterday that predicted that AI will just plateau and everyone will go back to vim and Makefiles (paraphrasing).
Maybe, I don't know, but all these people saying AI is slop, Ra Ra Humans is just wishful thinking. Let's admit it, we don't know how it will play out. There's people like Dario and Sam who naturally are cheerleading for AI, then there's the HN collective who hate every new release of MacOS and every AI model, just on principle! I understand the fear, anyone who's ever read Flora Thompson's Lark Rise to Candleford will see the parallels, things are changing, AI is the plough, the railway, the transistor...
I'm tired of the debate. My experience is that AI (Gemini for me) is awesome. We all have gaps in our knowledge/skills (but not Gemini). AI helps hardcore backend engineers throw together a Gradio demo in minutes to make their point, helps junior devs review their code before making a PR, helps Product put together presentations. I could go on and on; those that don't see value in AI are doing it wrong.
As Taylor Swift said "It's me, hi, I'm the problem, it's me" - take that to heart and learn to leverage the tools, stop whining please, it's embarrassing to the whole software industry.
maxothex 39 minutes ago [-]
[dead]
seniorThrowaway 24 hours ago [-]
this submission is basically an ad
imta71770 19 hours ago [-]
[dead]
babyfounder 18 hours ago [-]
Coming from the same child founder who makes his employees work 24/7 while he jacks off to hentai all day
howoldareyou 18 hours ago [-]
Coming from the same founder who (famously) required 7-day work weeks (no days off ever) lol
Is that still the future or nah?
aplomb1026 23 hours ago [-]
[dead]
throwaway613746 24 hours ago [-]
[dead]
sloptile 24 hours ago [-]
[flagged]
dang 24 hours ago [-]
Please don't cross into personal attack, and especially please don't harass newcomers.
This website and entire startup ecosystem is a low IQ circle jerk that shouldn’t be taken seriously
7e 24 hours ago [-]
None of this is true. Models will soon scale to several million tokens of context. That, combined with the combined experience of millions of feedback cycles, will make software a solved problem for machines, even as humans remain dumb. Yes, even complex software. Complex software is actually better because it is, generally, faster with more features. It’s smarter. Like a jet fighter, the more complex it is, the more capable it is.
You don't have to pick one camp over the other. In my opinion, if you want to make a good product for a user, you should also treat the code you produce for them as your craft. There is no substitute for high-quality work.
Exactly, thank you for putting it like that.
So far it’s been my observation that it’s only the people who think like the OP who put the situation in those terms. It’s a false dichotomy which has become a talking point. By framing it as “there are two camps, they're just different, neither is better”, it lends legitimacy to their position.
For an exaggerated, non-comparable example meant only to illustrate the power of such framing devices, one could say: “there are people who think guns should be regulated, and there are people who like freedom”. It puts the matter into an either/or situation. It’s a strategy to frame the conversation on one’s terms.
It's totally fine to say "the code works, that area is stable, let's not mess with that code". I make those kinds of tradeoffs on a near daily basis. But let's be real, "perfectly functioning code" is an ill defined, moving target. What looks like perfectly functioning code to a sibling team or a PM, could be a massive liability to someone who actually knows the code.
But then again I'm writing OS and performance critical code. A 1 in 1 million bug is easier to ignore in a throwaway log viewer website.
But I digress; there's plenty of discussion to be had about the ethics and morality of freedom/regulations, but my point is that there is, in fact, a dichotomy between the two, and it isn't just framing. Which, in a sense, you just corroborated.
I would classify all of those as "capabilities and limitations of your product"
I read OPs "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc. etc.), and in that sense I agree no customer who's just using the product actually cares about that.
Another definition of "good code" is probably "code that meets the requirements without unexpected behavior", and in that sense of course end users care about good code. But you could give me two black boxes that act the same externally, one written as a single line with single-character variables and the other written to be readable, and I wouldn't care so long as I wasn't expected to maintain it.
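As a toy illustration of those two black boxes, both functions below are hypothetical and behave identically, so a pure consumer can't tell them apart; only a maintainer pays for the difference:

```python
# Box 1: externally correct, internally hostile.
def f(x):return sum(i*i for i in range(x)if i%2)

# Box 2: the same contract, written to be maintained.
def sum_of_odd_squares_below(limit: int) -> int:
    """Sum the squares of the odd integers in [1, limit)."""
    total = 0
    for i in range(1, limit, 2):
        total += i * i
    return total

# Indistinguishable from the outside:
assert all(f(n) == sum_of_odd_squares_below(n) for n in range(100))
```

The moment a bug report or a feature request arrives, though, someone has to open one of these boxes, and then the difference stops being aesthetic.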
The reality of software products is that they are in nearly in all cases developed/maintained over time, though--and whenever that's the case, the black box metaphor fails. It's an idealization that only works for single moments of time, and yet software development typically extends through the entire period during which a product has users.
> I read OPs "good code" to mean "highly aesthetic code" (well laid out, good abstractions, good comments, etc. etc.)
The above is also why these properties you've mentioned shouldn't be considered aesthetic only: the software's likelihood of having tractable bugs, manageable performance concerns, or to adapt quickly to the demands of its users and the changing ecosystem it's embedded in are all affected by matters of abstraction selection, code organization, and documentation.
> I wouldn't care so long as I wasn't expected to maintain it.
But, if you’re the one putting out that software, of course you will have to maintain it! When your users come back with a bug or a “this flow is too slow,” you will have to wade into the innards (at least until AI can do that without mistakes).
If it’s software that will never be modified, sure it doesn’t matter.
And most users are not consuming your code. They’re consuming some compiled, transpiled, or minified version of it. But they do have expectations and it’s easier to amend the product if the source code is maintainable.
This obviously doesn't represent all of the billions of dollars spent on software like Salesforce, SAP, Realpage, Booking.com, etc. etc. (all notoriously buggy, slow, and complex software). You can't tell me with a straight face that all of the thousands of developers who develop these products/services care deeply about the quality of the product. They get real nice paychecks, benefits and put dinner on the table for their families. That's the market.
> There is no substitute for high quality work.
You're right because there really isn't a consistent definition of what "high quality" software work looks like.
Those first three are "enterprise" or B2B applications, where the person buying the software is almost never one of the people actually using the software. This disconnect means that the person making the buying decision cannot meaningfully judge the quality of any given piece of software they are evaluating beyond a surface level (where slick demos can paper over huge quality issues) since they do not know how it is actually used or what problems the actual users regularly encounter.
You literally just told me the market doesn't care about quality. I don't get what point you're trying to make?
> When the market incentives are aligned between users and purchasers (such as when they are the same person) quality tends to become very important for the market viability of software
Right, but this magical market you're talking about doesn't exist. That's my point.
And if you can deterministically define "high quality software" with linters, analysers etc - then an AI Agent can also create high quality software within those limits.
What about caring and being depressed because quality comes from systems rather than (just) individuals?
Google, Facebook, Apple clearly care deeply about the quality of their code. They have to because bugs, bad performance, outages, vulnerabilities have very direct and immediate costs for them. I know Amazon and Microsoft have their critics but I bet they are also better than we give them credit for.
There are factors besides software quality that affect their success. But running bad software certainly isn’t going to help.
They seem fine with the output of the current hodge-podge of the original algorithm results plus massaging by many downstream ML pipelines that run one after the other without context of how each stop might affect the next.
Not the impression I get these days.
> parade of garbage software that is slow as a dog, and uses gigabytes of memory to perform simple tasks.
and of course, this isn't even the worst. A lot worse can happen, such as data loss and corruption. Things that can directly affect people's lives in real life.
As a developer, these things are constantly on my mind, and I believe this is the case for people who do care about the quality.
As has been said elsewhere many times, AI producing code is not the same as say, a compiler producing machine code. It is not such a well-defined strong abstraction, hence why code quality still is highly relevant.
It is also easily forgotten that code is as much a social construct (e.g. things that have to be done in certain ways due to real life restrictions, that you wouldn't have to do in an ideal world).
Sometimes I feel very powerless though. It feels as if some of us are talking past each other, even if we seemingly are using the same words like "quality". Or in a way, that is what makes this more futile-- that we are using the same words and hence seemingly talking about the same thing, when we are referring to completely different things.
It is difficult to have a conversation about a problem when some of us don't even see it as a problem to begin with-- until it reaches a point when it starts affecting their lives as well, whether it be directly or indirectly. But that takes time.
Time will tell.
Quality of code is just not that important of a concept anymore for the average web developer building some saas tool. React code was always crap anyways. Unless you are building critical systems like software that powers a plane or medical equipment, then code quality just doesn’t really matter so much in the age of AI. That may be a hard pill to swallow for some.
I don't think complexity is the right metric. The listed companies still do write a lot of shit software, but mostly smaller things you don't see much. Anything in the 'fast path' of these companies is getting executed trillions or quadrillions of times and because of that any bugs will become shallow.
Except at scale it really does, because garbage in garbage out. The crappier the code you feed the current models, the worse and more confusing the broken leaky abstractions, the more bugs the AI will generate.
That's where you get it wrong. The world is full of mediocre and low quality work in many, many fields. We all, in fact, depend on mediocre work in many ways.
Many, many people would prefer a solution with mediocre or even bad code than no solution at all while they wait for "high quality work" that never appears.
The magic of LLMs, especially as the technology improves, is that a truly mind-boggling number of solutions to problems will be created with thoroughly mediocre (or worse!) LLM-generated code. And the people benefitting from those solutions won't care! They'll be happy their problems are being solved.
But seriously, the denial is incredible to watch. Our product wouldn't exist without LLMs, and our users are vocally thankful that it does, saving them time and money and helping them reach their offline goals.
[0] https://news.ycombinator.com/item?id=47505428
[1] https://news.ycombinator.com/item?id=47476893
Among countless others
Exactly. A lot of devs optimizing for whether the feature is going to take a day or an hour, but not contemplating that it's going to be out in the wild for 10 years either way. Maybe do it well once.
I think there are a lot of developers working in repos where it's almost guaranteed that their code will _not_ still be there in 10 years, or 5 years, or even 1 year.
And in almost all of those cases, they'd be wrong.
Interestingly, I still have some code in the codebase, which I guess makes sense because I submitted about 680K LOC (note: not all hand-authored, there was a lot of output from automated tools in that) and 2^15 is 32768, so I'd expect to have about 20 lines left, which is actually surprisingly close to accurate (I didn't precisely count, but a quick glance at what I recognized suggested about 200 non-deprecated lines remain in prod). It is not at all the code that I thought would still be there 15 years later, or that I was most proud of. The most durable change appears to be renaming some attributes in a custom templating language that is now deeply embedded in the Search stack, as well as some C++ code that handles how various search options are selected and persisted between queries.
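The survival arithmetic in the comment above can be sketched directly. This is a hypothetical back-of-the-envelope model, assuming (as the 2^15 figure implies) one halving of surviving lines per year over the 15 years since the code was submitted:

```python
# Code "half-life" estimate: if roughly half the surviving lines are
# replaced each year, a contribution decays by a factor of 2^15 over
# 15 years. Figures are from the comment above, not measured data.
submitted_loc = 680_000   # total lines submitted (incl. tool output)
halvings = 15             # one halving per year, 15 years

expected_remaining = submitted_loc / 2**halvings
print(f"{expected_remaining:.1f}")  # ~20.8 lines expected to survive
```

Which matches the "about 20 lines" expectation; the ~200 lines actually found would imply the surviving code decayed a bit slower than one halving per year.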
I think this both proves and disproves the original point. Most of your code is temporary. You have no idea which parts of your code are temporary. It's probably not the parts that you wish were temporary, which will almost certainly be made permanent.
When it comes to professional development, I've almost never worked on a codebase less than 10 years old, and it was always [either silently or overtly] understood that the software we are writing is a project that's going to effectively live forever. Or at least until the company is no longer recognizable from what it is today. It just seems wild and unbelievable to me, to go to work at a company and know that your code is going to be compiled, sent off to customers, and then nobody is ever going to touch it again. Where the product is so throwaway that you're going to work on it for about a year and then start another greenfield codebase. Yet there are companies that operate that way!
I've seen mismatch in each direction..
When you get a fatter check because your code breaks, the incentives are not in favor of good code.
In other words, I would, when possible, absolutely make a purchasing decision based on how good the code is (or based on how good I estimate the code to be), among other things.
[0] The concept of design is often misunderstood. First, obviously, when it’s classified as “how the thing looks”; then, perhaps less obviously, when it’s classified as “how the thing works”. A classification I am arriving at is, roughly, “how the thing works over time”.
Second, shitty electron apps are pervasive.
In the end, software is a means to an end. And if you do not get to that end because the software is crap, it will be replaced, hopefully by someone else.
Have you taken a look at the world in the past… I dunno, at least several decades. That ceased to be true somewhere around the time I was in high school, maybe before.
High latencies, outages, memory leaks, security vulnerabilities, will be seen in your AWS bill or whatever hardware or service you deploy your software to. If your code isn’t clear enough to understand what it’s really doing, you have no chance at preventing or addressing the above.
> absolutely false.
Actually, you are both correct.
Nobody makes a purchasing decision based on code quality.
But they may later regret a purchasing decision based on code quality.
I know engineers who aren't that lucky and struggle in "enterprise" software development, where slop was a feature for decades - people making decisions won't use the software (their low paid employees will) and software monstrosities need a hell a lot of support which sometimes brings more revenue than the original purchase.
From working on many many old and important code bases, the code quality is absolutely trash.
You present no proof, only a touchy-feely "it must be so," plus pseudo-proof like claiming software longevity is on the line.
Please first define software longevity and quality in detailed terms: what are they exactly, and how do you assess them regarding quality and quantity?
Doom is, judged by myself and by its versatility, a masterpiece. However, we all know (and the Black Book is quite open about it) that the code itself arguably doesn't hit modern standards as of today, and there is an infamous WTF hardcoded value used as a speed hack. So what? It inspired a whole generation. Second Reality? A mindbending demo that accelerated quite a few Finnish developer careers, sadly forgotten after it was considered for decades to be for the PC demo scene what Doom was for the underground gamer scene. A nice match anyway.
Is Windows a masterpiece or not? Judging by its UX/UI definitely not, by its versatility and transposable potential I would rate it a masterpiece.
Thousands of developers work on codebases where chipsets and compiler settings change quite significantly - so there are ups and downs even in domains where a simple deadline or requirement change suddenly turns technological prowess into useless trash.
And the most heretical question ever: what if your so-called number-one-quality software product might fool you and could be done way better? You didn't even consider that option, which makes your point shaky to say the least.
Look through the list of top apps in mobile app stores, most used desktop apps, websites, SaaS, and all other popular/profitable software in general and tell me where you see users rewarding quality over features and speed of execution.
If your goal is to break into the market with software that is dogshit from day 1, you're just going to be ones of millions of people failing their get-rich-quick scheme.
If there are people who, on principle, demand the superior product then those people simply aren't numerous enough to matter in the long run. I might be one of those people myself, I think.
I mean Jira is probably one of the most purchased software packages in the world that is specifically used by developers that care about their craft - you don't want to look at the code, trust me.
For now. We only call it slop when we notice it. Majority of AI text, music, images, videos and code is indistinguishable and you use it every day.
This whole "treat your code as craft" reminds me of organic farming, zero waste, etc movements. Cute movements that only minority of people care about.
I really don't think this is true. If it was, we'd be able to point to countless examples of things assumed not to be AI that actually were, but there's a dearth of such examples.
The assumption that people CARE about your product is the most Silicon Valley, Hacker News, forgot-what-the-world-outside-of-tech-looks-like thing ever.
People CARE about their software as much as they CARE about their bank, or a new finance product... People being excited over software is more of a historic footnote than anything real people think in 2026.
The vast majority of Software is one of two things:
A) a tool
B) a source of entertainment
As a tool it either needs to provide value or it's something that is shoved on you by work.
The user experience of your average mobile game today is fucking awful. People put up with a massive amount of garbage for a trickle of fun. So much of the web looks like a mid-90's Hong Kong back alleyway --- blinking ads, videos screaming at you, and someone trying to steal your wallet. And the majority of things people are forced to use for work... well... Show me someone who is excited about their ERP or CMS or network drive... Show me someone who thinks that anything to do with Salesforce is something to be excited over.
> The general public does not care about anything other than the capabilities and limitations of your product.
A segment of our industry is screaming about the security of OpenClaw. People don't care (because we have made a mockery of security as well) - they are using it as a tool that will deliver them a solution. It strips away all the arcana that made people think we were wizards, and writes the damn spells for them. It's a dumpster fire, and people are thrilled about it and what it's delivering them. And that's software not made by you or I.
They will start caring when a security bug on one of their phone's apps (or on the phone's software itself) empties their savings accounts. At that point the law officials/the Government might also start to care.
That's the minimalism that's been lost.
That's why I find the group 2 arguments disingenuous. Emotional appeal to conservatism, which conveniently also props up their career.
Why all those parsers and package systems when what's really needed is dials min-max geometric functions from grand theft auto geometry to tax returns?
Optimization can be (and will be) engineered into the machine through power regulation.
There's way too many appeals to nostalgia emanating from the high tech crowd. Laundering economic anxiety through appeals to conservatism.
Give me an etch a sketch to shape the geometry of. Not another syntax art parser.
There are some types of software (e.g. websites especially) where a bit of jank is generally acceptable. Sessions are relatively short, and your users can reload the webpage if things stop working. The technical rigor of these codebases tends to be poor, but it's generally fine.
Then there's software which is very sensitive to issues (e.g. a multi-player game server, a driver, or anything that's highly concurrent). The technical rigor here needs to be very high, because a single mistake can be devastating. This type of software attracts people who want to take pride in their code, because the quality really does matter.
I think these people are feeling threatened by LLMs. Not so much because an LLM is going to outperform them, but because an LLM will (currently) make poor technical design decisions that will eventually add up to the ruin of high-rigor software.
If this level of quality/rigor does matter for something like a game, do you think the market will enforce this? If low rigor leads to a poor product, won't it sell less than a good product in this market? Shouldn't the market just naturally weed out the AI slop over time, assuming it's true that "quality really does matter"?
Or were you thinking about "matter" in some other sense than business/product success?
Look at Windows. It's objectively not been a good product for a long time. Its usage is almost entirely down to its moat.
This obscures things in favour of the “quality/performance doesn’t matter argument”.
I am, for example, forced to use a variety of microslop and zoom products. They are unequivocally garbage. Given the option, I would not use them. However, my employer has saddled us with them for reasons, and we must now deal with it.
As someone who also falls into camp one, and absolutely loves that we have thinking computers now, I can also recognize that we're angling towards a world of hurt over the next few years while a bunch of people in power have to learn hard lessons we'll all suffer for.
People who care about code quality are not artists who want to paint on the company's dime. They are people who care about shipping a product deeply enough to make sure that doing so is a pleasant experience both for themselves and their colleagues, and also have the maturity to do a little bit more thinking today, so that next week they can make better decisions without thinking, so that they don't get called at 4 AM the night after launch for some emergency debugging of an issue that that really should have been impossible if it was properly designed.
> No one has ever made a purchasing decision based on how good your code is.
Usually they don't get to see the internals of the product, but they can make inferences based on its externals. You've heard plenty of products called a "vibe-coded piece of crap" this year, even if they're not open source.
But also, this is just not true. Code quality is a factor in lots of purchasing decisions.
When buying open source products, having your own team check out the repo is incredibly common. If there are glaring signs in the first 5 minutes that it was hacked together, your chances of getting the sale have gone way down. In the largest deals, inspecting the source code can even be part of due diligence.
It was for an investment decision rather than for a purchase, but I've been personally hired to do some "emergency API design" so a company can show that it both has the thing being designed, and that their design is good.
Speak for yourself. This is exactly the GP's point. Some people care more about the craft of code than the output. I personally find writing good code to be what motivates me. Obviously it's a spectrum; shipping is good too. But it's not why I get up in the morning.
Explain how?
Code evolvability is one of the extrinsic quality markers ( https://www.pathsensitive.com/2023/07/the-11-aspects-of-good... ).
If it's easy to add new features without creating bugs, the code is evolvable. Else, it's not. Does not seem very subjective.
There are products that are made better when the code itself is better. I would argue that the vast majority of products are expected to be reliable, so it would make sense that reliable code makes for better product. That's not being a code craftsman, it's being a good product designer and depending on your industry, sometimes even being a good businessman. Or, again, depending on your industry, not being callous about destroying people's lives in the various ways that bad code can.
And at the same time I hope that you will some day be forced to maintain a project written by someone else with that mindset. Cruel, yes. But unfortunately schadenfreude is a real thing - I must be honest too.
I have gotten too old for "ship now, ask questions later" projects.
If it's harder to work with, it's harder to work with, it's not the end of the world. At least it exists, which it probably wouldn't have if developed with "camp 2" tendencies.
I think camp 2 would rather see one beautiful thing than ten useful things.
I optimise for "make it work", that's what the deal says.
If there's extra time, I might go to step two which is "make it pretty". Meaning that I go through the code and see that it's all good and proper if we need to add features later on.
Unashamedly, I would, but this is a false dilemma. We can have ten beautiful and useful things.
The thing that drives the camp 2 folk crazy is that often it would have taken no extra effort (or perhaps even less effort overall) to make a good version of the thing but the people who made thing simply couldn't be bothered.
The attitude you're describing here has led directly to our world being full of profoundly disappointing objects that proliferate because they meet a minimum bar of usefulness.
People don't like the minimum bar. They'll take it if it's the only thing on offer, but they like better things.
Ah yes, if you aren't shitting code out the door as fast as possible, you're probably not shipping anything at all.
They didn't say beautiful/crafted things were not necessary.
They were critiquing the viewpoint that all code needs to be crafted that way.
Even if we (for humorous purposes) took their 1 in 10 ratio as a deadly serious cap on crafting, 10% of projects being "exceptionally well crafted code" would be a wonderful world. I would take 1% high craft to 99% useful! (Not disjointly of course.)
> I think camp 2 would rather see one beautiful thing than ten useful things.
Both beautiful and useful are subjective (imo). Steve Jobs adding calligraphy-inspired typography to computer fonts could've been considered a thing of beauty derived from his personal relation to calligraphy, but it also turned out to be a really useful thing.
It's my personal opinion that some of the most valuable innovations are both useful and beautiful (elegant).
Of course, there are rough hacks sometimes, but those are beautiful in their own way as well. Once again, both beauty and usefulness are subjective.
(If you measure Usefulness with the profit earned within a purely capitalistic lens, what happens is that you might do layoffs and you might degrade customer service to get to that measure, which ultimately reduces the usefulness. profit is a very lousy measure of usefulness in my opinion. We all need profit though but doing solely everything for profit also feels a bit greedy to me.)
Yeah it just takes longer and makes you miserable in the process. No biggie!
I've worked on a project that went over the complexity cliff before LLM coding even existed. It can get pretty hairy when you already have well-established customers with long-term use-cases that absolutely cannot be broken, but their use-cases are supported by a Gordian Knot of tech debt that practically cannot be improved without breaking something. It's not about a single bug that an LLM (or human) might introduce. It's about a complete breakdown in velocity and/or reliability, but the product is very mature and still makes money; so abandoning it and starting over is not considered realistic. Eager uptake of tech debt helped fuel the product's rise to popularity, but ultimately turned it into a dead end. It's a tough balancing act. I think a lot of LLM-generated platforms will fall eventually into this trap, but it will take many years.
LLMs are often poor at writing tests that provide useful information to human readers and poor at writing tests that can survive project evolution. To be fair, humans are also poor at these tasks if done in hindsight, after all the information you normally want to capture in tests has been forgotten. That boat has been missed for the legacy code no matter how you slice it. But LLMs are quite good at writing tests that lock in existing functionality in the rawest way. It seems like LLM-generation is actually the best hope of saving such a project?
To me, the entire point of crafting good code is building a product with care in the detail. They're inseparable.
I don't think I've ever in my life met someone who cared a lot about code and technology who didn't also care immensely about detail, and design, and craft in what they were building. The two are different expressions of the same quality in a person, from what I've seen.
Would those companies be better off just using pen and paper? Because "craft code" programmers don't have time for this, and not all companies can pay for bespoke software. Well, maybe now they can, with AI "slop".
Likewise, should people who don't have the skills or means to cook delicious and nutritious meals just starve without food? Or is it okay that they eat something which isn't perfect?
Vibe coding will fill in more "new feature" checkboxes, faster, but the level of quality averages out, to something often mediocre, or worse (like all my OSS projects in which I experiment; because such projects are the training data). It skips liberally on maintainability, accessibility, security and privacy considerations.
Code is a liability, I want to have less of it at a higher abstraction level (for which natural language isn't a good fit due to inherent ambiguity). For products, simplicity and user utility is how I approach the problem when given wiggle room.
We are on HN, so there's bound to be many startup people that only need to bang out features to lure in users and then pass on that pile further onto someone else, when they cash out.
What I have seen however, are mid-managers+ that haven't coded in a decade or so, and now with LLMs they feel that they deliver equal quality results, whereas they have been so long out of the game and haven't picked up the modern skills on how to maintain and build applications.
Yes, there are risks (Lethal Trifecta and all that), but AI-assisted development by non-programmers isn't that much worse than letting the same people do complex macro/function/VBA setups in shared Excel sheets.
I'm running out of fingers on my hands counting the number of 100% vibe-coded applications we've built internally that save double-digit percentages of time from people's day-to-day work. All created by the people who use them to fix a very specific workflow they've had to do by hand over hours. Now it's a click of a button on a bespoke application they made.
Craft often inspires a quasi-religious adherence to fight the ever-present temptation to just cut this one corner here real quick, because is anything really going to go wrong? The problems that come from ignoring craft are often very far-removed from the decisions that cause them, and because of this craft instills a sense of always doing the right thing all the time.
This can definitely go too far, but I think it's a complete misunderstanding to think that craft exists for reasons other than ensuring you produce high-quality products for users. Adherents to craft will often end up caring about the code as end-goal, but that's because this ends up producing better products, in aggregate.
For personal projects, I've been in both camps:
For scripts and one-offs, always #1. Same for prototypes where I'm usually focused on understanding the domain and the shape of the product. I happily trade code quality for time when it's simple, throwaway, or not important.
But for developing a product to release, you want to be able to jump back in even if it's years later.
That said, I'm struggling with this with my newest product. Wavering between the two camps. Enforcing quality takes time that can be spent on more features...
The developers don't care that either. If developers cared the whole npm ecosystem wouldn't exist.
Code quality is absolutely important. It’s just not a quality that’s easily visible to a layman. I can definitely feel the difference as a user when a program has been crafted with care.
As stated by others, this is very false. Most if not all software I use is selected by its disk/memory footprint and performance. Having a small disk/memory footprint and having good performance at the same time is a good indicator of a good code quality.
Moreover, after using computers for more than three decades, you get a feeling about the performance of a particular software suite. So an inefficient piece of code makes itself known in a loud way if you look the right way.
One of my favorite applications, Obsidian, generally performs well, but when you hit it just the right way (e.g. add a couple of PDFs and enable previewing), you can feel how sluggish it becomes.
Having a suite of well-written applications with a high performance/footprint ratio also allows me to do more with fewer resources and in less time. So, good code quality matters. It also almost guarantees the software suite will be maintained for a longer time.
Incidentally, I'm also in camp #2, and write my code with the same attention to detail. I have also written code which squeezed all performance from systems, approaching theoretical IPC limits of the processor the code is running on.
ZERO times has anyone even mentioned disk/memory footprint. Performance maaybe, but no hard limits were defined in any contracts. And even those were "these things have to be processed within 24 hours because the law says so", not microsecond precision.
Even Obsidian is 440MB. It's a markdown editor with a built-in javascript scripting system. There's no reason for it to be almost half a gigabyte. Zero people have checked the directory size and went "nah, too big, won't use it".
Nope, at least one. I have refused to install software just because it was too big. I resisted using Obsidian for exactly these reasons until I failed to find a credible alternative.
Well, there's ZimWiki, but it wasn't working on macOS well enough to use that daily.
If that were true, electron apps would not exist and everything would be native software. But alas, most modern products, even before vibe-coding, were horrible performance-wise.
Of course it depends on the context, but consumer facing products have been awful in terms of performance for a while now.
It mystifies me when people don't intuit this.
For any suitably sized project, there are parts where elegance and friction removal are far more important than others. By an order or two of magnitude.
I have shipped beautifully-honed, highly craft code. Right alongside jank that was debugged in the "Well it seems to work now" and "Don't touch anything behind this in-project PI" category.
There are very good reasons and situations for both approaches, even in one project.
1. you care about shipping working, tested code that solves a specific business/user problem
2. you care about closing tickets that were assigned to you
The question where experience comes in is when quality is and isn't worth the time. I can create all sorts of cool software I couldn't before, because now I can quickly pump out "good enough" Android apps or React front ends! (Not trying to denigrate front-end devs, it's just a skill I don't have)
This is very much a "it's not the fall that kills you, it's the sudden stop at the end" sort of thing. (Same with the other variant I've heard, which is something like "no company has gone out of business because of tech debt".)
Code is as much a tool for developing and expressing conceptual models as it is for making computers do things. So not only does code quality have proximate impacts on engineering productivity and reliability, but, done well, it also improves the holistic design of the system you're building. You get better tools, faster, by putting some thought and care into your codebase and, especially, your core abstractions. Teams with good code move faster even in the short term and produce better tools and products.
Of course, it's not just a matter of code; you also need a culture that gives engineers the agency to make real, long-term decisions about what you're building (not just how) which, unfortunately, is rare to find in the modern tech industry :/ The dominant "high-output management" paradigm where code is seen as a virtually fungible "output" to be "delivered" loses the higher-order advantages of good code and good conceptual design, and leaves us with something much closer to the trade-off you describe. But there are other ways of approaching technical work that don't make this trade-off at all!
Modern harnesses are systems built with LLMs as one of many building blocks (incl. regex, test suites, linters). If it can be measured and verified, there's a good chance LLMs will optimize it
This is not a new concept. Humans stopped writing "artful" assembly many years ago, because Lattner and others made it much more efficient to rely on LLVM than hand-optimizing assembly
It's also been demonstrated in other domains within Google (4x4 matmul, silicon photonics, protein folding)
Interface heavy apps are not purely about objective function, they are about feel, comfort, usability - those apps will benefit heavily from humans. But subcomponents of these apps (eg. an algorithm to route packets efficiently) can often be better solved (somewhat objectively) by LLM-based solvers or other forms of RL.
However, writing assembly for the sake of art sounds rather interesting in 2026. Many of my favorite musicians and DJs are driving a resurgence in vinyl to help balance the computed future - and I think that's a great thing.
If it really were fuzzing and finding different candidate spaces, then I'd expect it to be good at things like dynamic programming, where I've only seen it fail. Usually I see it get stuck in a bad solution and just thrash around in that minimum. These are problems where we can construct a verifiable test space, and it will eventually wind up with a solution, but one that is thousands of lines long and uses no structure of the problem space.
Code is usually a liability. A means to an end. But is your code going to run for a minute, a month, a year, or longer? How often will it change? How likely are you going to have to add unforeseen features? Etc. Etc. Etc.
You ever notice how everyone who drives slower than you is a moron and everyone who drives faster than you is a maniac? Your two camps have a similar bias.
AI can help you make well-engineered code, but you have to ask for it because it's not what it will do by default. Prompt it with "Figure out how this crappy piece of code really works and document it in depth. Propose viable refactorings that could improve code quality" and it will do a much better job than the usual vibe-coded result.
“When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through.”
He specifically calls out the use of the material in a place it wouldn't be seen as something that the carpenter making beautiful cabinets wouldn't do, but that's actually something they absolutely do. He's trying to argue some other point by way of this analogy, but his analogy is wrong because he doesn't know anything about it. It would be like if I were trying to argue my point by way of a car analogy, but the example I chose was to say that car enthusiasts insist on genuine OEM parts; I'm only exposing myself as somebody who doesn't understand car people.
If they do, then they aren't making truly great things. It's as simple as that really.
If you cut corners, even if you think nobody will notice, some people will, and your product will never be truly great. Steve understood that.
If you aren't even sure if your idea is even gonna work, whether you have PMF, or the company will be around next year.. then yeah.. speed over quality all day long.
On the other hand, I've never done the startup thing myself, and tend to work on software project with 10-20 year lifecycles. When code velocity maximalism leads to outages, excess compute cost and reputational issues.. good code matters again.
Re: "No one has ever made a purchasing decision based on how good your code is." Sonos very much could go out of business for agreeing with this line. I can tell you lots of people stopped buying their products because of how bad their code quality became with the big app change debacle. Lost over a decade of built up good will.
Apple is going through this lately with the last couple major OS releases across platforms and whatever is going on with their AI. This despite having incredible hardware.
>No one has ever made a purchasing decision based on how good your code is.
Disagree. Users tend to be very sensitive to quality, in software more than anything else. (Of course there isn't always perfect transmission from "user" to "purchaser" but most likely for an upstart product, your user will be your customer. You're not Oracle or Microsoft yet.)
In fact, quality matters now especially more than before because the barrier to entry has reduced. If you have a great idea, but executed it poorly, it attracts second movers like a magnet. Quality is intimidating. Quality is moat.
Good software engineers (your camp 2) can anticipate the ways in which bad code results in a poor quality product, or one that is difficult to debug and evolve. That is the crux of good code. Good code is not craftsmanship in the same sense as making a beautiful painting or making colorful notes.
Other times, I have something specific I want to accomplish, but I dread the amount of time it will take to make it happen.
Now, it is never that I don't know HOW to make it happen, it is that I know how, and I know how many steps it is and how many components there are to build to even get the simplest version running and I just dread it. I want the thing, but I don't want to spend the time to make the thing.
I have had so much fun recently making so many things that I have never gotten around to over the years, because I just couldn't justify the time.
I also have the time to tell the AI to add all the nice to haves, and handle all the edge cases that weren't worth the time before, etc.
I am having a blast. I still stop to write the fun bits when I want to, though. It is great because I only have to code the bits I want, that are fun.
The future of software looks a lot more like factory production lines with a small group of architect-tier engineers working on a design with product management and then feeding it into the factory for prototyping and production.
If you're not an experienced late senior or principal engineer at your career stage by now there is basically no future for you in this industry. Lower end roles will continue to be reduced. People who can build and maintain the factory and understand its outputs are going to be the remaining high-value software talent.
> 1. You treat your code as a means to an end to make a product for a user.
> 2. You treat the code itself as your craft, with the product being a vector for your craft.
Among the vocal devs, maybe. Most devs choose a trade-off between #1 and #2, leaning heavily towards #2.
And the reason is, very few people actually want the output of their labour to be poor, no matter how superficially good it looks.
I find, like the poster below me said, that the people presenting this false dichotomy are desperate to legitimise their production of lovecraftian code horrors.
It's a trick, usually a verbal one, that people who espouse woo, and who know they are BSing, use to "borrow" legitimacy from a field that is already respectable. Like ghost-believers referring to themselves as occult scientists: they throw the word "scientist" in there to borrow the legitimacy and respectability of actual scientists[1].
Throwing in "user delight" or "useful to the user" into their arguments for vibe-coding is their way of borrowing the respectability of actual developers, who had always been developing for an actual user, and who cared about their user enough to target that specific use-case.
The folks in #1 are simply borrowing what they can from the respectable practitioners to paper over the fact that all they care about is themselves, not actual users.
The clear majority of them are hoping to hit a jackpot; the borrowed terms, phrases and words are simply a poor attempt to cover up their naked greed.
---------------------------
[1] There's probably a joke in there somewhere about "software engineers" :-)
Because to judge whether code is good, you need to see it, and I'd presume most open source code is free.
Code quality makes the difference between a janky system that works most of the time and a rock solid system that is an enjoyment to use.
QA can only apply duct tape. If your state management isn’t clean, the UX will suck. If your functions aren’t clean, you will keep chasing bugs.
Luckily, AI is capable of writing good code. Today, that still requires some amount of hand holding, but it’s getting better.
So I’m confused. I really am, help me understand your world view.
I've always found my self at the clean code end of the spectrum (which also means simple and flexible to me) because it makes it easier to be flexible for customer needs. So I like good code but it's a means to an end.
But many people will refuse a purchase because the product breaks randomly.
Being created by a human doesn't imbue any specific amount of reliability to a piece of software.
This sounds more like a product owner not a developer
Well, you certainly should. Those people made AI based coding a possibility in the first place.
Treating code as a means to an end doesn't guarantee success for your product any more than treating code as a craft does.
The two-camp construct is a tool to establish the believer as a member of the supreme one camp group; apart from the lesser campers. Their entire identity and self worth is built around one-camp membership.
So instead of criticizing the gentleman for his dichotomy, I feel like adding more states to it to complete the picture. And I mean it; this is not making fun of anyone. It means I tamed myself into stopping premature optimization, knowing the gigahertz won't care about me saving a cycle, and that it might even hinder the masterfully crafted compilers from optimizing even more.
3. Partly awesome, partly not so much but don’t need to be awesome.
4. Myself has to understand this masterpiece of human thinking in six months and more from here and after a 20 hour stint I myself marvel at the result so better comment before committing the code into oblivion.
5. Embarrassing, but people are delighted.
6. This made headlines years ago, but some code doesn’t age well.
7. OMG, OOP might look right, but a new paradigm rushes me into a refactoring frenzy and makes the code look better without breaking any features!
8. I used a tool to check for bottlenecks and it runs well, but looks like crap. So what do I do?
9. Loop unrolling is still a thing or not? Do compilers have headaches just like I do? Do they really care or simply follow orders or adjust to the target platform and settings?
Perhaps this is an antiquated concept which has fallen out of favor in Silicon Valley, but code doesn't just run in an imaginary world where there are no consequences and everything is fun all the time. You are responsible for the product you sell. If you sell a photo app that has a security bug, you are responsible for your customers' nude photos being leaked. If you vibe-code a forum and store passwords in plaintext, you are responsible for the inevitable breach and harm.
The "general public" might not care, but that is only because the market is governed by imperfect information. Ultimately the public are the ones that get hurt by defective products.
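The plaintext-password failure mentioned above is cheap to avoid. As a minimal sketch (Python standard library only; the function names are my own illustration, not from any codebase discussed in the thread), store a per-user salt and a derived key instead of the password itself:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Never store the password itself: store (salt, derived key).
    salt = os.urandom(16)
    key = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes) -> bool:
    # Re-derive with the stored salt and compare in constant time.
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, key)
```

A breach then leaks only salts and derived keys, which are expensive to reverse, rather than every user's actual password.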
But I will demand my money back or sue you if your crappy code leaks my personal information, destroys my property, performs worse than advertised, or otherwise harms me in some way.
There was sloppy code before LLMs. It’s what they were trained on. And it’s why they generate slop.
All that code that was rushed out to meet an arbitrary deadline made up by the sales team, written by junior and lazy senior developers, pushed by the “code doesn’t matter” folks. Code written by the enterprise architecture astronauts with a class hierarchy deeper than the Mariana Trench. A few years down the line you get bloated, slow, hard-to-maintain spaghetti piles of dung. Windows rendering text that stutters when you scroll it. Virtual keyboards that take seconds to pop up. Browser tabs that take more memory than was available to send astronauts to the moon and back.
When humans write it you generally have a few people on a team who are concerned with these things. They try to rein in the slop-generating, “always be shipping” people. You need a mix of both, because each line of code is a liability as much as it’s a new feature.
People who think developers fall into one of two camps.
And people worth listening to.
But I am not a fan of the code it writes most of the time. I want my code to read and behave a certain way. I cannot submit that code, even if it works, if I can't explain it or just don't like it. I then iterate over that code myself, or ask the AI to, until it has the shape I agree with.
For my personal side projects I don't care as much what the code looks like, as long as it works correctly and is easily modifiable. But for work, it remains my responsibility no matter which tool was used.
You (are required to) treat your code as having to fulfill both functional requirements and declared non-functional requirements, including measures of maintainability, reliability, performance, and security and (regulatory/legal) compliance.
I am in both camps. Always have been.
Code janitors about to be in high demand. We’ve always been pretty popular with leadership and it’s gonna get even more important.
Treat code design and architecture as the thing that lets your slop cannons (90% of engineers, even pre-AI) move fast without breaking things.
My output is org velocity.
I'm currently of the opinion that humans should be laser focused on the data model. If you've got the right data model, the code is simpler. If you've got the relevant logical objects and events in the database with the right expressivity, you have a lot of optionality for pivoting as the architecture evolves.
It's about that solid foundation - and of course lots of tests on the other side.
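As a toy sketch of that point (the order-tracking domain and all names here are my own illustration, not from the thread): a flag-shaped model forces a schema and code change for every new question, while an event-shaped model makes the logical events first-class so new questions become queries:

```python
import datetime
from dataclasses import dataclass

# Flag-based model: each new business question needs a new field,
# and history ("shipped, then cancelled?") is simply lost.
@dataclass
class OrderFlags:
    shipped: bool = False
    cancelled: bool = False

# Event-based model: the events themselves are the data. The full
# history is preserved, which is the optionality for later pivots.
@dataclass
class OrderEvent:
    kind: str                  # e.g. "shipped", "cancelled"
    at: datetime.datetime

def status(events: list[OrderEvent]) -> str:
    # One simple query; the latest event wins.
    return events[-1].kind if events else "created"
```

With the right model, `status` stays a one-liner no matter how many event kinds get added later.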
yes. good programmers talk about data structures, bad programmers talk about code
Lots of people try to make their code better by revising their procedures and don't even think about changing their data model, and generally fail. You might not be able to change your data model without changing your code, but they're different activities.
"We'll do it on the app level".
sigh
Amen. Slow and steady, and the feature flywheel just keeps getting faster.
I am stealing that phrase haha
There are two reasons for this. One is that the people who make purchasing decisions are often not the people who suffer from your bad code. If the user is not the customer, then your software can be shitty to the point of being a constant headache, because the user is powerless to replace it.
The other reason is that there's no such thing as "free market" anymore. We've been sold the idea that "if someone does it better, then they'll win", but that's a fragile idea that needs constant protection from bad actors. The last time that protection was enacted was when the DOJ went against Microsoft.
> Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.
Any semblance of accountability for that has been diluted so much that it's not worth mentioning. A bug someone wrote into some cloud service can end up causing huge real-world damage in people's lives, but those people are so far removed from the suits that made the important decisions that they're powerless to change anything and won't ever see that damage redressed in any way.
So yeah, I'm in camp #2 and I'm bitter about AI, because it's just accelerating and exacerbating the enshittification.
Someone on HN wrote recently that everyone who's foaming at the mouth about how AI helps us ship faster is forgetting that velocity is a vector -- it's not just about how fast you're going, but also in what direction.
I'd go further and say that I'm not even convinced we're moving that much faster. We're just cranking out the code faster, but if we actually had to review that code properly and make all the necessary fixes, I'm pretty sure we would end up with a net loss of velocity.
If you have buggy software, people don’t use it if there are alternatives. They don’t care about the code but hard to maintain, buggy code will eventually translate to users trying other products.
The thing here was that if you have two boxes that take the same input and produce the same output at the same speed, do you care what the insides look like?
What if one is delivered in 4 days and the other in 30 days and costs more? Which one will you pick?
I'm weird: I'm part of camp 2, but I think AI can be used to really craft some interesting things. While I appreciate camp 1, camp 2 is what produces better codebases that are easier to maintain. Myself and others have realized that the best practices for humans are also the best practices for getting AI models to fit your code.
- revenue/man-hour, features shipped/man-hour, etc.
- ms response time, GB/s throughput, number of bugs actually shipped to customers, etc.
People in the second camp use AI, but it's a lot more limited and targeted. And yes, you can always cut corners and ship software faster, but it's not going to be higher quality by any objective metric.
Code quality isn't just a fetish. It has real implications for security and the final product.
I've also found that unmaintainable codebases aren't just hard to maintain for humans. LLMs seem to struggle with them just as much
And then they can ship those products much faster than before, because human hours aren't being eaten up writing out all of these abstractions and tests.
The better tooling will let the AI iterate faster and catch errors earlier in the loop.
Right?
But also I know when to put up and make the damn thing work.
I think more often they simply picked the wrong programming language as a target. In my experience, AI is especially bad at writing Typescript/Javascript, which happens to overlap with the most widely used language. I have negative things to say about AI too when developing in that ecosystem. If I only ever used AI in that ecosystem I'd probably tell you it is useless with the rest of them.
But my daily work sees me working in more than one language and when I am in some other language environments I have no reservations about AI whatsoever. AI vs good code is no longer even at odds with each other. In those certain languages, the models write good, stable, production-ready code pretty much all the time. It is really quite amazing.
That's however what makes for stable systems, deep knowledgable engineers, and structurally building the basis for the future.
If all you care about is getting money for your product slop, it's not different than late night marketed crap, or fast fashion...
1. The people who don't understand (nor care) about the risks and complexity of what they're delivering; and
2. The people that do.
Widespread AI usage is going to be a security nightmare of prompt injection and leaking credentials and PII.
> No one has ever made a purchasing decision based on how good your code is.
This just isn't true. There's a whole process in purchasing software, buying a company or signing a large contract called "due diligence". Due diligence means to varying degree checking how secure the product is, the company's processes, any security risks, responsiveness to bugfixes, CVEs and so on.
AI is going to absolutely fail any kind of due diligence.
There's a little thing called the halting problem, which in this context basically means there's no way to guarantee that the AI will be restricted from doing anything you don't want it to do. An amusing example was an Air Canada chatbot that hallucinated a refund policy that a court said it had to honor [1].
How confident are we going to be that AIs won't leak customer information, steal money from customers and so on? I'm not confident at all.
[1]: https://arstechnica.com/tech-policy/2024/02/air-canada-must-...
If I do a bad job, I get a bunch of bug reports, I get called out for writing bugs, etc. We've been pushed to use AI, and it's hurt more than it's helped with our code base.
I routinely close tabs when I sense that low-quality code is wasting time and resources, including e-commerce sites. Amazon randomly cancelled my account so I will never shop from them. I try to only buy computers and electronics with confirmed good drivers. Etc.
Because the ones that sell crappy code don’t sell to people that can tell the difference.
You think I’d pay for Jira or Confluence if it wasn’t foisted upon me by a manager that has got it in with the Atlassian sales rep?
I don’t even need to see Atlassian’s source code to know it’s sh*t.
1. You treat the house as a means to an end to make a living space for a person.
2. You treat the building construction itself as your craft, with the house being a vector for your craft.
The people who typically have the most negative things to say about buildings fall into camp #2 where cheap unskilled labor is streamlining a large part of what they considered their art while enabling people in group #1 to iterate on their developments faster.
Personally, I fall into the first camp.
No one has ever made a purchasing decision based on how good the pipes inside the walls are.
The general public does not care about anything other than the square footage and color of your house. Sure, if you mess up and one of the houses collapses then that'll manifest as an outcome that impacts the home owner negatively.
With that said, I do have respect for people in the latter camp. But they're generally best fit for homes where that level of craftsmanship is actually useful (think: mansions, bridges, roads, things I use, etc).
I just feel like it's hard to talk about this stuff if we're not clear on which types of construction we're talking about.
>Sure, if you mess up and one of the houses collapses then that'll manifest as an outcome that impacts the home owner negatively.
lol
> No one has ever made a purchasing decision based on how good your code is.
This has since been edited (I suspect OP later realized how ridiculous of a claim it was) but it's an objectively false statement that blindly projects OP's stance on software quality onto the entire population.
People have, in fact, stopped using software because it was too slow or buggy. I've done it, many others have done it. To give you a real example, I'm a fan of JetBrains IDEs functionality-wise, but I've been seriously considering moving away from them due to how bloated and unoptimized they've become in recent years - when your IDE feels more sluggish than one built on top of Electron, something has gone seriously wrong. I don't have to actually read the code itself to know it's bad code, I can feel it by simply using the software.
Just because you don't care doesn't mean others don't. Modern society was not built by people who didn't care about their craft, nor was it built by designing everything with the lowest common denominator in mind.
It might do that, if it had any basis in reality. Why do you believe it does?
I don’t care what kind of steel you used to design my car, but I care a great deal that it was designed well, is safe, and doesn’t break down all the time.
Craft isn’t a fussy thing.
Shit bloated code is one of the reasons Epic Launcher is extremely behind in market share when compared to Steam.
Sure, they ship their product fast. They can iterate faster than Valve. They also add technical debt with each iteration.
Also: we are almost all using a Chrome derived browser instead of Firefox, old IE, old Opera, because of performance and quality. They just won the internet because of the quality of their code. Besides that, all browsers let you browse the internet.
When people can choose, they choose quality most of the time.
As such AI is a net negative as it would be in writing a novel or making any other kind of art.
Just because one falls in the "Cha bu duo" camp and (potentially) looks down on the "Kaizen" types doesn't mean the two products are equivalent.
That also doesn't mean that slop / cha bu duo / made in china products are bad, mind you. They have their place, and occasionally a Kaizen approach would be detrimental to getting "something" done quick and dirty that will likely work ok anyway. The danger is in believing that just because they're "ok" this means they are equivalent (or at least largely overlap) with the more refined products, which is demonstrably false and can be a dangerous attitude to have.
John Carmack has talked about it in a podcast a few years ago, and he's the closest popular programmer that I can think of who was simply obsessed with milking every tiny ounce of GPU performance, yet none of his effort would matter if Doom and Quake weren't fun games.
I think that the real two camps here are those who haven't carefully - and I mean really carefully - reviewed the code the agents write and haven't put their process under some real stress test vs those who have. Obviously, people who don't look for the time bombs naturally think everything is fine. That's how time bombs work.
I can make this more concrete. The program wants to depend on some invariant, say that a particular list is always sorted, and the code maintains it by always inserting elements in the right place in the list. Other code that needs to search for an element depends on that invariant. Then it turns out that under some conditions - due to concurrency, say - an element is inserted in the wrong place and the list isn't sorted, so one of the places that tries to find an element in the list fails to find it.

At that point, it's a coin toss whether the agent will fix the insertion or the search. If it fixes the search, the bug is still there for all the other consumers of the list, but the testing didn't catch that. Then what happens is that, with further changes, depending on their scope, you find that some new code depends on the intended invariant and some doesn't. After several such splits and several failed invariants, the program ends up in a place where nothing can be done to fix a bug. If the project is "done" before that happens - you're in luck; if not, you're in deep, deep trouble.

But right up until that point, unless you very carefully review the code (because the agents are really good at making code seem reasonable under cursory scrutiny), you think everything is fine. Unless you go looking for cracks, every building seems stable until some catastrophic failure, and AI-generated code is full of cracks that are just waiting for the right weight distribution to break open and collapse.
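The sorted-list scenario above can be sketched in a few lines (Python; `buggy_insert` is a stand-in for the hypothetical concurrency bug, since a real race wouldn't fit in a sketch):

```python
import bisect

xs: list[int] = []  # Invariant: xs is always kept sorted.

def insert(xs, v):
    # Correct writer: maintains the invariant.
    bisect.insort(xs, v)

def buggy_insert(xs, v):
    # Stand-in for the concurrency bug: the element lands in the
    # wrong place, silently breaking the invariant.
    xs.append(v)

def contains(xs, v):
    # Reader that depends on the invariant: binary search is only
    # correct while xs actually stays sorted.
    i = bisect.bisect_left(xs, v)
    return i < len(xs) and xs[i] == v

for v in (5, 1, 3):
    insert(xs, v)           # xs == [1, 3, 5], invariant holds

buggy_insert(xs, 2)         # xs == [1, 3, 5, 2], invariant broken
print(2 in xs)              # True: the element is really there...
print(contains(xs, 2))      # False: ...but the fast search misses it
```

"Fixing" `contains` here (say, by switching it to a linear scan) would make this one caller work while leaving every other consumer of the invariant broken, which is exactly the coin toss described above.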
So it sounds to me that the people you think are in the first camp not only don't care how the building is built as long as it doesn't collapse, but also believe that if it hasn't collapsed yet it must be stable. The first part is, indeed, a matter of perspective, but the second part is just wrong (not just in principle but also when you actually see the AI's full-of-cracks code).
Invariants must be documented as part of defining the data or program module, and ideally they should be restated at any place they're being relied upon. If you fail to do so, that's a major failure of modularity and it's completely foreseeable that you'll have trouble evolving that code.
Most importantly, this isn't hypothetical. We see that agents write programs that after some number of changes just collapse because they don't converge. They don't transition well between layers of abstractions, so they build contingencies into multiple layers, and the result is that after some time the codebase is just broken beyond repair and no changes can be made without breaking something (and because of all the contingencies, reproducing the breakage can be hard). This is why agents don't succeed in building even something as simple as a workable C compiler even with a full spec and thousands of human-written tests.
If the agents could code well, no one would be complaining. People complain because agent code becomes structurally unsound over time, and then it's only a matter of time until it collapses. Every fix and change you make without super careful supervision has a high chance of weakening the structure.
Neither do people, yet people manage to write software that they can evolve over a long time, and agents have yet to do that. I think it's because people can move back and forth between levels of abstraction, and they know when it's best to do it, but agents seem to have a really hard time doing that.
On the other hand, agents are very good at debugging complex bugs that span many parts of the codebase, and they manage to do that even with their context limitations. So I don't think it's about context. They're just not smart enough to write stable code yet.
You need a specific methodology to do that, one that separates "programming in the large" (the interaction across program modules) from "programming in the small" within a single, completely surveyable module. In an agentic context, "surveyable" code realistically has to imply a manageable size relative to the agent's context. If the abstraction boundaries across modules leak in a major way (including due to undocumented or casually broken invariants) that's a bit of a disaster, especially wrt. evolvability.
But this happens at all levels because there are many more than just two abstraction levels. E.g. do I change a subroutine's signature or do I change the callsite? Agents get it wrong. A lot.
Another thing they just don't get (because they're so focused on task success) is that it's very often better to let things go wrong in a way that could inform changes rather than get things to "work" in a way that hides the problem. One of the reasons agent code needs to be reviewed even more carefully than human code is that they're really good at hiding issues with potentially catastrophic consequences.
That's realistically because they aren't even trying to answer that question by thinking sensibly about the code. Working in a limited context with anything they do leaves them guessing and trying the first thing that might work. That's why they generally do a bit better when you explicitly ask them to reverse engineer/document a design of some existing codebase: that's a problem that at least involves an explicit requirement to comprehensively survey the code, figure out what part matters, etc. They can't be expected to do that as a default. It's not even a limitation of existing models, it's quite inherent to how they're architected.
It's funny, but one of the lessons I've learnt working with agents is just how much design matters in software, and that it isn't just a matter of craftsmanship pride. When you see the codebase implode after the tenth new feature and realise it has to be scrapped because neither human nor AI can salvage it, the importance of design becomes palpable. Before agents it was hard to see, because few people write code like that (just as no one would think to make a curtain rod load-bearing when building a structure).
And let's not forget that the models hallucinate. Just now I was discussing architecture with Codex, and what it says sounds plausible, but it's wrong in subtle and important ways.
I mean, if you properly define "do well" as getting a first draft of something interesting that might or might not be a step towards a solution, that's not completely wrong. A pass/fail test is verified feedback of a sort, that the AI can then do quick iteration on. It's just very wrong to expect that you can get away with only checking for passing tests and not even loosely survey what the AI generated (which is invariably what people do when they submit a bunch of vibe-coded pull requests that are 10k lines each or more, and call that a "gain" in productivity).
When I make a purchasing decision I expect the payment to go through quickly and correctly, and for whatever I purchase to arrive in reasonable time. All of this rests on the reputation of the software being solid. If a user hears a whiff of a purchase not being executed correctly, of money or goods going somewhere else, that's a death sentence for your company.
The industry is now pushing for an agentic web where agents can do this on your behalf. But if we have slop foundations, and then add unstable models that can hallucinate and make mistakes on top of them, it's just a recipe for catastrophe. I think relegating camp #2 to the category of mission-critical software only ignores how much reliability goes into the everyday services people use.
It is strange, but not really upsetting to me, that I am not particularly anal anymore about the code Claude is generating for me. But that could also be a function of how low-stakes the projects are, or of the fact that nothing has exploded yet.
That influenced some unfortunate interactions with people and meant that no one could be held to their agreements since you never knew if they received the agreements.
So, well, code quality kind of matters. But I suppose you're still right in a sense - currently people buy and use complete crap.
Why did WhatsApp grow so big while thousands of previous chat apps didn't? Code quality (scalability).
In other words, it "works reasonably well" until it doesn't. That point might break your business. Many such cases.
> The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.
That’s fine for people to argue those things.
My criticisms of AI are mainly
1. The principle of the GenAI approach
2. Political
The first point is about how stupid the GenAI approach is (I could link to the arguments). But I have left the door open for pure results, i.e. demonstrating that it (despite my belief) works in practice. So this is not about craftsmanship.
I’ve previously commented that I would respect a more principled approach even though it takes my craft.[1]
> Personally, I fall into the first camp.
Of course you do. Because...
> No one has ever made a purchasing decision based on how good your code is.
In these dichotomies the author typically puts himself forward as the pragmatist and the other side as the ones who care about things that are just irrelevant to the purchasing decision or whatever.
But the AI haters have made real arguments against AI, against the people behind AI, and so on. It’s not a matter of vibes. So maybe respond to those arguments? We don’t need another armchair lesson in psychological inclinations.
Be a pragmatist for all I care. But beware of the bloodless pragmatist who only sees what is, essentially, instant product gratification and not what comes after, or from the sides, or from below.
[1] https://news.ycombinator.com/item?id=47358696
The capabilities and limitations of your product are defined in part by how good the code is. If you write a buggy mess (whether you write it yourself or vibe code it), people aren't going to tolerate that unless your software has no competitors doing better. People very much do care about the results that good code provides, even if they don't care about the code as an end in itself.
The question is how much does the market value this, and how much it should value it.
For one-off scripts and software built for personal use, it doesn't matter. Go nuts. Move fast and break things.
But the quality requirement scales proportionally with how many people use and rely on the software. And not just users, but developers. Subjective properties like maintainability become very important if more than one developer needs to work on the codebase. This is true even for LLMs, which can often make a larger mess if the existing code is not in good shape.
To be clear, I don't think LLMs inevitably produce poor quality software. They can certainly be steered in a good direction. But that also requires an expert at the wheel to provide good guidance, which IME often takes as much, if not more, work than doing it by hand.
So all this talk about these new tools replacing the craft of programming is overblown. What they're doing, and will continue to do unless some fundamental breakthrough is reached, is make the creation of poor quality software very accessible. This is not the fault of the tools, but of the humans who use them. And this should concern everyone.
> The general public does not care about anything other than the capabilities and limitations of your product.
It's absolutely asinine to say the general public doesn't care about the quality and experience of using software. People care enough that Microsoft's Windows director sent out a very tail-between-legs apology letter due to the backlash.
It's as it always has been, balancing quality and features is... well, a balance and matters.
But you can have an extremely well designed product that functions flawlessly from the perspective of the user, but under the hood it's all spaghetti code.
My point was that consuming software as a user of the product can be quite different from the experience of writing that software.
Facebook is a great example of this, there's some gnarly old spaghetti code under the hood just from the years of legacy code but those are largely invisible to the user and their experience of the product.
I'd just be careful to separate code elegance from product experience, since they are different. Related? Yeah, sure. But they're not the same thing.
Good code makes it easier for the business to move fast and stay ahead of the competition while reducing expenses for doing so.
A lot, and I mean a lot, of software work is trying to justify its existence by constantly playing and toying with a product that worked for everyone in version 1.0, whether it be to justify a job or to justify charging customers $$ per month to "keep current".
> Facebook is a great example of this, there's some gnarly old spaghetti code under the hood just from the years of legacy code but those are largely invisible to the user and their experience of the product.
I'm sure that's the case in basically everything, it sorta doesn't matter (until it does) if it's cordoned off into a corner that doesn't change and nominally works from the outside perspective.
But those cases are usually isolated, if they aren't it usually quickly becomes noticeable to the user in one way or another, and I think that's where these new tools give the illusion of faster velocity.
If it's truly all spaghetti underneath, the ability to make changes nosedives.
My own two cents: there's an inherent tension with assistants and agents as productivity tools. The more you "let them rip", the higher the potential productivity benefits. And the less you will understand the outputs, or even if they built the "correct thing", which in many cases is something you can only crystalize an understanding about by doing the thing.
So I'm happy for all the people who don't care about code quality in terms of its aesthetic properties and who are really enjoying the AI era; that's great. But if your workload is not shifting from write-heavy to read-heavy, you will inevitably be responsible for a major outage or quality issue. And more so, anyone like this should ask why anyone should feel the need to employ you for your services in the future, since your job amounts to "telling the LLM what to do and accepting its output uncritically".
I think that's actually a good way to look at it. I use AI to help produce code in my day to day, but I'm still taking quite a while to produce features and a lot of it is because of that. I'm spending most of my time reading code, adjusting specs, and general design work even if I'm not writing code myself.
There's no free lunch here, the workflow is just different.
Did that guy make it because Rust, and because he's passionate about that sort of thing? Probably.
But it's fucking fast. So did he sell out to OpenAI? Of course he did.
And thusly, both camps.
I got my company to switch from GitHub to GitLab after repeated outages. I've always moved companies to away from using GCP or Azure because of their reliability problems.
This is a really funny comment.
Everybody else dealing with AIgen is suffering from the AI spitting out the end product. It's as if we asked AI to generate the compiled binary instead of the source.
Artists can't get AIgen to make human-reviewed changes to a .psd file or an .svg, it poops out a fully formed .png. It usurps the entire process instead of collaborating with the artist. Same for musicians.
But since our work is done in text and there's a massive publicly accessible corpus of that text, it can collaborate with us on the design in a way that others don't get.
In software, the "power of plain text" has given us a unique advantage over other kinds of creative work. Which is good, because AIgen tends to be clumsy and needs guidance. Why give up that advantage?
It isn’t that though, the “end” here is making money, not building products for users. Typically, people who are making products for users care about the craft.
If the means-to-end people could type words into a box and get money out the other side, they would prefer to deal with that than products or users.
Thats why ai slop is so prevalent — the people putting it out there don’t care about the quality of their output or how it’s used by people, as long as it juices their favorite metrics - views, likes, subscribes, ad revenue whatever. Products and users are not in scope.
I don't think all means-to-end people are just in it for money, I'll use the anecdote of myself. My team is working on a CAD for drug discovery and the goal isn't to just siphon money from people, the goal is legitimately to improve computational modeling of drug interactions with targets.
With that in mind, I care about the quality of the code insofar as it lets me achieve that goal. If I vibe coded a bunch of incoherent garbage into the platform, it would help me ship faster but it would undermine my goal of building this tool since it wouldn't produce reliable or useful models.
I do think there's a huge problem with a subset of means-to-end people just cranking out slop, but it's not fair to categorize everyone in that camp this way ya'know?
Also it’s more than an art or a craft. It’s identity. Many people hold their coding skill as an identity they honed it over many years and it put them in the ranking they are in today. This kind of destruction of identity by AI is what causes people to deny reality.
This stuff also covers your job, even if you don’t hold coding as an identity it is still responsible for many people’s livelihoods. Like ai is convenient right now but what happens when it gets even more convenient? What happens to your job and your life especially if software was all you did for over a decade?
I’m in camp 2 and I can’t lie to myself about what’s happening. I’ve embraced ai and I now vibe code daily even though I was originally an artistic functional programmer. This ability comes at high cost. I’m able to do it because I hold zero identity. I dont identity with anything and I don’t put too much pride into anything I do or at least if I put pride into something I’m always conscious of severing the entire thing at a moments notice.
Lying to oneself is a powerful ability but it becomes a liability when society goes through an intense paradigm shift. This is what is happening now.
RollerCoaster Tycoon.
> The general public does not care about anything other than the capabilities and limitations of your product. Sure, if you vibe code a massive bug into your product then that'll manifest as an outcome that impacts the user negatively.
People care how fast you're able to ship updates, new features, and bugfixes. If you're working with a pile of vibe-coded spaghetti slop it's going to take longer to deliver these.
Oh wait, they're the biggest car builder in the world.
I don't believe there is a dichotomy, or even a spectrum of developers, but a complex landscape. Of course, that is also a bald assertion, but it is a weaker claim, and no less valid than the original assertion.
That said, independent of assertions about developer classification, in my experience there is a clear connection between the quality of the software and the quality of the product, and I've often seen evidence of poor quality software compromising the product and user experience. Poor quality leaks out. Remember BSOD? Maybe not.
I've become hesitant to unleash coding agents simply because the code base ends up looking like the victim of drive-by coding, littered with curious lambda functions, poor encapsulation, etc. The only thing I use coding agents for is exploratory and throwaway code, like one off scripts. I love coding agents for all the ancillary work, I protect the critical path like mamma bear her cubs.
Coding agents make all the second order work easier so I have more bandwidth to focus on the critical parts. Again, software is a landscape, but at least for my work I can't abdicate parts to a coding agent and "works" is an inadequate standard. I need bullet-proof and unfailingly correct.
Token generation definitely produces a certain stream-of-consciousness, Kerouac-as-programmer style. As long as I don't ever have to maintain or modify the code myself, am not concerned about cost control (especially in cloud environments where I am billed by compute cycles), I am fine with quick and dirty and done. I sigh when I see what should be a six line change in my head balloon to 300 lines of generated code, revert, and write the six lines myself. Would take longer to write the prompt to get the coding agent to fix it than fix it myself. It would grind away for several minutes and burn up an astonishing number of tokens for simple fixes.
Anything linguistic the coding agents do well. Want to rename a variable in 300 different source files? I mean, it is overkill to be running a 200B parameter model to avoid writing the sed script I might write otherwise, but who am I to turn my nose up at my work being subsidized by investors? I don't think that economic model will go on forever.
Any higher abstraction is being cargo-culted from language. This is where LLMs are weakest, because they don't understand abstraction or encapsulation, only the artifacts as expressed in language.
Outside of exploratory and throwaway code, I use inline prompting to precisely target and scope changes, and then identify the cleanup and refactoring required to bring the code to acceptable quality. Although I do a lot of cleanup by hand as well. Rather than tell the coding agent that a lambda function wrapping a one liner that is used in one place in the code is dumb, I'll just remove the lambda myself. The coding agent can't adopt and generalize lessons from code review comments the way a human software engineer can -- I am forced to burn tokens every single time to get it to dial back its insane love affair with lambda functions. Again, not a big deal while costs remain subsidized.
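The lambda pattern described above is easy to show concretely. This is a minimal sketch with hypothetical names: a lambda that wraps a one-liner and is called exactly once, followed by the by-hand cleanup the commenter applies.

```python
# The generated pattern (hypothetical example): a lambda wrapping a
# one-liner, used in exactly one place.
user_input = "  Hello "
normalize = lambda s: s.strip().lower()
cleaned = normalize(user_input)

# The cleanup: the lambda adds a name and an indirection for no benefit,
# so just inline the expression.
cleaned_inline = user_input.strip().lower()

assert cleaned == cleaned_inline == "hello"
```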
Operations and maintenance overhead in the type of software I've written through my career dominates over programming cost. Telecom, aerospace, e-commerce, etc. Systems are long lived. Outages are expensive. Regulatory compliance is a large factor. I've worked in shops with 70% cost overhead in operations. A $50K a month cloud compute bill can be reduced to $15K. There's usually some low hanging fruit and poor quality software doesn't account for all of this, but it is a significant fraction. Like a poorly written termination condition in a container that essentially was a busy wait burning thousands of dollars a month doing nothing (true story).
I am currently writing a trading system, and can't afford to hallucinate a bunch of bad trades. Like the developer landscape, the software landscape is complex and not uniform. So I will concede there are probably many types of software outside of my own experience that can be implemented largely by coding agents. Low consequence. Marginal operational overhead.
I might assert that coding agents forte is autogenerating technical debt, but then I am just being a wag. Less waggishly I would say use of coding agents is subject to engineering judgement, like any tool. Who is going to read that headline or give it a billion dollar valuation?
Yes, some people left to their own devices would take twice as long to ship a product half as buggy only to find out the team that shipped early has taken a massive lead on distribution and now half the product needs to be reworked to catch up.
And some people left to their own devices will also ship a buggy mess way too early to a massive number of people and end up with zero traction or validation out of it, because the bugs weren't letting users properly experience the core experience.
So we've established no one is entirely right, no one is entirely wrong; it's yin/yang, and really both sides should ideally exist in each developer in a dynamic balance that changes based on the situation.
-
But there's also a 3rd camp that's the intersection of these: You want to make products that are so good or so advanced *, that embracing the craft aspect of coding is inherent to actually achieving the goal.
That's a frontend where the actual product is well outside typical CRUD app forms + dashboard and you start getting into advanced WebGL work, or complex non-standard UI state that most LLMs start to choke on.
Or needing to do things quicker than the "default" (not even naive) approach allows for UX reasons. I ran into this using Needleman-Wunsch to identify UI elements on return visits to a site without an LLM request adding latency: to me that's the "crafty" part of engineering serving an actual user need. It's a completely different experience getting near instant feedback vs the default today of making another LLM request.
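For readers unfamiliar with the algorithm mentioned, here is a minimal sketch of Needleman-Wunsch global alignment scoring. The use of character sequences and these particular match/gap weights is illustrative; the commenter's actual use case would presumably compare sequences of DOM tags or attribute tokens.

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score for two sequences.

    Higher scores mean the sequences align more closely. Weights are
    arbitrary illustrative values, not the commenter's actual parameters.
    """
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[n][m]

# Identical sequences score highest; diverging ones score lower.
assert nw_score("nav", "nav") == 3
assert nw_score("nav", "div") < 3
```

Because it is pure dynamic programming with no model call, the comparison runs in microseconds, which is the latency point the comment is making.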
And it's this 3rd camp's feedback on LLM development that people in the 1st camp wrongly dismiss as coming from the 2nd, craft-maxxed group. For some use cases, slop is actually terminal.
Intentionally contrived example, but if you're building a Linear competitor and you vibecode a CRDT setup that works well enough, but has some core decisions that mean it'll never be fast enough to feel instant and frontend tricks are hiding that, but now users are moving faster than the data and creating conflicts with their own actions and...
You backed yourself into a wall that you don't discover until it's too late. It's only hypervigilance and strong taste/opinion at every layer of building that kind of product that works.
LLMs struggle with that kind of work right now and what's worrying is, the biggest flaw (a low floor in terms of output quality) doesn't seem to be improving. Opus 4.6 will still try to dynamically import random statements mid function. GPT 5.3 tried to satisfy a typechecker by writing a BFS across an untyped object instead of just updating the type definitions.
RL seems to be driving the floor lower actually as the failure modes become more and more unpredictable compared to even GPT 3.5 which would not even be "creative enough" to do some of these things. It feels like we need a bigger breakthrough than we've seen in the last 1-2 years to actually get to the point where it can do that "Type 3" work.
* good/advanced to enable product-led growth, not good/advanced for the sake of it
We've already seen a large-scale AWS outage because of this. It could get much worse. In a few years, we could have major infrastructure outages that the AI can't fix, and no human left understands the code.
AI coders, as currently implemented, don't have a design-level representation of what they're doing other than the prompt history and the code itself. That inherently leads to complexity growth. This isn't fundamental to AI. It's just a property of the way AI-driven coding is done now.
Is anybody working on useful design representations as intermediate forms used in AI-driven coding projects?
"The mending apparatus is itself in need of mending" - "The Machine Stops", by E.M. Forster, 1909.
I'm glad to see that the author of the article is putting an emphasis on simplicity here, especially given the nature of their business. Those that fully embrace the "code doesn't matter" approach are in for a world of hurt.
Long-term, I expect there will be more tooling and model advancements to help us in this regard - and there will certainly be a big economic incentive for that soon. But in the meantime it feels like a dam has been breached and we're just waiting for the real effects to become manifest.
I'm not saying it's wrong, because I haven't actually looked for alternative sources, just that the source isn't great.
That new design-level representation will be code.
It will need to be code, because prompts, while dense, are not nearly deterministic enough.
It will need to be much higher level code, because current code, while deterministic, is not nearly dense enough.
There needs to be more design representation indeed.
The trouble is people don't want to bother reviewing the changes.
Sure, I can read the diffs as they're generated (and I do). But proper transparency goes further than that, and it's being stripped away.
The more common use case is that these tools struggle immensely on anything outside the happy path.
Whatever the hell economics was supposed to do, right now it seems to be causing every industry to produce worse products, lay off more people, and concentrate wealth in an aristocratic subset of the population, and this has been going on for the better part of my entire lifetime. If we're to reverse this trend, we need to stop pretending that economics is a natural force and remember that it is a complex system made of policy decisions that can in fact be the wrong ones
The whole business strategy for those companies is to be the one big monopolist left standing. That is why they are bleeding money, offering token prices well below cost so that they can grow.
Plus they can always lobby the state to ban foreign competition for security reasons.
It takes a lot of work to make cheap, low precision parts work together reliably. The Rolex has it easy, all the parts are precisely built at a great cost and everything fits perfectly. With the cheap alarm clock, you don't know what you will get, so you have to account for every possible defect, because you won't get anything better with your budget and the clock still needs to give you an idea about what time it is.
The parallel in software would be defensive programming, fault tolerance, etc... Ironically, that's common practices in critical software, and it is the most expensive kind of software to develop, the opposite of slop.
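The "cheap parts, careful assembly" idea translates directly into code. Here is a minimal sketch, with an entirely hypothetical sensor, of the defensive style the comment describes: assume the input may be missing, malformed, or out of range, and degrade gracefully instead of crashing.

```python
def read_temperature(raw):
    """Defensively normalize a reading from an unreliable (hypothetical) sensor.

    The cheap-alarm-clock approach: don't trust any part of the input.
    All thresholds here are made-up illustrative values.
    """
    FALLBACK = 20.0              # last-resort default, assumed acceptable
    VALID_RANGE = (-40.0, 85.0)  # plausible range for a cheap sensor
    try:
        value = float(raw)
    except (TypeError, ValueError):
        return FALLBACK          # garbage on the wire -> safe default
    if value != value:           # NaN never equals itself
        return FALLBACK
    lo, hi = VALID_RANGE
    return min(max(value, lo), hi)  # clamp physically impossible spikes

assert read_temperature("21.5") == 21.5
assert read_temperature(None) == 20.0   # malformed input falls back
assert read_temperature(999) == 85.0    # impossible spike is clamped
```

As the comment notes, this tolerance is expensive: every possible defect needs an explicit handling path, which is the opposite of quick-and-dirty generation.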
It would make sense to me that a parallel mechanism could apply to Soviet engineering. If material and technologically advanced capital are scarce, but engineers are abundant, you would naturally spend more time doing proper engineering, which means figuring out how to squeeze the most out of what you have available.
aka "fitting".
I wrote a blog post on why Soviet-style engineering is bad: https://blog.est.im/2026/stderr-04
> Everything was ruthlessly sacrificed to cut costs, including pilot safety.
If we translate this analogy back to AI driven software development, what would be the equivalent of "pilot safety"?...
If a technology to build airplanes quickly and cheaply existed and was made available to everyone, even to people with no aeronautical engineering experience, flying would be a much scarier ordeal than it already is.
There are good reasons for the strict safety and maintenance standards of the aviation industry. We've seen what can happen if they're not followed.
The fact that the software industry doesn't have similar guardrails is not something to celebrate. Unleashing technology that allows anyone to create software without understanding or even caring about good development practices and conventions is fundamentally a bad idea.
(Confession: "good code will still win" was my suggestion- IIRC they originally had "Is AI slop the future?". You win some you lose some.)
I don't fully agree with this optimistic view. Unfortunately, for now, coding agents produce code that, if not further optimized upon "human" request, often has more complexity than necessary.
It's true that this requires more computational effort for the agents themselves to debug or modify it, but it's also true that the computational cost is negligible compared to the benefit of having features working quickly.
In other words: agents quickly generate hyper-complex and unoptimized code. And the speed of delivery provides more immediate benefits than the costs resulting from bad code.
On the other hand, it's also true that the "careful eye" of an experienced developer can optimize and improve the output in a few simple iterations.
So overall (and unfortunately) the "bad code", if it immediately works, can win against (or alongside) good code.
All the change and shuffle feels like an inevitable consequence of so much communication and competition between companies, and cultures and such. Gone are the days where a software product can remain stagnant. Someone else will build something that does a bit more, or if nothing else, does something new, and it will take people's attention away.
Everyone is stuck trying to keep up with trends, even if those trends don't make any sense.
We are at the point where a single class can be dirty but the API of the classes should be clean. There’s no point reviewing the internals of a class anymore. I’m more or less sure that they would work as intended.
Next step is that of a microservice itself. The API of that microservice should be clean, but the internals may be whatever. We are 10% of the way there.
That's an issue I have with Claude actually. I found it very good at breaking abstractions to get the job done. This is what I'd call slop (more so than the class internals).
How fast it was to create an initial version of a piece of software can be easily measured.
But how efficient it is, how easy it is to make changes to it, how easy it is to debug it, how easy it is to extend in the direction the domain requires... all of these cannot be easily measured or quantified, yet they are 10 times more important than that initial creation time. For software that has to run and be maintained for decades, delivering value all that time, it does not really matter whether the initial version was created in 5 minutes or 1 month, as long as the 5-minute version does not contribute negatively to all those non-measurable, non-marketable traits of the software.
It is like how camera marketing was mostly about the megapixel value, instead of something vastly more important like low light performance, dynamic range, or fast autofocus. Because the LCD of the market won't be able to grasp the relevance, and would not act on it. So it was all about megapixels, but at least that does not have a lot of negative consequences, unlike the marketing around AI...
What if your AI uses an O(n) algorithm in a function when an O(log n) implementation exists? The output would still be "correct"
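A minimal illustration of that gap: both functions below return the same "correct" answer on sorted data, but one ignores the sort order entirely. This is a generic sketch, not tied to any particular AI-generated code.

```python
from bisect import bisect_left

def contains_linear(sorted_xs, target):
    """O(n): scans every element, ignoring that the list is sorted."""
    return any(x == target for x in sorted_xs)

def contains_binary(sorted_xs, target):
    """O(log n): exploits the sort order via binary search."""
    i = bisect_left(sorted_xs, target)
    return i < len(sorted_xs) and sorted_xs[i] == target

xs = list(range(0, 1_000_000, 2))  # sorted even numbers
# Both are "correct" -- only a reviewer (or a benchmark) sees the difference.
assert contains_linear(xs, 999_998) == contains_binary(xs, 999_998) == True
assert contains_linear(xs, 3) == contains_binary(xs, 3) == False
```

A test suite that only checks return values will never flag the first version; the asymptotic difference surfaces as a production cost instead.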
No, unfortunately. In a past life, in response to an uptime crisis, I drove a multi-quarter company-wide initiative to optimize performance and efficiency, and we still did not manage to change the company culture regarding performance.
If it does not move any metrics that execs care about, it doesn't matter.
The industry adage has been "engineer time is much more expensive than machine time," which has been used to excuse way too much bloated and non-performant code shipped to production. However, I think AI can actually change things for the better. Firstly, IME it tends to generate algorithmically efficient code by default, and generally only fails to do so if it lacks the necessary context (e.g. not knowing that an input is sorted).
More importantly though, now engineer time is machine time. There is now very little excuse to avoid extensive refactoring to do things "the right way."
Performance can be a direct target in a feedback loop and optimised away. That's the easy part. Taking an idea and poof-ing a working implementation is the hard part.
Test what you care about. If you care about performance, then test your performance. Otherwise performance doesn't matter.
But I don't think the models are going to get there on their own. AI will generate a working mess all day long if you let it. The pressure to write good code has to come from the developer actually reviewing what comes out and pushing back. The incentive is there but it only matters if someone acts on it.
You are reinventing the wheel again with yet another form of reinforcement learning. I don't use any form of LLM assistance for coding, but if I have to continually tell it what to do, how to do it, what not to do, and what assumptions to make, I would rather stimulate my neurons more by doing the damn thing myself.
The narrative of "Yeah it will do everything, provided you tell it how to do everything!" seems baseless, personally. Even if you emulate the smartest human possible, can you emulate an idiot?
Did the best processor win? No, x86 is trash.
Did the best computer language win? No (not that you can pick a best).
The same is true pretty much everywhere else outside computers, with rare exception.
There is an abundance of mediocre and even awful code in products that are not failing because of it.
The worst thing about poorly designed software architecture is that it tends to freeze and accumulate more and more technical debt. This is not always a competitive issue, and with enough money you can maintain pretty much any codebases.
When you work with F500s, you constantly see code and culture that is absolute balls, places I would never work for directly. And yet the roles are always filled. And when the economy gets bad, they have decent engineers.
I call it the fast food quality theory of economics. When the economy is good, low pay jobs tend to have low quality employees and it shows in their products. When the economy gets bad higher quality employees end up downgrading because of layoffs and the quality of these low tier jobs improves.
I was always into software architecture and I was dreaming to be a software architect but after completing university, the position was on the way out.
“- Simple and easy to understand
- Easy to modify”
In my career at fast-moving startups (scaling seed to series C), I’ve come to the same conclusion:
> Simple is robust
I’m sure my former teams were sick of me saying it, but I’ve found myself repeating this mantra to the LLMs.
Agentic tools will happily build anything you want, the key is knowing what you want!
Every abstraction adds some complexity. So maybe the PoC skips all abstractions. Then we need to add a variant to something. Well, a single if/else is simpler than an abstract base class with two concrete implementations. Adding the 3rd as another if clause is simpler than refactoring all of them to an ABC structure. And so on.
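The tradeoff above can be sketched in a few lines. This is a hypothetical example (the function and format names are made up), showing the if/else form that the comment argues is simpler than an abstract base class with one concrete implementation per variant.

```python
# Two variants: a conditional is simpler than an abstract base class
# with two concrete subclasses. (All names here are hypothetical.)
def export(fields, fmt):
    if fmt == "csv":
        return ",".join(fields)
    elif fmt == "tsv":
        return "\t".join(fields)
    # A third variant is still just one more clause; only when variants
    # start carrying their own state or invariants does refactoring to
    # a class hierarchy pay its way.
    else:
        raise ValueError(f"unknown format: {fmt}")

assert export(["a", "b"], "csv") == "a,b"
assert export(["a", "b"], "tsv") == "a\tb"
```

The judgment call is exactly when the chain of clauses stops being the simple option, which is what the following replies debate.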
“Simple” is relative. Investing in a little complexity now can save your ass later. Weighing this decision takes skill and experience
Sure maybe its fast to write that simple if statement, but if it doesn't capture the deeper problem you'll just keep running head first into edge cases - whereas if you're modelling the problem in a good way it comes as a natural extension/interaction in the code with very little tweaking _and_ it covers all edge cases in a clean way.
Like I used 100 gallons of petrol this month and 10 kilos of rabbit feed!
Your brain can still "just click" with agentic coding. But it will have to be at a higher level of abstraction. Perhaps the "click" feels different, and will take some adjusting to.
People forget that good engineering isn't "the strongest bridge", but the cheapest bridge that just barely won't fail under conditions.
Left to their own devices, engineers would build the cheapest bridge they could sell that hopefully won't collapse. And no care for the impact on any stakeholder other than the one paying them.
I don't know any real (i.e. non-software) engineers, but I would love to ask them whether what you said is true. For years now, I've been convinced that we should've stuck with calling ourselves "software developers", rather than trying to crib the respectability of engineering without understanding what makes that discipline respectable.
Our toxic little industry would benefit a lot from looking at other fields, like medicine, and taking steps to become more responsible for the outcomes of our work.
Such a world still has room for unlicensed developers too -- I'd certainly be among them.
Sign me up. When I started programming as a 7 year old kid, it wasn't because I dreamed of spending my days on endless meetings and documents. But hey, 40 years later, I'm working as a senior "engineer" and with that comes a heavy emphasis on project management.
Sure, you're expected to know how to solve interesting technical challenges, but that's more of a nice-to-have. It's nowhere near important as being able to make a project look successful despite the fact that the middle management convinced the senior "leadership" to do that project out of sheer ambition and without bringing on board the people who actually talk to the users, so now you're stuck without clear requirements, without a clear way to measure success, and with accumulating tech debt gumming up the works while your boss works with various "stakeholders" to "pivot" over and over so he doesn't have to go to the senior leadership to explain why we're delaying launch again.
And what I'm describing is one of the best places I've ever worked at across more than 25 years of my professional career. Hell, I'm lucky that senior "engineer" is what they call a "terminal" position here, i.e. I'm allowed to settle in it without having to work towards a promotion. From what I've been told, there are places where you have to get to be a staff engineer or they'll eventually let you go.
I don't know about anyone else, but I find the whole situation fucking insane.
What if we built things that are meant to last? Would the world be better for it?
We only recently figured out how to reproduce Roman concrete.
We’d have more but a lot were blown up during WWII.
The difference is that they didn't have rebar. And so they built gravity stable structures. Heavy and costly as fuck.
A modern steel and concrete structure is much lighter and much cheaper to produce.
It does mean a modern structure doesn't last as long, but also the Roman stuff we see is what survived the test of time, not what crumbled.
Roman concrete is special because it is much more self-healing than modern concrete, and thus more durable.
However, that comes at the cost of being much weaker, setting much slower, and requiring rare ingredients. Roman concrete also doesn’t play nice with steel reinforcement.
https://en.wikipedia.org/wiki/Roman_concrete
Modern concrete is more uniform in mix, and thus it doesn't leave uncured portions.
You'd have a better bridge, at the expense of other things, like hospitals or roads. If people choose good-enough bridges, that shows there is something else they value more.
Don't we end up just spending the same? Just now we're left with a crappy bridge.
Obviously, there's a way to do both poorly too. We can make expensive things that don't last. I think a large chunk of gripes about things that don't last are really about (1) not getting the upside of the tradeoff, cheaper (in both senses) more flexible solutions, and (2) just getting bad quality period.
On the whole it is an entirely reasonable optimisation problem: what is the best lifespan of a single bridge over the desired total lifespan?
That can't be right? What about safety factors
Engineers (real ones, not software) face consequences when their work falls apart prematurely. Doubly so when it kills someone. They lose their job, their license, and they can never work in the field again.
That's why it's rare for buildings to collapse. But software collapsing is just another Monday. At best the software firm will get fined when they kill someone, but the ICs will never be held responsible.
The moment you remove one of these factors, the free market becomes dangerous for the people living in it.
Without a safety factor, that uncertainty means that some of the time, some of your bridges will fall down.
Good engineering is building the strongest bridge within budget and time.
If you build a bridge that is rated to carry 100k lbs of weight, and you build it to hold 100k lbs, you didn't build it to barely meet spec -- you under built it -- because overloading is a known condition that does happen to bridges.
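The margin described above is just a multiplier applied to the rated load; a minimal sketch (the 1.5 factor here is illustrative only, not a value from any real design code):

```python
def required_capacity(rated_load_lbs: float, safety_factor: float = 1.5) -> float:
    """Return the load the structure must actually survive.

    A bridge rated for 100k lbs is designed so its true failure point
    sits well above the rating, because overloading is a known,
    expected condition. The 1.5 factor is illustrative only.
    """
    if safety_factor <= 1.0:
        raise ValueError("a factor <= 1 means building to barely meet spec")
    return rated_load_lbs * safety_factor

print(required_capacity(100_000))  # 150000.0
```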
The economic incentives on the internet by and large favor the production of slop. A significant proportion of the text-based web was content-farmed even before LLMs - and with the advent of LLMs, you now have slop-results for almost every search query imaginable, including some incredibly niche topics. We've seen the same trend with video: even before gen AI, online video consumption devolved toward carefully-engineered, staged short-form bait (TikTok, YT Shorts, etc). In the same vein, the bulk of the world's email traffic is phishing and spam.
None of this removed the incentive to produce high-quality websites, authentic and in-depth videos, and so on. But in practice, it made such content rare and made it harder for high-quality products to thrive. So yeah, I'm pretty sure that good software will survive in the LLM era. But I'm also absolutely certain that most app stores will be overrun by slop, most games on Steam will be slop, etc.
The pattern was always "ship fast, fix/document later", but when "later" comes, it's "don't touch what is working".
To date nothing has changed, and I bet it won't in the future either.
… but lately, the rate at which some dev with an LLM can just churn out new bad code has just shot through the roof. I can still be struggling to pick apart the last piece of slop, trying to figure out "okay, if someone with a brain had written this, what would the inputs & outputs be?" and "what is it that production actually needs and relies on, and what causes problems, and how can we get the code from point A to point B without more outages"; but in the meantime, someone has spit out 8 more modules of the same "quality".
So sure, the basic tenets haven't changed, but these days I feel like I'm drowning in outages & bugs.
Competition is essentially dead for that segment given there is always outward growth.
With that being said, AI enables smaller players to implement their visions with enough completeness to be viable. And with a hands off approach to code, the underlying technology mindshare does not matter as much.
... I'll see myself out
And property testing is going to be an important way to validate.
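A hand-rolled sketch of the idea using only the standard library (real tools like Hypothesis add generation strategies and shrinking); `normalize_whitespace` is a made-up example target, not from the thread:

```python
import random

def normalize_whitespace(s: str) -> str:
    """Example function under test: collapse runs of whitespace."""
    return " ".join(s.split())

def check_properties(trials: int = 500) -> None:
    rng = random.Random(0)  # fixed seed so failures are reproducible
    for _ in range(trials):
        # Generate random mixes of letters and spaces.
        s = "".join(rng.choice("ab ") for _ in range(rng.randint(0, 40)))
        out = normalize_whitespace(s)
        # Idempotence: normalizing twice is the same as once.
        assert normalize_whitespace(out) == out
        # No doubled spaces survive.
        assert "  " not in out
        # The words themselves are preserved, in order.
        assert out.split() == s.split()

check_properties()
```

Instead of asserting one hand-picked input/output pair, you assert invariants that must hold for *any* input, and let the generator hunt for counterexamples.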
The slop problem isn't AI, it's people who can't tell the difference between good and bad output because they never developed the craft in the first place. AI just makes that gap more visible.
So probably the vast majority of people that program....
Forgive my cynical take, but we're currently experiencing a market that doesn't appear to be rewarding anything specific in the long-term, as huge sums of money are traded on a minute-to-minute, day-to-day, and week-to-week basis. There's an explosion of uncertainty in today's markets and a complete collapse of long-range planning echoing at many levels in society--particularly at the highest levels of governments. So I kind of don't want to hear about what markets are going to reward.
But what exactly is "good code" (presumably the opposite of slop)?
I'd say that good code is terse, robust, suits its function, yet admits just the right amount of testability, performance, and customizability for the future. Good code anticipates change well. That means that if it has one job, it does that one job well, and not twenty others. If the job is going to stay the same, the code is going to stay the same. Good systems are made from well-factored pieces of code that have proper jobs and do their proper jobs without taking on jobs they shouldn't.
I for one think that AI code is going to reflect its training. If it's trained on just a random selection of everything out there, it's probably going to be pretty mediocre and full of bugs.
So yeah, good code might win among a small group of principled people, but the majority will not care. And more importantly, management won't care. And as long as management doesn't care, you have two choices: "embrace" slop, or risk staying jobless in a tough market.
Edit: Also, good code = expensive code. In an economy where people struggle to afford a living, nobody is going to pay for good code when they can get "good enough" code for $200 a month with Claude.
Software, on the other hand, can be free. Even before LLMs I would argue the best code was found in FOSS projects.
Nobody is going to use sloppy, buggy software if a handcrafted, well-engineered alternative exists and is free.
In the case of software, the group of people who have principles might be the ones funding FOSS projects, and the software itself would then be enjoyed by all. This is more or less what's already happening today.
Dear sir, I think you may have got the entire software market wrong as it already stands.
So what you're saying is "someone" can make a living doing it.
What you're not saying is "you" can make a living doing it.
It might be fine if it's your HR software that stops approving holiday requests. But if your checkout breaks and there's no human who can pick apart the mess, you lose your entire income for a week, and that might be the end of the business.
And no, they’re not flat subscriptions. Use more tokens and your quota is gone faster.
(Sure, there were good outsourcing shops, but you didn't tend to save too much with them, since they knew they were good and charged appropriately.)
"Slop" ai-generated code is the same tradeoff as cheap outsourcing shops. You move quicker and cheaper now, but there will come a day when code quality will dip low enough that it will be difficult enough to make new changes that a refocus on quality becomes not just worthwhile, but financially required as well.
(And you may argue that you're using ai-generated code, but are maintaining a high code quality, and so for you this day will never come and you might be right! But you're the "good outsourcing shop", and you're not "saving" nearly as much time or money as those just sloppin' it up these days, so you're not really the issue, I'd argue.)
I can promise you outsourcing of coding is still huge.
This said, companies have changed it up a bit: instead of hiring an outsourcing shop, they'll set up their own branch in LCOL countries.
India, Portugal, a few different countries in eastern Europe are all rather large software producing countries for US companies.
---------
This doesn't necessarily follow. Yes, there might be economic pressure for AI to produce "good" code, but that doesn't necessarily mean efforts to make it so will succeed. LLMs might never become proficient at producing "good" code, for the same reasons that LLMs perform poorly when trained on their own output. A heuristic prediction of what "good" code looks like for a given solution is likely always going to be less "good" than code produced by skilled and deliberate human design.
Just as there is a place for fast and dirty human code, there will be a place for slop code. Likely the same sort of place. However, we may still need humans to produce "good" code that AI can be trained on as well as for solutions that actually need to be "good". AI might not be able to do that for us anytime soon, no matter what the economic imperatives are.
Everything fundamental that makes good code easier for humans to maintain also makes it easier for LLMs to maintain. Full stop.
It's at its best for functional programming.
If you don't know that style, then JS will be very unpleasant to use.
I'm using AI for coding just like everybody else, more or less exclusively for the past few months. It's sometimes frustrating to get things done the right way, but mostly I get the job done. I've been coding since the nineties, so I know how to do things right and what doing it wrong looks like. If I catch my AI coding tools doing it wrong, I tell them to fix it, then adjust skills and guardrails to prevent them going off the rails.
AI tools actually seem to self correct when used in a nice code base. If there are tests, they'll just write more tests without needing to be prompted. If there is documentation, that gets updated along with the code. When you start with a vibe coded mess it can escalate quickly unless you make it clean up the mess. Sometimes the tests it adds are a bit meh and you have to tell it off by "add some tests for the non happy path cases, make sure to cover all possible exceptions, etc.". You can actually ask for a critical code review and then tell it "fix all of that". Sometimes it's as simple as that.
1. IME AI tends to produce good code "in the small." That is, within a function or a file, I've encountered very little sloppy code from AI. Design and architecture are (still) where it quickly tends to go off the rails and needs a heavy hand. However, the bulk of the actual code will tend to be higher quality.
2. Code is now very cheap. And more tests actually lead to better output from AI. There is now very little excuse to avoid extensive refactoring to do things "the right way." Especially since there will be a strong incentive to have clean code, because as TFA indicates...
3. Complex, messy code will directly increase token costs. Not just in grokking the codebase, but in the tokens wasted on failed attempts rooted in over-complicated code. Finally, tech debt has a concrete $$$ amount. What can get measured can get fixed, and nothing is easier to measure (or convince execs about!) than $$$.
Right now tokens are extremely cheap because they're heavily subsidized, but when token costs inevitably start ramping up, slop will automatically become less economically viable.
Put simply LLMs perform better on better code.
And just to be clear: AI continues to progress. There are already rumors about the next Anthropic model coming out, and we are now in the phase of the biggest centralized reinforcement loop that has ever existed: everyone using AI for writing and giving it feedback.
We are, thanks to LLMs, now able to codify humans, and while it's not clear how fast this is happening, I no longer believe that my skills are unique.
A small hobby application cost me $11 over the weekend and took me 3h to 'build', while I would probably have needed 2-3 days for it.
And we are still limited by resources and normal human progress. Claude teams are still experimental, and things like gastown or orchestrator architectures/structures are not that established and consume quite a lot of tokens.
We have not even had time yet to build optimized models. Claude Code still understands A LOT of languages (human languages and programming languages).
I don't think anyone really cares about code quality. I do, but I'm a software engineer. No one around me does. Business doesn't. Even fellow co-workers don't care about, or don't understand, good code.
Even glaring things like the GTA 5 Online (or was it RDR2?) startup code went unexamined for ages: there was some algorithmic complexity in loading a config file that made startup take forever, until someone outside Rockstar found it and Rockstar fixed it.
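Per the outside write-up of the GTA Online fix, the cause was quadratic work hiding in an innocent-looking parse loop (repeated full scans of the remaining buffer, like `sscanf` re-running `strlen` on every call). A sketch of the *shape* of that bug, not Rockstar's actual code:

```python
def count_items_quadratic(buf: str) -> int:
    """Each iteration re-copies the remaining buffer:
    O(n) work per item, O(n^2) total over the whole input."""
    count = 0
    rest = buf
    while rest:
        _head, _, rest = rest.partition(",")  # copies the whole tail
        count += 1
    return count

def count_items_linear(buf: str) -> int:
    """One pass over the buffer: O(n) total."""
    return buf.count(",") + 1 if buf else 0

# Same answer, wildly different cost on a multi-megabyte input.
assert count_items_quadratic("a,b,c") == count_items_linear("a,b,c") == 3
```

On a toy string both are instant, which is exactly why the bug survived so long: the quadratic blowup only shows up once the config file grows to megabytes.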
We also have plenty of code were it doesn't matter as long as it works. Offline apps, scripts, research scripts etc.
When you're making (crafting) software, if the line count is going up for equivalent functionality, it means you're cooking up bullshit.
The whole premise of the software arts (engineering) is that you do MORE with LESS.
Engineering is not science, and neither is art: creativity is needed, and rules of thumb are to be followed.
Economic forces are completely irrelevant to the code quality of AI.
> I believe that economic incentives will start to take effect and AI models will be forced to generate good code to stay competitive amongst software developers and companies
Wherever AI succeeds, it will be because a dev is spending time on a process that requires a lot of babysitting. That time is about the same as writing it by hand. Language models reduce the need to manually type something because that's what they are designed to do, but it doesn't mean faster or better code.
AI is rubber duck that can talk back. It's also a natural language search tool. It's training wheels for devs to learn how to plan better and write half-decent code. What we have is an accessibility tool being sold as anything and everything else because investors completely misunderstand how software development works and are still in denial about it.
Code quality starts and ends with business needs being met, not technical capability. There is no way to provide that to AI as "context" or automate it away. AI is the wrong tool when those needs can be met by ideas already familiar to an experienced developer. They can write that stuff in their sleep (or while sitting in the meetings) and quickly move on.
The slop we're seeing today comes primarily from the fact that LLMs are writing code with tools meant for human users.
Microslop is the future.
A certain big PaaS I won't name here has had lots of clusterfucks in the last 3 months. The CEO is extremely bought into AI and "code not mattering anymore". He's also constantly talking about the meteoric growth because Claude and other AI providers suggest railway by default.
The toll has come to collect and now a lot of real production users are looking at alternatives.
The reality is the market is rewarding slop and "velocity now". There will come a time where it will reward quality again.
The difference is that, while tooling and process have dramatically improved over the years, SDEs have not improved much; junior engineers still make the same mistakes. The assumption (not yet proven, but the whole bubble is based on it) is that models will continue to improve, eventually leaving behind human SDEs (and other domain people: lawyers, doctors, etc.). If this happens, these arguments I keep seeing on HN about AI slop will all be moot.
Assuming AI continues to improve, the cost and speed of software development will dramatically drop. I saw a comment yesterday that predicted that AI will just plateau and everyone will go back to vim and Makefiles (paraphrasing).
Maybe, I don't know, but all these people saying AI is slop, rah-rah humans, is just wishful thinking. Let's admit it: we don't know how it will play out. There are people like Dario and Sam who naturally are cheerleading for AI, then there's the HN collective who hate every new release of macOS and every AI model, just on principle! I understand the fear; anyone who's ever read Flora Thompson's Lark Rise to Candleford will see the parallels. Things are changing; AI is the plough, the railway, the transistor...
I'm tired of the debate. My experience is that AI (Gemini for me) is awesome. We all have gaps in our knowledge/skills (but not Gemini): AI helps hardcore backend engineers throw together a Gradio demo in minutes to make their point, helps junior devs review their code before making a PR, helps Product put together presentations. I could go on and on; those that don't see value in AI are doing it wrong.
As Taylor Swift said "It's me, hi, I'm the problem, it's me" - take that to heart and learn to leverage the tools, stop whining please, it's embarrassing to the whole software industry.
Is that still the future or nah?
https://news.ycombinator.com/newsguidelines.html