Users don't care about your tech stack
https://www.empathetic.dev/users-dont-care-about-your-tech-stack

But that's not how the argument is used in practice. In practice this argument is used to justify bloated apps, bad engineering, and corner-cutting. When people say “users don’t care about your tech stack,” what they really mean is that product quality doesn’t matter.
Yesterday File Pilot (no affiliation) hit the HN front page. File Pilot is written from scratch and has a ton of functionality packed into a 1.8mb download. As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust.)
Users don't care what language or libraries you use. Users care only about functionality, right? But guess what? These two things are not independent. If you want to make something that starts instantly you can't use Electron or Java. You can't use bloated libraries. Because users do notice. All else equal, users will absolutely choose the zippiest products.
This isn't true. It took me two seconds to create a new project, run `cargo build` followed by `ls -hl ./target/debug/helloworld`. That tells me it's 438K, not 3.7MB.
Also, this is a debug build, one that contains debug symbols to help with debugging. Release builds would be configured to strip them, and a release binary of hello world clocks in at 343K. And for people who want even smaller binaries, they can follow the instructions at https://github.com/johnthagen/min-sized-rust.
Older Rust versions used to include more debug symbols in the build, but they're now stripped out by default.
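For reference, the min-sized-rust recipe linked above mostly boils down to a few release-profile settings (a sketch; exact savings vary by toolchain and target):

```toml
# Cargo.toml: size-focused release profile, per min-sized-rust
[profile.release]
strip = true        # strip symbols from the binary
opt-level = "z"     # optimize for size rather than speed
lto = true          # link-time optimization across crates
codegen-units = 1   # fewer codegen units, better optimization
panic = "abort"     # drop the stack-unwinding machinery
```

A plain `cargo build --release` then picks these up; the repo also covers more aggressive steps like rebuilding the standard library.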
No, it means that product quality is all that matters. The users don't care how you make it work, only that it works how they want it to.
I have written performant, high-quality products in weird tech stacks where performance can be a bit tricky to get: Ruby, PL/PgSQL, Perl, etc. But it was done by a team who cared a lot about technology and their tech stack. Otherwise it would not have been possible.
There are developers who care about product and not about tech. They build things that just barely work.
There are developers who care about both. They build the stuff people remember.
> What truly makes a difference for users is your attention to the product and their needs.
> Learn to distinguish between tech choices that are interesting to you and those that are genuinely valuable for your product and your users.
Then you need to read more, because that's what it means. The tech stack doesn't matter. Only the quality of the product. That quality is defined by the user. Not you. Not your opinion. Not your belief. But the user of the product.
> which hurt the user.
This will self-correct.
Horrible tech choices have led to world-class products that people love and cherish. The perfect tech choices have led to things people laugh at, and serve as a reminder that the tech stack doesn't matter and, in fact, may be a red flag.
"It's a basic tool that sits hidden in my tray 99.9% of the time and it should not use 500MB of memory when it's not doing anything" is part of product quality.
You can have a super great product that makes a ton of money right now, but with such poor build quality that you become too calcified to improve it in a reasonable amount of time.
This is why startups can sometimes outcompete incumbents.
Suddenly there's a market shift, and a startup can build your entire product plus the new competitive edge in less time than it takes you to add just the new competitive edge, because your code and architecture have atrophied to the point where updating them takes longer than rebuilding from scratch.
Maybe this isn't as common as I think, I don't know. But I am pretty sure it does happen.
While it's true that that can be partially due to tech debt, there are generally other factors as well. The more years you've had to accrue customers in various domains, the more years of decisions you have to maintain backwards compatibility with, the more regulatory regimes you conduct business under and build process around, the slower you're going to move compared to someone trying to move fast and break things.
But it says that in such a roundabout way that non-technical people use it as an argument for MBAs to dictate technical decisions in the name of moving fast and breaking things.
I don't know what technology was used to build the audio mixer that I got from Temu. I do know that it's a massive pile of garbage because I can hear it when I plug it in. The tech stack IS the product quality.
Agree the article is not clearly presented but it's crazy to see the gigantic threads here that seem to be based on a misunderstanding.
I think when you get more concrete about what the statement is talking about, it becomes very hard to assert that they mean something else.
Like if you are skilled with, say, Ruby on Rails, you probably should just use that for your v1.0. The hypothetical better stack is often just a myth we tell ourselves as software engineers because we like to think that tech is everything when it's the product + launching that's everything.
(Yes, for real. I've once witnessed this being said out loud and used to justify specific tech stack choice.)
While the difference is huge in your example, it doesn't sound too bad at first glance, because that hello world just includes some Rust standard libraries, so it's a bit bigger, right? But I remember a post here on HN about some fancy "terminal emulator" with GPU acceleration written in Rust. Its binary size was over 100MB ... for a terminal emulator which didn't pass vttest and couldn't even do half of the things xterm could. Meanwhile xterm takes about 12MB including all its dependencies, which are shared by many programs. The xterm binary itself is just about 850kB of these 12MB. That is where binary size starts to hurt, especially if you have multiple such insanely bloated programs installed on your system.
> If you want to make something that starts instantly you can't use Electron or Java.
Of course you can make something that starts instantly and is written in Java. That's why AOT compilation for Java is a thing now, with SubstrateVM (aka "GraalVM native-image"), precisely to eliminate startup overhead.
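As a sketch, assuming GraalVM with its native-image tool installed (HelloWorld is a stand-in class name):

```sh
# AOT-compile a Java class into a standalone native executable
javac HelloWorld.java
native-image HelloWorld
# the resulting binary starts in milliseconds, no JVM warm-up
./helloworld
```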
Speaking of motte-and-bailey. But I actually disagree with the article's "what should you focus on". If you're a public-facing product, your focus should be on making something the user wants to use, and WILL use. And if your tech stack takes 30 seconds to boot up, that's probably not the case. However, if you spend much of your time trying to eke out an extra millisecond of performance, that's also not focusing on the right thing (disclaimer: obviously if you already have a successful, proven product/app, performance gains are a good focus).
It's all about balance. Of course on HN people are going to debate microsecond optimizations, and this is the perfect place to do so. But every so often, a post like this pops up as semi-rage bait, but mostly to reset some thinking. This post is simplistic, but that's what gets attention.
I think gaming is a good example that illustrates a lot of this. The purpose of games is to appeal to others, and to actually get played. And there are SO many examples of very popular games built on slow, non-performant technologies because that's what the developer knew or could pick up easily. Somewhere else in this thread there is a mention of Minecraft. There are also games like Undertale, or even the most popular game last year, Balatro. Those devs didn't build the games focusing on "performance", they made them focusing on playability.
Debug symbols aren't cheap. A release build with a minimal configuration (linked below) gets that down to 263kb.
https://stackoverflow.com/questions/29008127/why-are-rust-ex...
It does seem weird to complain about the file size of a debug build not a release build.
This technical requirement is only on the spec sheet created by HN goers. Nobody else cares. Don't take tech specs from your competitors, but do pay attention. The user is always enchanted by a good experience, and they will never even perceive what's underneath. You'd need a competitor to get in their ear about how it's using Electron. Everyone has a motive here, don't get it twisted.
Semi-dependent.
Default Java libraries being piles upon piles of abstractions… those were, and for all I know still are, a performance hit.
But that didn't stop Mojang, amongst others. Java can be written "fast" if you ignore all the stuff the standard library is set up for and think in low-level C-like manipulation of int arrays (not Integer the class, int the primitive type) rather than AbstractFactoryBean etc. And you keep going from there, with that attitude, because there's no "silver bullet" to software quality, no "one weird trick is all you need": software in (almost!) any language can be fast if you focus on that and refuse to accept solutions that are merely "ok". After all, we had DOOM running in real time with software rendering in the 90s on things less powerful than the microcontroller in your USB-C power supply[0] or HDMI dongle[1].
[0] http://www.righto.com/2015/11/macbook-charger-teardown-surpr...
[1] https://www.tomshardware.com/video-games/doom-runs-on-an-app...
Of course, these days you can't run applets in a browser plugin (except via a JavaScript abstraction layer :P), but a similar thing is true with the JavaScript language, though here the trick is to ignore all the de-facto "standard" JS libraries like jQuery or React and limit yourself to the basics, hence the joke-not-joke: https://vanilla-js.com
Stop them from... making one of the most notoriously slow and bloated video games ever? Like, just look at the number of results for "Minecraft" "slow"
My niece was playing it just fine on a decade-old Mac mini. I've played it a bit on a Raspberry Pi.
The sales figures suggest my niece's experience is fairly typical, and things such as you quote are more like the typical noise that accompanies all things done in public: people complaining about performance is something every video game gets. Sometimes the performance even becomes the butt of a comic strip's joke.
If Java automatically caused low performance, neither my niece's old desktop nor my Pi would have been able to run it.
If you get me ranting on the topic: Why does BG3 take longer to load a saved game file than my 1997 Performa 5200 took to cold boot? Likewise Civ VI, on large maps?
I actually tried it on a Pi literally a few days ago (came across an older Pi which had it preinstalled) and it's pretty much unplayable.
> Why does BG3 take longer to load a saved game file than my 1997 Performa 5200 took to cold boot? Likewise Civ VI, on large maps?
I believe there should be the possibility for legal customer complaints when this happens, just like I can file a complaint if I buy a microwave and it takes 2 minutes to start.
Only a small subset of users actually do this, because there are many other reasons that people choose software. There are plenty of examples of bloated software that is successful, because those pieces of software deliver value other than being quick.
Vanishingly few people are going to choose a 1mb piece of software that loads in 10ms over a 100mb piece of software that loads in 500ms, because they won't notice the difference.
Hmmm, VS Code starts instantly on my M1 Mac
Slack's success suggests you're wrong about bloat being an issue. Same with almost every mobile app.
The iOS YouTube app is 300meg. You could reproduce the functionality in 1meg. The TikTok app is 597meg. Instagram 370meg. X app 385meg, Slack app 426meg, T-Life (no idea what it is but it's #3 on the App Store) 600meg.
Users don't care about bloat.
The reason why people aren't sweating 200mb is because everything has gotten to be that big. Change that number to 2 terabytes.
And guess what? In 5 years' time, someone will say "Nobody in the West is sweating a 2 TB download" because it keeps increasing.
As a "user", it is not only "zippiest" that matters to me. Size matters, too. And, in some cases, the two are related. (Rust? Try compiling on an underpowered computer.^1)
"If you want to make something that starts instantly you can't use Electron or Java."
Nor Python.
1. I do this every day with C.
I do think people nowadays over-index on iteration/shipping speed over quality. It's an escape. And it shows, when you "ship".
18mb vs 180mb is probably the difference between an instant download and ~30 seconds. 1.8gb is gonna make someone stop and think about what they’re doing.
But 18mb vs 9mb is not significant in most cases.
Yes, yes it is. But they were going to do it anyway. Even if people were to stop accepting this argument, they'll just start using another one.
Startup culture is never going to stop being startup culture and complacent corporations are never going to stop being complacent.
As the famous adage goes: If you want it done right, you gotta do it yourself.
> File Pilot is written from scratch and it has a ton of functionality packed in a 1.8mb download.
File Pilot is... seemingly a fully-featured GUI file explorer in 1.8mb, complete with animations?
Dude. What.
But does it matter? I think the only metric worth optimising for is latency. Other stuff is something we do.
It's an incredibly effective argument to shut down people pushing for the new shiny thing just because they want to try it.
Some people are gullible enough to read some vague promises on the homepage of a new programming language or library or database and they'll start pushing to rewrite major components using the new shiny thing.
Case in point: I've worked at two very successful companies (one of them reached unicorn-level valuation) that were fundamentally built using PHP. Yeah, that thing that people claim has been dead for the last 15 years. It's alive, kicking and screaming. And it works beautifully.
> If you want to make something that starts instantly you can't use Electron or Java.
You picked the two technologies that are the worst examples for this.
Electron: Electron has breathed new life into GUI development, which essentially nobody was doing anymore.
Java: modern Java is crazy fast nowadays, and on a decent computer your code gets to the entry point (main) in less than a second. Whatever slows it down is a codebase problem, not the JVM.
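This is easy to check for yourself, e.g. with a benchmarking timer like hyperfine (app.jar is a placeholder, not a real artifact):

```sh
# rough measurement of JVM startup plus time-to-main
hyperfine --warmup 3 'java -jar app.jar --help'
```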
Most users do not care at all.
If someone is sitting down for an hour-long gaming session with their friends, it doesn't matter if Discord takes 1 second or 20 seconds to launch.
If someone is sitting down to do hours of work for the day, it doesn't matter if their JetBrains IDE or Photoshop or Solidworks launches instantly or takes 30 seconds. It's an entirely negligible amount.
What they do care about is that the app works, gives them the features they want, and gets the job done.
We shouldn't carelessly let startup times grow and binaries become bloated for no reason, but it's also not a good idea to avoid helpful libraries and productivity-enhancing frameworks to optimize for startup time and binary size. Those are two dimensions of the product that matter the least.
> All else equal users will absolutely choose the zippiest products.
"All else equal" is doing a lot of work there. In real world situations, the products with more features and functionality tend to be a little heavier and maybe a little slower.
Dealing with a couple seconds of app startup time is nothing in the grand scheme of people's work. Entirely negligible. It makes sense to prioritize features and functionality over hyper-optimizing a couple seconds out of a person's day.
> As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb.
Okay. Comparing a debug build to a released app is a blatantly dishonest argument tactic.
I have multiple deployed Rust services with binary sizes in the 1-2MB range. I do not care at all how large a "Hello World" app is because I'm not picking Rust to write Hello World apps.
Nice to have, not a must.
Nobody is making the argument that users care about your tech stack. I've literally never heard a dev justify using a library because "users care about it". Nobody.
When developers are having those discussions, are they ever doing so in relation to some hypothetical user caring? This feels like a giant misdirection strawman.
When I discuss technology and framework choices with other developers, the context is the experience for developers. And of course business considerations come into play as well: Can we find lots of talent experienced in this set of technologies? Is it going to efficiently scale in a cost effective manner? Are members of the team going to feel rewarded mastering this stack, gaining marketable skills? And so on.