I'm making some big assumptions about Adobe's product ideation process, but: This seems like the "right" way to approach developing AI products: Find a user need that can't easily be solved with traditional methods and algorithms, decide that AI is appropriate for that thing, and then build an AI system to solve it.

Rather than what many BigTech companies are currently doing: "Wall Street says we need to 'Use AI Somehow'. Let's invest in AI and Find Things To Do with AI. Later, we'll worry about somehow matching these things with user needs."

My read is that they're getting the same push from Wall Street and the same investor-hype-driven product leadership as every other tech firm, but this time they have the good fortune to specialize in one of the few verticals (image editing) where generative AI currently has superhuman performance.

This is a testable claim: where was Adobe in previous hype cycles? Googling "Adobe Blockchain"... it looks like they were all about blockchains in 2018 [0], then NFTs and "more sustainable blockchains" in 2022 [1].

[0] https://blog.adobe.com/en/publish/2018/09/27/blockchain-and-...

[1] https://www.ledgerinsights.com/adobe-moves-to-sustainable-bl...

The article says clearly there's no guarantee this feature will be released.

Which I'm reading as "Demo-ready, but far from production-ready."

Somewhat relevant: my experience with Photoshop's Generative Fill has been underwhelming. Sometimes it's wrong, often it's comically wrong. I haven't had many easy wins with it.

IMO this is a company that doodles with code for its own entertainment, not a company that innovates robust and highly useful production-ready features for the benefit of users.

So we'll see if Mr Spinny Dragon makes it to production, and is as useful as billed in the demo.

You don't need to release to production to get real value. I'm under intense pressure to scope out frothy AI features, because just discussing them with prospects has a material impact on the costs of the sales funnel.
> just discussing them with prospects [...] sales funnel

I'll admit I have no idea what % of Adobe licensees/subscribers are individuals and small visual/graphic design firms (who choose Adobe for personal reasons) compared to larger companies (news agencies, web-design body-shops, etc.) where employees use the tools given to them despite any personal preference for rivals like Procreate - and the rest: students, hobbyist photographers, etc.

...but none of the aforementioned market segments seem like they'd make "AI" (whatever that means) any part of their purchasing decision. Buzzwords only help sales when the audience is ignorant and/or impressionable; when your audience is well-informed, seasoned (and cynical) professionals, buzzwords have the opposite effect and damage a company's credibility.

...so I'm not sure who, exactly, Adobe is trying to message with their press copy for Adobe Firefly (their "generative AI for business" product); perhaps it's just a charade meant only for their shareholders? I'm glad they aren't copying Microsoft by shoving AI branding where it really doesn't belong and compromising the user experience (...at least not to the same extent).

Execs love genai & execs make purchasing decisions.
This sounds closer to fake value.
I disagree with your analysis. I think this is a novel use of AI in a commercial art product. Is there any AI feature that Adobe could release that you would not view as "pushed from Wall Street"?
I think you're being a bit too generous with Adobe here :-). I shared this before, but it's worth resharing [1]. It covers the experience of a professional artist using Adobe tools.

The gist is that once a company has a captive audience with no alternatives, investors come first. Flashy (no pun intended :-p), cool features to impress investors become more important than the everyday user experience—and this feature does look super cool!

--

1: https://www.youtube.com/watch?v=lthVYUB8JLs

I don’t think those ideas are mutually exclusive. I heavily dislike Adobe and think they’re a rotten company with predatory practices. I also think “AI art” can be harmful to artists and more often than not produces uninteresting flawed garbage at an unacceptable energy cost.

Still, when I first heard of Adobe Firefly, my initial reaction was “smart business move, by exclusively using images they have the rights to”. Now seeing Turntable my reaction is “interesting tool which could be truly useful to many illustrators”.

Adobe can be a bad and opportunistic company in general but still do genuinely interesting things. As much as they deserve the criticism, the way in which they’re using AI does seem to be thought out and meant to address real user needs while minimising harm to artists.¹ I see Apple’s approach with Apple Intelligence a bit in the same vein, starting with the user experience and working backwards to the technology, as it should be.²

Worth noting that I fortunately have distanced myself from Adobe for many years now, so my view may be outdated.

¹ Which I don’t believe for a second is out of the goodness of their hearts, it just makes business sense.

² However, in that case the results seem to be subpar and I don’t think I’d use it even if I could.

    > I also think “AI art” can be harmful to artists and more often than not produces uninteresting flawed garbage at an unacceptable energy cost.
What do you think about Midjourney? The (2D) results are pretty incredible.
That is where opinions actually diverge between the pro-AI and anti-AI clusters - the images look gorgeous and indistinguishable from human work if you aren't trained on tons of images, or extremely disturbing and obvious if you are. It's like how CGI and special effects from the past would look terrible today.

The big genAI flamewar actually has very little to do with copyright or would-be-lost jobs. It's mostly about quality and the emotions encoded in the images (deep rage). Lots of tech-inclined people miss this point.

Whether or not they avail of it, Adobe have the ability to access feedback and iterate on it for a lot of core design markets. I have a similar view to yours, but there is a segment of the AI community who feel that they are disrupting Adobe as much as other companies. In most cases, these companies have the domain experience which will enable AI, and it won't work the other way around.

All of this is orthogonal to Adobe's business practices. You should expect them to operate the way they do given their market share and the limited number of alternatives. I personally have almost completely moved to Affinity products, but I expect Adobe to be better placed to execute on products, with Affinity playing catch-up to some extent.

[flagged]
What’s the goal of your comment? You’re making a straw man argument which in no way relates to my point and ridicules the opinions of people not on this thread. That makes for uninteresting and needlessly divisive conversation.

The HN guidelines rightfully urge us to make substantive comments that advance the discussion and avoid shallow dismissals.

https://news.ycombinator.com/newsguidelines.html

I think they are actually agreeing with you. Just, in a somewhat unpleasant and sarcastic manner. They aren’t strawmanning your argument, right? They are strawmanning the argument against it.
> They aren’t strawmanning your argument, right? They are strawmanning the argument against it.

Yes, that’s the impression I got out of it too. I disapprove either way. I’d sooner defend a good argument against my point than a bad argument in favour of it.

I come to HN for reasoned, thoughtful, curious discussion.

I think what has happened (and I’ve been hit by this in the past, it is very annoying) is: You included the bit in the beginning about being generally skeptical of AI art in some forms to signal that you are somebody with a nuanced opinion, who believes that the thing can be bad at times. Then, you go on to describe that this isn’t one of those times.

Unfortunately, this gets you some comments that want to disagree with that less specific, initial aside. I’m not sure if people just read the first paragraph and respond entirely based on that, without realizing that it is not the main point of the rest of the post. Or if they just don’t want to give up the ground that you did in the beginning, at all, so they knowingly ignore the rest of the post.

I don’t really know what to do about this sort of thing. It seems like… generally nice to be able to start a post with something that says basically: look I’ve thought about this and it isn’t an uninformed reflexive take. But I’m trying to give up on that sort of thing. It isn’t really logically part of the argument, and it ends with people arguing in a direction that I’m not really interested in defending against in this context.

But it does seem a shame, because, while it isn’t logically part of the argument, it is nice to know beforehand how firm somebody’s stance is.

I think the GP's behavior comes from a weird assumption among Internet troll-y people: that the strong negativity shown by online drawing communities wrt AI _literally_ has nothing to do with the output quality of the generated images.

This is clearly incorrect to some, but not to others. Because the point is unclear to them, those people assume the commonly observed strong negativity is a generalized response to all shapes and forms of new technology, rather than a specific emotional reaction to the current generation of still somewhat Lovecraftian generative AI outputs.

A bit like a non-vision super-LLM trying to characterize anti-genAI sentiment and creating a "techno-luddite artist" persona: there's a cross-modal component it can't capture, so the persona falls flat.

That's three comments so far (now four) discussing whether the comment in question adequately adds to the discussion. If you ask me, hyperbole and sarcasm have a place in nearly any exchange of ideas, but maybe I just haven't drunk the right Kool-Aid for this space.

I think another, perhaps more relevant reference could be the replacement of hand-painted cels with computer-generated frames for animation. It replaced one kind of artist with another. Nobody got all that worked up about it, in the long run.

[flagged]
> I think the keyboard can be harmful to scribes

I like this reasoning. If something is new then it must be the future of humanity. People scoffed at Concorde for being “wasteful” and “flawed” but look at the company today

You’re focusing on an irrelevant part of the comment and making a straw man out of it. Your account has very little content so you may be unfamiliar with the HN guidelines, in which case I urge you to refer to them before proceeding.

Discussion should assume good faith and responses should become more substantive, not less, as the conversation goes on.

https://news.ycombinator.com/newsguidelines.html

You can have both!

Cool features that excite users (and that they ultimately end up using), and that get investors excited.

(e.g., Adobe mentioned in the day 1 keynote that Generative Fill, released last year and powered by Adobe Firefly, is not one of the top 5 used features in Photoshop).

The features we make, and how we use gen AI, are based on a lot of discussion and back-and-forth with the community (both public and private).

I guess Adobe could make features that look cool but that no one wants to use, but that doesn't seem to make any sense.

(I work for Adobe)

> is not one of the top 5 used features in Photoshop

I mean, is there any Photoshop feature that’s come to dominate people’s workflows so quickly?

People (e.g. photographers) who use Photoshop “in anger” for professional use-cases, and who already know how to fix a flaw in an image region without generative fill, aren’t necessarily going to adopt it right out of the gate. They’re going to tinker with it a bit, but time-box that tinkering, otherwise sticking with what they can guarantee from experience will get a “satisfactory” result, even if it takes longer and might not have as high a ceiling for how perfectly the image is altered.

And that’d just people who repair flaws in images. Which I’m guessing aren’t even the majority of Photoshop users. Is the clone brush even in the top 5 Photoshop features by usage?

You're super wrong. Pro here, working with this stuff for decades.

There was a brief moment in time when FreeHand was just a better and faster drawing tool than Illustrator (which is what is shown here), but from then on Photoshop, Illustrator & InDesign have pretty much killed all competition out there.

The formats they use are singularly stupid and arcane for legacy reasons, and they are all memory hogs and inefficient in the extreme - but nothing beats that unholy trifecta, and it's use it or die.

Now to get to the point: generative fill is one of the absolute killer features of Photoshop - with 5-10 sec of watching a loader, it does in an instant what could previously take multiple hours.

There are many more gamechangers, and this really looks like another.

That should read "is NOW one of the top 5 used features in Photoshop".
Moreover, when one looks at the chronology with which features were rolled out, all the computationally hard things which would save sufficient time/effort that folks would be willing to pay for them (and which competitors were unlikely to be able to implement) were held back until Adobe rolled out its subscription pricing model --- then and only then did the _really_ good stuff start trickling out, at a pace to ensure that companies kept up their monthly payments.
Is there no alternative to Photoshop? Affinity or Pixelmator don't cut it?
I think Krita is the best I've found so far, though it's not a 1-to-1 comparison.
My company has decided to update its HR page to use AI, for reasons unknown.

So instead of the old workflow:

"visit HR page" → "click link that for whatever reason doesn't give you a permanent link you can bookmark for later"

it's now:

"visit HR page" → "do AI search for the same link which is suggested as the first option" → "wait 10-60 seconds for it to finally return something" → "click link that for whatever reason doesn't give you a permanent link you can bookmark for later"

Nvidia needs to continue selling chips like crazy, all companies in the US need to do their fair share to contribute!...
You joke, but it's literally in the interest of many companies to prop up the S&P 500 et al. by wasting money on M7 products, isn't it?
Bubbles require constant maintenance
Somebody's putting "AI expert" on their resume
Mine has as well, but it's actually pretty useful. It's really just a search engine, though it's indexed Confluence and all our other internal sites, and I've found it helpful for nearly everything.
"click link that for whatever reason doesn't give you a permanent link you can bookmark for later"

Sounds like engagement hacking?

I think this is a weird SAML pattern I've seen before, where e.g. Okta generates a URL like https://somevendor.com/SAML/somesignedbase64payload to do SSO - sort of the inverse of the more common approach where the page you're logging into sends you to the auth provider after seeing your email domain.
This is just to make it an IdP-initiated flow (instead of an SP-initiated flow); it's to avoid the extra hop back and forth between Okta/the IdP and the application.
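For the curious, a rough sketch of the two flows (Python; the endpoints are made up for illustration, and real deployments add XML signatures, RelayState, etc.):

    import base64
    import urllib.parse
    import zlib

    # Hypothetical endpoints, purely for illustration
    IDP_SSO_URL = "https://idp.example.com/sso"
    SP_ACS_URL = "https://somevendor.com/SAML/acs"

    def sp_initiated_redirect(authn_request_xml: str) -> str:
        """SP-initiated: the SP deflates + base64-encodes an AuthnRequest and
        redirects the browser to the IdP (HTTP-Redirect binding); the IdP then
        POSTs a SAMLResponse back to the SP - an extra round trip."""
        raw = zlib.compress(authn_request_xml.encode())[2:-4]  # strip zlib header/checksum -> raw DEFLATE
        payload = base64.b64encode(raw).decode()
        return IDP_SSO_URL + "?SAMLRequest=" + urllib.parse.quote(payload)

    def idp_initiated_post_form(signed_response_xml: str) -> str:
        """IdP-initiated: the IdP mints the signed SAMLResponse up front and
        auto-submits it straight to the SP's assertion consumer service
        (HTTP-POST binding), so the user lands on the app without the
        request leg."""
        payload = base64.b64encode(signed_response_xml.encode()).decode()
        return (f'<form method="post" action="{SP_ACS_URL}">'
                f'<input type="hidden" name="SAMLResponse" value="{payload}"/>'
                f"</form>")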
My assumption would be clumsy session tracking.
This is certainly a great immediately useful tool but also a relatively small ROI, both the return and the investment. Big tech is aiming for a much bigger return on a clearly bigger investment. That’s going to potentially look like a lot of useless stuff in the meantime. Also, if it wasn’t for big tech and big investments, there wouldn’t even be these tools / models at this level of sophistication for others to be using for applications like this one.
While the press lumps it all together as "AI", you have to differentiate LLMs (driven by big tech and big money) from unrelated image/video types of generative models and approaches like diffusion, NeRF, Gaussian splatting, etc, which have their roots in academia.
LLMs don't have their roots in academia?
Not anymore.
Not at all - the Transformer was invented by a bunch of Google employees (while at Google; most have since left), primarily Jakob Uszkoreit and Noam Shazeer. Of course, as with anything, it builds on what had gone before, but it's really quite a novel architecture.
The scientific impact of the transformer paper is large, but in my opinion the novelty is vastly overstated. The primary novelty is adapting the (already existing) dot-product attention mechanism to be multi-headed. And frankly, the single-head -> multi-head evolution wasn't particularly novel -- it's the same trick the computer vision community applied to convolutions 5 years earlier, yielding the widely-adopted grouped convolution. The lasting contribution of the Transformer paper is really just ordering the existing architectural primitives (attention layers, feedforward layers, normalization, residuals) in a nice, reusable block. In my opinion, the most impactful contributions in the lineage of modern attention-based LLMs are the introduction of dot-product attention (Bahdanau et al, 2015) and the first attention-based sequence-to-sequence model (Graves, 2013). Both of these are from academic labs.
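To make the single-head -> multi-head point concrete, here's a minimal NumPy sketch (not the paper's code; the weight matrices are assumed given):

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
        """Scaled dot-product attention split across n_heads: the same
        'grouping' trick the vision community applied to convolutions."""
        T, d_model = X.shape
        d_head = d_model // n_heads
        Q, K, V = X @ Wq, X @ Wk, X @ Wv                       # (T, d_model) each
        split = lambda M: M.reshape(T, n_heads, d_head).transpose(1, 0, 2)
        Qh, Kh, Vh = split(Q), split(K), split(V)              # (heads, T, d_head)
        scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_head)  # (heads, T, T)
        out = softmax(scores) @ Vh                             # (heads, T, d_head)
        out = out.transpose(1, 0, 2).reshape(T, d_model)       # concatenate heads
        return out @ Wo

With n_heads=1 this reduces to plain (scaled) dot-product attention, which is exactly the delta being discussed.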

As a side note, a similar phenomenon occurred with the Adam optimizer, where the ratio of public/scientific attribution to novelty is disproportionately large (the Adam optimizer is a very minor modification of the RMSProp + momentum optimization algorithm presented in the same Graves, 2013 paper mentioned above).
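To illustrate how small that delta is, a side-by-side sketch of the update rules (using one common formulation of RMSProp-with-momentum, not necessarily Graves's exact variant):

    import numpy as np

    def rmsprop_momentum_step(w, g, m, v, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        # RMSProp: scale the step by a running RMS of the gradient,
        # then apply classical momentum to the scaled step.
        v = beta2 * v + (1 - beta2) * g**2
        m = beta1 * m + lr * g / (np.sqrt(v) + eps)
        return w - m, m, v

    def adam_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        # Adam: the same two running averages, but momentum is taken over
        # the raw gradient and both averages get bias-corrected for small t.
        # That's essentially the whole difference.
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)
        v_hat = v / (1 - beta2**t)
        return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v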

I think the most novel part of it, and where a lot of the power comes from, is the key-based attention, which then operationally gives rise to the emergence of induction heads (whereby a pair of adjacent layers coordinates to provide a powerful context lookup-and-copy mechanism).

The reusable/stackable block is of course a key part of the design since the key insight was that language is as much hierarchical as sequential, and can therefore be processed in parallel (not in sequence) with a hierarchical stack of layers that each use the key-based lookup mechanism to access other tokens whether based on position or not.

In any case, if you look at the seq2seq architectures that preceded it, it's hard to claim that the Transformer is really based on or evolved from any of them (especially the prevailing recurrent approaches), notwithstanding that it obviously leveraged the concept of attention.

I find the developmental history of the Transformer interesting, and wish more had been documented about it. It seems from interviews with Uszkoreit that the idea of parallel language processing based on a hierarchical design using self-attention was his, but that he was personally unable to realize the idea in a way that beat other contemporary approaches. Noam Shazeer was the one who then took the idea and realized it in the form that would eventually become the Transformer, but it seems there was a degree of throwing the kitchen sink at it, followed by an ablation process to minimize the design. What would be interesting to know is an honest assessment of how much of the final design was inspiration and how much was experimentation. It's hard to imagine that Shazeer anticipated the emergence of induction heads when this model was trained at sufficient scale, so the architecture does seem to be at least partly an accidental discovery, and more than the next-generation seq2seq model it seems to have been conceived as.

Key-based attention is not attributable to the Transformer paper. First paper I can find where keys, queries, and values are distinct matrices is https://arxiv.org/abs/1703.03906, described at the end of section 2. The authors of the Transformer paper are very clear in how they describe their contribution to the attention formulation, writing "Dot-product attention is identical to our algorithm, except for the scaling factor". I think it's fair to state that multi-head is the paper's only substantial contribution to the design of attention mechanisms.

I think you're overestimating the degree to which this type of research is motivated by big-picture, top-down thinking. In reality, it's a bunch of empirically-driven, in-the-weeds experiments that guide a very local search in an intractably large search space. I can just about guarantee the process went something like this:

- The authors begin with an architecture similar to the current SOTA, which was a mix of recurrent layers and attention

- The authors realize that they can replace some of the recurrent layers with attention layers, and performance is equal or better. It's also way faster, so they try to replace as many recurrent layers as possible.

- They realize that if they remove all the recurrent layers, the model sucks. They're smart people and they quickly realize this is because the attention-only model is invariant to sequence order. They add positional encodings to compensate for this (see the sketch after this list).

- They keep iterating on the architecture design, incorporating best-practices from the computer vision community such as normalization and residual connections, resulting in the now-famous Transformer block.

At no point is any stroke of genius required to get from the prior SOTA to the Transformer. It's the type of discovery that follows so naturally from an empirically-driven approach to research that it feels all but inevitable.
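For reference, the order-invariance fix from that third step is tiny; a NumPy sketch of the sinusoidal encoding the paper settled on:

    import numpy as np

    def sinusoidal_positional_encoding(T, d_model):
        """Deterministic sin/cos pattern: position i gets a unique vector,
        which is simply added to the token embeddings so an otherwise
        order-invariant attention stack can see sequence order."""
        pos = np.arange(T)[:, None]                 # (T, 1)
        dim = np.arange(0, d_model, 2)[None, :]     # even dims: 2i
        angles = pos / (10000 ** (dim / d_model))
        pe = np.zeros((T, d_model))
        pe[:, 0::2] = np.sin(angles)
        pe[:, 1::2] = np.cos(angles)
        return pe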

This makes no sense. A thing's roots don't change: either it started there or it didn't.
It didn't.

At least, the Transformer didn't. The abstract idea of a language model goes way back, though, within the field of linguistics, and people were building simplistic "N-gram" models before ever using neural nets, then other types of neural net such as LSTMs and CNNs (!) before Google invented the Transformer (primarily with the goal of fully utilizing the parallelism available from GPUs - which couldn't be done with a recurrent model like an LSTM).
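For anyone unfamiliar with just how simplistic those were: a count-based bigram model fits in a few lines (an illustrative sketch, not any particular historical system):

    from collections import Counter, defaultdict

    def train_bigram_lm(tokens):
        """Count-based bigram model: P(w2 | w1) = count(w1 w2) / count(w1)."""
        counts = defaultdict(Counter)
        for w1, w2 in zip(tokens, tokens[1:]):
            counts[w1][w2] += 1
        return {w1: {w2: c / sum(ctr.values()) for w2, c in ctr.items()}
                for w1, ctr in counts.items()}

    lm = train_bigram_lm("the cat sat on the mat".split())
    print(lm["the"])  # {'cat': 0.5, 'mat': 0.5}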

On the plus side for Adobe, they have a fairly stable & predictable SaaS revenue stream, so as long as their R&D and product-hosting costs don't exceed their subscription base, they're OK. This is wildly different from -- for example -- the hyperscalers, who have to build and invest far in advance of a market [for new services especially].
This feels extremely ungenerous to the Big Tech companies.

What's wrong with trying out 100 different AI features across your product suite, and then seeing which ones "stick"? You figure out the 10 that users find really valuable, another 10 that will be super-valuable with improvement, and eventually drop the other 80.

Especially when if Microsoft tries something and Google doesn't, that suddenly gives Microsoft a huge lead in a particular product, and Google is left behind because they didn't experiment enough. Because you're right -- Google investors wouldn't like that, and would be totally justified.

The fact is, it's often hard to tell which features users will find valuable in advance. And when being 6 or 12 months late to the party can be the difference between your product maintaining its competitive lead vs. going the way of WordPerfect or Lotus 1-2-3 -- then the smart, rational, strategic thing to do is to build as many features as possible around the technology, and then see what works.

I would suggest that if Adobe is being slower with rolling out AI features, it might be more because of their extreme monopoly position in a lot of their products, thanks to the stickiness of their file formats. That they simply don't need to compete as much, which is bad.

> What's wrong with trying out 100 different AI features across your product suite, and then seeing which ones "stick"?

For users? Almost everything is wrong with that.

There are no users looking for wild churn in their user interface, no users crossing their fingers that the feature that stuck for them gets pruned because it didn't hit adoption targets overall, no users hoping for popups and nags interrupting their workflow to promote some new garbage that was rushed out and barely considered.

Users want to know what their tool does, learn how to use it, and get back to their own business. They can welcome compelling new features, of course, but they generally want them to be introduced in a coherent way, they want to be able to rely on the feature being there for as long as their own use of those features persists, and they want to be able to step into and explore these new features on their own pace and without disturbance to their practiced workflow.

There are multiple different types of users.

The users of https://notebooklm.google/ aren't the same as the users of Google Docs.

Think about the other side though -- if the tool you've learned and rely on goes out of business because they didn't innovate fast enough, it's a whole lot worse for you now that you have to learn an entirely new tool.

And I haven't seen any "wild churn" at all -- like I said in another comment, a few informative popups and a magic wand icon in a toolbar? It's not exactly high on the list of disruptions. I can still continue to use my software the exact same way I have been -- it's not replacing workflows.

But it's way worse if the product you rely on gets discontinued.

The presence or absence of some subtle new magic wand icon that shows up in the toolbar is neither making nor breaking anyone's business. And even if it comes to be a compelling feature in my competitor's product, I've got plenty of time to update my product with something comparable. At least if I've done a good job building something useful for my customers in the first place.

Generative ML technologies may dramatically change a lot of our products over time, but there's no great hole they're filling and there's basically no moat besides capital requirements that keeps competitors from catching up with each other as features prove themselves out. They just open a few new doors that people will gradually explore.

Anxiously spamming features simply betrays a lack of confidence in one's own product as it stands, directly frustrates professional users, and soaks up tons of capital that almost certainly has other places it could be going.

> The presence or absence of some subtle new magic wand icon that shows up in the toolbar is neither making nor breaking anyone's business.

Sounds like famous last words to me.

The corporate landscape is filled with the corpses of companies that thought they didn't need to rush to adapt to new technologies. That they'd have time to react if something really did take off in the end.

Just think of how Kodak bided its time to see if newfangled digital photography would actually take off and when... and then it was too late.

You're comparing being 3 months behind on a supplementary software feature that's tucked among dozens of icons on the toolbar with making a hard decision about pivoting your entire megalithic industrial, research, sales, and distribution infrastructure to a radically new technology.

The discussion you started is about spamming features to see what sticks, as set against making deliberate, selective product decisions as you confidently observe your market.

It's possible that a company that ideologically sets itself against delivering any generative AI features ever might miss where the industry is going over the next 10 or 20 years. But we were never talking about that, were we?

Digital photography started out as a supplementary toy as well. And we are starting to witness a gigantic computational infrastructure pivot, with GPUs and NPUs and whatnot. Google and Amazon are literally signing nuclear power plant agreements to power it. AI is a radically new technology.

Do you remember two years ago when ChatGPT came out, and people here on HN were confidently declaring it was the end of Google Search, unless Google proved they could respond immediately? And Google released Bard (now Gemini) less than six months later to demonstrate that Search wasn't going to go the way of Kodak, and it still took people a while to calm down after that?

And the AI revolution is moving a lot faster than the digital photography revolution. We're not talking about "the next 10 or 20 years". You seem to be severely underestimating the power of competition and technological progress, and the ability for it to put you out of business.

You're suggesting the correct approach is "deliberate, selective product decisions as you confidently observe your market." What happens when your deliberation is too slow, your selectivity turns out to be wrong, and your confidence is ill-founded? Well, the company that was willing to experiment with a lot more features is more likely to build the winning features and take over the market while you were busy deliberating.

I'm surprised to be having this conversation on HN, where the start-up ethos reigns supreme. The whole idea of the tech world is to try new things and fail fast, because it's better for everyone in the long run. That's what the big corporations are doing with AI features. Isn't that the kind of thing that tech entrepreneurs are supposed to celebrate?

No, it's way worse if the product I rely on does as you suggest and keeps adding new features just to see what will stick. I hate that sort of behavior with a passion and it is the sort of thing which will make me never do business with a company again.
Back in the olden days (10 years ago), when you bought software, you could actually keep using it indefinitely. It doesn't matter if the company went bankrupt; if you like using Logic Pro 7 and it works with your equipment, you can keep using it. I know people who only recently moved off of OS 9 - they were using creative software for over 25 years; it did what they needed it to do, so they kept using it. I still know at least one person who uses Office for Mac 98 to this day on an iMac G3; it's their only computer, but it still works and they have backups of their important documents, so why pay money to switch to an unfamiliar computer, OS, software?

This modern idea of “you’ll own nothing and you’ll like it” ruins that of course, but if someone bought CS6 they can still be using it today. If adobe went bankrupt 5 years ago they could still be legally using it today (they’d have to bypass the license checks if the servers go down, which might be illegal in the US, though). If adobe goes bankrupt tomorrow and I have a CC subscription, I can’t legally keep using photoshop after the subscription runs out.

LLMs aren't profitable. There's no significant threat of a product getting discontinued because it didn't jump high enough over the AI shark.
> What's wrong with trying out 100 different AI features across your product suite, and then seeing which ones "stick"?

Even the biggest tech companies have limited engineering bandwidth to allocate to projects. What's wrong with those 100 experiments is the opportunity cost: they suck all the oxygen out of the room and could be shifting the company's focus away from fixing real user problems. There are many other problems that don't require AI to solve, and companies are starving these problems in favor of AI experiments.

It would be better to sort each potential project by ROI, or customer need, or profit, or some other meaningful metric, and do the highest ranked ones. Instead, we're sorting first by "does it use AI" and focusing on those.

What you describe, I don't see happening.

If you look at all the recent Google Docs features rolled out, only a small minority are AI-related:

https://workspaceupdates.googleblog.com/search/label/Google%...

There are a few relating to Gemini in additional languages and supporting additional document types, but the vast majority is non-AI.

Seems like the companies are presumably sorting on ROI just fine. But, of course, AI is expected to have a large return, so it's in there too.

So it's ok for all of us to become lab rats for these companies?
Every consumer is a "lab rat" for every company at all times, if that's how you want to think about it.

Each of our decisions to buy or not buy a product, to use or not use a feature, influences the future design of our products.

And thank goodness, because that's the process by which products improve. It's capitalism at work.

Mature technologies don't need as much experimentation because they're mature. But whenever you get new technologies, yes all these new applications battle each other out in the market in a kind of survival-of-the-fittest. If you want to call consumers "lab rats", I guess that's your choice.

But the point is -- yes, it's not only OK -- it's something to be celebrated!

You might be ok with being a lab rat, but most people are not. People buy products to satisfy their needs, not to participate in somebody else's experiment. Given the option (in the absence of monopoly) they will search for another company that treats them correctly.
> People buy products to satisfy their needs

People buy products for the novelty all the time. Sometimes they are disappointed with what they got, sometimes they discover new things. Take this very feature being discussed. How many people need it if Adobe released it today? How many would like what they see and decide to buy or renew?

> Given the option (in the absence of monopoly) they will search for another company that treats them correctly.

Are we still talking about product features?

Force-feeding 100s of different AI features (90% of which are useless at best) to users is what's wrong with the approach.
Why?

It's not "force-feeding". You usually get a little popup highlighting the new feature that you close and never see again.

It's not that hard to ignore a new "magic wand" button in the toolbar or something.

I personally hardly use any of the features, but neither do I feel "force-fed" in the slightest. Aside from the introductory popups (which are interesting), they don't get in my way at all.

It's popups. It's emails. It's constant nudges towards changes in workflows. Most importantly, it's accelerated the slurping of data and aggressive terms of service by an order of magnitude. Sure, in theory everyone wanted your data before, but now everyone wants all your data all the time. They want to upload it to their servers. They want to train products on it. And they want to ban you from using their product if you don't agree.
I don't think it's a Big Tech problem. Big Tech can come up with moronic ideas and be fine because they have unlimited cash. It's the smaller companies that need to count pennies who decide to flush the money down the AI Boondoggle Toilet.

"But Google does it. If we do it, we will be like Google".

"But Google does it. If we do it, we will be like Google".

Were you in my meeting about 40 minutes ago? Because that's almost exactly what was said.

If the big tech companies wanted to be really evil, they could invent a nonsense tech that doesn't work, then watch as all the small upstart competitors bankrupt themselves to replicate it.

Isn't this what AI is all about? Don't kid yourself, most companies, even some big ones, will bankrupt themselves chasing AI and the few remaining will get the spoils.
It seems that’s just the way things go with disruptive technologies. It’s a gold rush and you don’t want to be left behind.
Wait, is that why we all have microservices now?
That's exactly right. This appeared before on HN but that's what I wrote about a couple of years back: https://renegadeotter.com/2023/09/10/death-by-a-thousand-mic...
Sounds a bit like trying to roll and support your own k8s platform
You mean like React, right? Right?
That approach makes sense for very specific domain-tethered technologies. But for AI I think letting it loose and allowing people to find their own use cases is an appropriate way to go. I've found valuable use cases with ChatGPT in the first months of its public release that I honestly think we still wouldn't have if it went through a traditional product cycle.
It is the 'make something for the user/client' vs. 'make something to sell' mindset.

The latter is what overwhelmingly more companies (not only BigTech, not at all!) have adopted nowadays.

And Boeing. ;)

If the lore is to be believed, Southwest (an airline that has built its entire business on the 737) saw the A320neo and basically told Boeing "give us a new 737 or we go to Airbus." They did what the client wanted, to their detriment.

"If I asked people what they wanted they would've said faster horses," or whatever Henry Ford is falsely accused of saying.

Counterpoint: pandering to the market gets you better stock-price appreciation :)

Also I am sure Adobe is doing both. They released an OpenAI competitor recently

Been doing both. Just look at their asset store as of late. Complete mess if you work professionally.

At the same time, apparently their generative fill is top notch. It's just a shame the industry decided to mix ML tools together with generative art, so that it's hard to tell which is which at a casual glance.

Focusing on solving customer problems, not buzz words, typically is the right path.
Yeah I much prefer this approach to the current standard of just putting a chat bot somewhere on the page and calling it a day.
Yeah, but sometimes they just f it up. The PS crop tool was A-OK, then they introduced the move-the-background-instead-of-the-crop-rectangle way of cropping, which is to this day a terrible experience.

Also, Lightroom is one of the worst camera tools out there. It's only known because ADOBE...

Precisely. There are many such use cases too! It's disappointing to see the industry go all in on chatbot wrappers.
More like "ship some half-baked bullshit wrapper for ChatGPT or Llama and call it revolutionary."