Adobe's new image rotation tool is one of the most impressive AI concepts we've seen

https://www.creativebloq.com/design/adobes-new-image-rotation-tool-is-one-of-the-most-impressive-ai-concepts-weve-seen
I'm making some big assumptions about Adobe's product ideation process, but: This seems like the "right" way to approach developing AI products: Find a user need that can't easily be solved with traditional methods and algorithms, decide that AI is appropriate for that thing, and then build an AI system to solve it.

Rather than what many BigTech companies are currently doing: "Wall Street says we need to 'Use AI Somehow'. Let's invest in AI and Find Things To Do with AI. Later, we'll worry about somehow matching these things with user needs."

I would interpret it that they're getting the same push from Wall Street and the same investor-hype-driven product leadership as every other tech firm, but this time they have the good fortune to specialize in one of the few verticals (image editing) where generative AI currently has superhuman performance.

This is a testable claim: where was Adobe in previous hype cycles? Googling "Adobe Blockchain"... looks like they were all about blockchain in 2018 [0], then NFTs and "more sustainable blockchains" in 2022 [1].

[0] https://blog.adobe.com/en/publish/2018/09/27/blockchain-and-...

[1] https://www.ledgerinsights.com/adobe-moves-to-sustainable-bl...

The article says clearly there's no guarantee this feature will be released.

Which I'm reading as "Demo-ready, but far from production-ready."

Somewhat relevant: my experience with Photoshop's Generative Fill has been underwhelming. Sometimes it's wrong, often it's comically wrong. I haven't had many easy wins with it.

IMO this is a company that doodles with code for its own entertainment, not a company that innovates robust and highly useful production-ready features for the benefit of users.

So we'll see if Mr Spinny Dragon makes it to production, and is as useful as billed in the demo.

I think you're being a bit too generous with Adobe here :-). I shared this before, but it's worth resharing [1]. It covers the experience of a professional artist using Adobe tools.

The gist is that once a company has a captive audience with no alternatives, investors come first. Flashy (no pun intended :-p), cool features to impress investors become more important than the everyday user experience—and this feature does look super cool!

--

1: https://www.youtube.com/watch?v=lthVYUB8JLs

I don’t think those ideas are mutually exclusive. I heavily dislike Adobe and think they’re a rotten company with predatory practices. I also think “AI art” can be harmful to artists and more often than not produces uninteresting flawed garbage at an unacceptable energy cost.

Still, when I first heard of Adobe Firefly, my initial reaction was “smart business move, by exclusively using images they have the rights to”. Now seeing Turntable my reaction is “interesting tool which could be truly useful to many illustrators”.

Adobe can be a bad and opportunistic company in general but still do genuinely interesting things. As much as they deserve the criticism, the way in which they’re using AI does seem to be thought out and meant to address real user needs while minimising harm to artists.¹ I see Apple’s approach with Apple Intelligence a bit in the same vein, starting with the user experience and working backwards to the technology, as it should be.²

Worth noting that I fortunately have distanced myself from Adobe for many years now, so my view may be outdated.

¹ Which I don’t believe for a second is out of the goodness of their hearts, it just makes business sense.

² However, in that case the results seem to be subpar and I don’t think I’d use it even if I could.

Whether they avail of it or not, Adobe has the ability to access feedback and iterate on it for a lot of core design markets. I have a similar view to yours, but there is a segment of the AI community who feel that they are disrupting Adobe as much as other companies. In most cases, incumbents like Adobe have access to the domain experience that will enable AI, and it won't work the other way around.

All of this is orthogonal to Adobe's business practices. You should expect them to operate the way they do given their market share and the limited number of alternatives. I personally have almost completely moved to Affinity products, but I expect Adobe to be better placed to execute on products, with Affinity playing catch-up to some extent.

You can have both!

Cool features that excite users (and that they ultimately end up using), and that get investors excited.

(e.g. Adobe mentioned in the day 1 keynote that Generative Fill, released last year and powered by Adobe Firefly, is now one of the top 5 most used features in Photoshop.)

The features we make, and how we use gen AI, are based on a lot of discussions and back and forth with the community (both public and private).

I guess Adobe could make features that look cool but that no one wants to use, but that doesn't really make any sense.

(I work for Adobe)

> is now one of the top 5 most used features in Photoshop

I mean, is there any Photoshop feature that’s come to dominate people’s workflows so quickly?

People (e.g. photographers) who use Photoshop “in anger” for professional use-cases, and who already know how to fix a flaw in an image region without generative fill, aren’t necessarily going to adopt it right out of the gate. They’re going to tinker with it a bit, but time-box that tinkering, otherwise sticking with what they can guarantee from experience will get a “satisfactory” result, even if it takes longer and might not have as high a ceiling for how perfectly the image is altered.

And that's just people who repair flaws in images. Which I'm guessing aren't even the majority of Photoshop users. Is the clone brush even in the top 5 Photoshop features by usage?

Moreover, when one looks at the chronology with which features were rolled out, all the computationally hard things which would save sufficient time/effort that folks would be willing to pay for them (and which competitors were unlikely to be able to implement) were held back until Adobe rolled out its subscription pricing model --- then and only then did the _really_ good stuff start trickling out, at a pace to ensure that companies kept up their monthly payments.
My company has decided to update its HR page to use AI, for reasons unknown.

So instead of the old workflow:

"visit HR page" → "click link that for whatever reason doesn't give you a permanent link you can bookmark for later"

it's now:

"visit HR page" → "do AI search for the same link which is suggested as the first option" → "wait 10-60 seconds for it to finally return something" → "click link that for whatever reason doesn't give you a permanent link you can bookmark for later"

Nvidia needs to continue selling chips like crazy, all companies in the US need to do their fair share to contribute!...
Bubbles require constant maintenance
Somebody's putting "AI expert" on their resume
Mine has as well, and it's actually pretty useful. It's really just a search engine, but it has indexed Confluence and all our other internal sites, and I've found it useful for almost everything.
"click link that for whatever reason doesn't give you a permanent link you can bookmark for later"

Sounds like engagement hacking?

I think this is a weird SAML pattern I've seen before, where e.g. Okta generates a URL like https://somevendor.com/SAML/somesignedbase64payload to do SSO. It's sort of the inverse of the more common approach, where the page you're logging into sends you to the auth provider after seeing your email domain.
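
A toy sketch of why such a link can't be bookmarked (purely illustrative, not real SAML; the endpoint and signing key here are made up): the URL embeds a signed, timestamped payload, so each visit mints a fresh token and a saved one goes stale.

    # Toy sketch, NOT real SAML: the vendor URL embeds a signed,
    # timestamped payload, so every visit mints a fresh link and a
    # bookmarked one eventually fails validation.
    import base64, hashlib, hmac, json, time

    SECRET = b"idp-signing-key"  # hypothetical shared secret

    def make_sso_url(user: str) -> str:
        payload = json.dumps({"sub": user, "iat": int(time.time())}).encode()
        sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
        token = base64.urlsafe_b64encode(payload + b"." + sig).decode()
        return f"https://somevendor.com/SAML/{token}"

    print(make_sso_url("alice"))  # a different URL every time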
My assumption would be clumsy session tracking.
This is certainly a great, immediately useful tool, but also a relatively small ROI, in terms of both the return and the investment. Big tech is aiming for a much bigger return on a clearly bigger investment. That's going to potentially look like a lot of useless stuff in the meantime. Also, if it weren't for big tech and big investments, there wouldn't even be these tools and models at this level of sophistication for others to be using for applications like this one.
While the press lumps it all together as "AI", you have to differentiate LLMs (driven by big tech and big money) from unrelated image/video types of generative models and approaches like diffusion, NeRF, Gaussian splatting, etc, which have their roots in academia.
LLMs don't have their roots in academia?
Not anymore.
On the plus side, for Adobe, is that they have a fairly stable & predictable SaaS revenue stream so as long as their R&D and product hosting costs don't exceed their subscription base, they're ok. This is wildly different from -- for example -- the hyperscalers, who have to build and invest far in advance of a market [for new services especially].
This feels extremely ungenerous to the Big Tech companies.

What's wrong with trying out 100 different AI features across your product suite, and then seeing which ones "stick"? You figure out the 10 that users find really valuable, another 10 that will be super-valuable with improvement, and eventually drop the other 80.

Especially when, if Microsoft tries something and Google doesn't, that suddenly gives Microsoft a huge lead in a particular product, and Google is left behind because they didn't experiment enough. Because you're right -- Google investors wouldn't like that, and would be totally justified.

The fact is, it's often hard to tell which features users will find valuable in advance. And when being 6 or 12 months late to the party can be the difference between your product maintaining its competitive lead vs. going the way of WordPerfect or Lotus 123 -- then the smart, rational, strategic thing to do is to build as many features as possible around the technology, and then see what works.

I would suggest that if Adobe is being slower with rolling out AI features, it might be more because of their extreme monopoly position in a lot of their products, thanks to the stickiness of their file formats. That they simply don't need to compete as much, which is bad.

> What's wrong with trying out 100 different AI features across your product suite, and then seeing which ones "stick"?

For users? Almost everything is wrong with that.

There are no users looking for wild churn in their user interface, no users crossing their fingers that the feature that stuck for them gets pruned because it didn't hit adoption targets overall, no users hoping for popups and nags interrupting their workflow to promote some new garbage that was rushed out and barely considered.

Users want to know what their tool does, learn how to use it, and get back to their own business. They can welcome compelling new features, of course, but they generally want them to be introduced in a coherent way, they want to be able to rely on the feature being there for as long as their own use of those features persists, and they want to be able to step into and explore these new features on their own pace and without disturbance to their practiced workflow.

Think about the other side though -- if the tool you've learned and rely on goes out of business because they didn't innovate fast enough, it's a whole lot worse for you now that you have to learn an entirely new tool.

And I haven't seen any "wild churn" at all -- like I said in another comment, a few informative popups and a magic wand icon in a toolbar? It's not exactly high on the list of disruptions. I can still continue to use my software the exact same way I have been -- it's not replacing workflows.

But it's way worse if the product you rely on gets discontinued.

> What's wrong with trying out 100 different AI features across your product suite, and then seeing which ones "stick"?

Even the biggest tech companies have limited engineering bandwidth to allocate to projects. What's wrong with those 100 experiments is the opportunity cost: they suck all the oxygen out of the room and could be shifting the company's focus away from fixing real user problems. There are many other problems that don't require AI to solve, and companies are starving these problems in favor of AI experiments.

It would be better to sort each potential project by ROI, or customer need, or profit, or some other meaningful metric, and do the highest ranked ones. Instead, we're sorting first by "does it use AI" and focusing on those.

What you describe, I don't see happening.

If you look at all the recent Google Docs features rolled out, only a small minority are AI-related:

https://workspaceupdates.googleblog.com/search/label/Google%...

There are a few relating to Gemini in additional languages and supporting additional document types, but the vast majority is non-AI.

Seems like the companies are presumably sorting on ROI just fine. But, of course, AI is expected to have a large return, so it's in there too.

So it's ok for all of us to become lab rats for these companies?
Every consumer is a "lab rat" for every company at all times, if that's how you want to think about it.

Each of our decisions to buy or not buy a product, to use or not use a feature, influences the future design of our products.

And thank goodness, because that's the process by which products improve. It's capitalism at work.

Mature technologies don't need as much experimentation because they're mature. But whenever you get new technologies, yes all these new applications battle each other out in the market in a kind of survival-of-the-fittest. If you want to call consumers "lab rats", I guess that's your choice.

But the point is -- yes, it's not only OK -- it's something to be celebrated!

Force-feeding 100s of different AI features (90% of which are useless at best) to users is what's wrong with the approach.
Why?

It's not "force-feeding". You usually get a little popup highlighting the new feature that you close and never see again.

It's not that hard to ignore a new "magic wand" button in the toolbar or something.

I personally hardly use any of the features, but neither do I feel "force-fed" in the slightest. Aside from the introductory popups (which are interesting), they don't get in my way at all.

I don't think it's a Big Tech problem. Big Tech can come up with moronic ideas and be fine because they have unlimited cash. It's the smaller companies that need to count pennies who decide to flush the money down the AI Boondoggle Toilet.

"But Google does it. If we do it, we will be like Google".

"But Google does it. If we do it, we will be like Google".

Were you in my meeting about 40 minutes ago? Because that's almost exactly what was said.

If the big tech companies wanted to be really evil, they could invent a nonsense tech that doesn't work, then watch as all the small upstart competitors bankrupt themselves to replicate it.

Isn't this what AI is all about? Don't kid yourself, most companies, even some big ones, will bankrupt themselves chasing AI and the few remaining will get the spoils.
Wait, is that why we all have microservices now?
Sounds a bit like trying to roll and support your own k8s platform
You mean like React, right? Right?
That approach makes sense for very specific domain-tethered technologies. But for AI I think letting it loose and allowing people to find their own use cases is an appropriate way to go. I've found valuable use cases with ChatGPT in the first months of its public release that I honestly think we still wouldn't have if it went through a traditional product cycle.
It is the 'make something for the user/client' vs. 'make something to sell' mindset.

The latter is what the overwhelming majority of companies (not only Big Tech, not at all!) have adopted nowadays.

And Boeing. ;)

If the lore is to be believed, Southwest (an airline that has built its entire business on the 737) saw the A320neo and basically told Boeing "give us a new 737 or we go to Airbus." They did what the client wanted, to their detriment.

"If I asked people what they wanted they would've said faster horses," or whatever Henry Ford is falsely accused of saying.

Counterpoint: pandering to the market has better stock price appreciation :)

Also I am sure Adobe is doing both. They released an OpenAI competitor recently

Focusing on solving customer problems, not buzz words, typically is the right path.
Yeah I much prefer this approach to the current standard of just putting a chat bot somewhere on the page and calling it a day.
Yeah, but sometimes they just f it up. Like the PS crop tool: it was fine, then they introduced the move-the-background-instead-of-the-crop-rectangle way of cropping, which to this day is a terrible experience.

Also, Lightroom is one of the worst camera tools out there. It's only known because Adobe...

Precisely. There are many such use cases too! It's disappointing to see the industry go all in on chatbot wrappers.
More like "ship some half baked bullshit wrapper for ChatGPT or llama and call it revolutionary."
The source (Adobe MAX) demos a full range of incredible scenarios...

https://www.youtube.com/watch?v=gfct0aH2COw

The video is much better than the linked page. The video shows the dynamic multi-angle character rotation and other object rotations. https://www.youtube.com/watch?t=63
That event has the enthusiasm of old Apple demos
Maybe you missed the video in the linked article? It's the same demo.
Cut out the middle-man.
Finally more AI tools for vectors!

With bitmaps you get a blob of pixels, but vectors can be edited and refined much more easily.
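
A toy contrast of the two representations (hypothetical shapes, just to illustrate the point): a vector shape is a few editable parameters, while its rasterized version is only a grid of pixels.

    # A vector circle is a handful of numbers you can keep editing...
    circle = {"cx": 50, "cy": 50, "r": 40, "fill": "crimson"}
    circle["r"] = 60  # lossless edit: just change the parameter

    # ...while the rasterized version is an opaque pixel grid you can
    # only repaint, not reshape.
    W = H = 100
    bitmap = [["crimson" if (x - 50) ** 2 + (y - 50) ** 2 <= 40 ** 2 else None
               for x in range(W)] for y in range(H)]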

Not bad, but what's up with the audience? Is there an Adobe cult or something?
Regardless of whether these are Adobe employees or not, I'd argue that a feature like this warrants such a response.

It makes me miss Apple’s old keynote style that they’ve abandoned in favor of the bland, sanitized, over-polished and pre-recorded video keynotes.

I'm honestly over so much of the corporate cynicism and Blind-ification that's turned what was once a necessary precautionary stance into this demonization or ridicule of people who happen to love their work and where they do it.

The audience is creative community members who use Adobe tools and are attending MAX (around 11,000 for this event).
This is Adobe Max, which is a huge event held by Adobe for creators.

This session is "Sneaks" which is held every year, and has a fun, casual atmosphere. i.e. it has a theme, has a celebrity co-host, lots of jokes, food and drink served, etc...

It's basically a bunch of people who are creative, having fun nerding out on the tech...

It's a lot of fun.

(I work for Adobe)

Of course there is. Just like there's an Apple Cult, Android Cult, Facebook Cult, Sportsball team cult, blah blah blah. Any group that is large enough to attract that many users/followers/fans will naturally have a subset that is more gungho than the rest.
Also the "fun" stage setting. I once quit a big-ish tech company when they started pulling stuff like that.
I'm not sure about this particular event, but companies often have employees who worked on the products in the crowd during launches to provide even more crowd noise.
There are millions of Adobe product users out there. Almost all the design tool developers have keynote events.

As an aside, I hate that people like yourself describe fans of anything they don’t personally understand as cults. It’s an antagonistic framing of a question designed to remove any good faith discussion.

I think cult is really fitting for a massive group of customers who are locked in by a monopolist. Maybe even worse than a cult, because there's just the one.
> Not bad, but what's up with the audience? Is there an Adobe cult or something?

There are conferences for Adobe customers to teach them how to use Adobe tools. I think there was recently an Adobe MAX conference in Los Angeles. It could have been filmed there.

Come on... it is literally a dream come true for an artist whose nightmare is indecisive, confused clients.

You have to read some of the YouTube comments to understand that some of those hoots and claps could be for real.

It's an immense feature. Illustrators love it. Of course they're enthusiastic about it. Man, there's nothing like this. You draw 2D vector art and then rotate it in 3D space. What the heck, that's freaking crazy. I'd be hooting and hollering. How can you not lose your mind over this? It would accelerate so many processes.

My wife's an artist and says they had a shitty version of this but this is crazy.

I find that Adobe is really pulling away from open source software with all this AI stuff. A few years ago it could be argued that GIMP, Inkscape, and Darktable could do almost everything that Photoshop, Illustrator, and Lightroom could, albeit with a jankier user interface.

But now none of the open source software can compete with AI generative fill, AI denoising, and now AI rotation.

With all due respect, there's never been a time when that could have legitimately been argued, unless someone was doing relatively basic things with those apps or was a hobbyist.

There’s always been a significant gap in capabilities once you looked past the surface.

I find this sentiment is common among FOSS advocates who don’t actually professionally use those tools.

I am definitely an advocate for free tools closing that gap, but I both design content professionally and contribute to OSS projects to close that gap. So I feel quite confident in saying that gap has always been large when compared to the Adobe suite.

In some ways, having followed the open source image generation scene for a while, it feels a little bit like the opposite?

Most of the AI image generation stuff I've seen from Adobe feels late to the party in terms of what you can do with open source tools. Where they do compete, however, is with tight integration, and I guess that's what matters most to users in the end.

There are plugins for GIMP that let you do image generation, inpainting, and other things.

As far as what the post shows, it looks very much like current models that generate novel viewpoints of an object, but for illustrations. It might be doable to fine tune this for illustrations and simply vectorise the new viewpoint again. Though this will destroy any structure previously held in the object.

All I'm saying is that we have the tech to do even more than what Adobe is doing, we just haven't put it together nicely yet.

I think your last paragraph sums it up pretty nicely: users need a good UX to get to these tools.

So I would love if GIMP started shipping these awesome plugins by default to pick up the pace!

IMHO Krita has really become the cross-platform open source darling among graphics editors. There are some things that are unintuitive, but it's leagues better than GIMP.
The more time I spend as a software developer, the more strongly I believe that UX is 80% of what makes a tool good, and that a lot of programmers really just don't get that.
GIMP does not fully support non-destructive editing yet.

That, by itself, would be a complete deal breaker for professional work.

There's plenty more deal breakers remaining.

They'll probably be better able to compete once Adobe ups prices to reflect the actual cost of all that processing.
Photoshop is £30 a month. NASDAQ.com reports their net profit margin to be 40%, and elsewhere they're reported to gross $20B in revenue.

I think they can afford the ML based content generation costs without increasing prices.

They might do it anyway though. I have the "all apps" subscription but it's not actually everything they make any more, all their "Substance 3D" tools are another $50/mo. I can easily see this feature getting most of its functionality locked behind that extra subscription the way Illustrator's new 3d tools just give you a tiny handful of materials without that.
I was just thinking similarly. I don't need any of these AI features and I'm certainly not about to start giving Adobe money, but I'd be lying if I said I wasn't jealous.
Not yet, but I imagine soon they will. Closed source is moving to video, and open source is catching up to static images at an incredible pace. I won't be surprised if not only does GIMP integrate something like a couple of general Stable Diffusion models, but pirated copies of Photoshop find a way to hook up a local generative model instead of the online stuff.
"But now none of the open source software can compete with AI generative fill, AI denoising, and now AI rotation."

This is a common pattern across many fields. The truly top-end companies are always running ahead of open source.

But that doesn't mean it's a permanent situation. It just means you're looking at it from a point in time where the commercial products got there and open source hasn't yet. Open source will get there, and then Adobe will be ahead on something else.

I've played a bit with "comfyui" over the past few days, a bizarre name for an AI image generation power tool. (It does other things too, but I have no experience there to know how good it is at those.) It drips with power. The open source world is not generally behind on raw capability. As is often the case, open source's deficiency for generative fill, for instance, is that A: it offers too much control, too many knobs (e.g., "which of several dozen models would you like to start with?"), and while that's awesome if you know what you're doing, it is not yet at the "circle this and click 'remove'" stage, and B: the motivation and firepower to integrate this all into a slick package is not there. I can definitely do an AI generative fill with open source software, but I'll be exporting an image into comfyui, either building my own generative fill program or grabbing some rando's program online who may or may not be using compatible models or may require me to install additional bespoke functionality into comfyui, doing my work, and re-exporting it. The job gets done, but it's much more complicated, and most people don't care about the extra capabilities this workflow yields, so for them it's just cost.

It's a very normal pattern in the open source world. Nothing about the current situation particularly gives me cause to worry specially about it.

To be concrete, here's a YouTube video that's toward the more advanced side of what you can do in the open source world, which is probably still ultimately simplistic compared to what some people do: https://www.youtube.com/watch?v=ijqXnW_9gzc That entire series is worth a look, and there's more it doesn't cover. You can get incredible control over these tools in the open source world, but it involves listening to some guy on YouTube trying to explain why you might sometimes want to use a thing called "dpmpp_2m_sde_gpu"... not exactly normie-friendly.

I'm not convinced. The flows are a little less convenient right now, but that's basically it.

Ex - I can absolutely get exactly this same rotation feature using open toolchains, they just haven't been nicely consolidated into a pretty package yet.

So to recreate the same thing adobe is doing here I currently have to:

1. Use the 3D-Pack in ComfyUI to get Stack Orbit camera poses for my character (see: https://github.com/MrForExample/ComfyUI-3D-Pack and scroll down to Stack Orbit in the readme)

2. Import those images back into the open source tool manually.

Is it as convenient? Nope - it requires a lot more setup and knowledge.

Is it hard to imagine this getting implemented in open source? Also nope. It's going to happen, it just won't be quite as quick.

They can, but the user experience is abysmal, useless, and nerve-racking.
Can't speak for PS vs GIMP, but I used to use Illustrator a fair bit, and Inkscape was nowhere near it in terms of both features and usability. That was 15 years ago, though, so it may have caught up.
You are correct even today. Inkscape is great but it’s a fraction of the utility that Illustrator offers.

The only people who would actually equate them are people not using these tools professionally every day.

Even paid apps like Affinity Designer offer a fraction of the functionality of Illustrator.

Again, a great product but people are just dead wrong if they compare them as an absolute.

> A few years ago it could be argued that GIMP, Inkscape, and Darktable

To a Linux user, yes. To a professional, it was always a cruel joke, it was never close, even a few years ago. It's like saying Notepad++ is a functional IDE, or Kdenlive is a functional replacement for DaVinci Resolve.

I cannot stress this enough: Actual professionals do not think GIMP is a viable replacement, in any way, and never have.

GIMP did spawn one kinda good thing: GTK was made because Peter Mattis disliked Motif, so he wrote a replacement and called it the GIMP Toolkit.
{"deleted":true,"id":41871061,"parent":41870867,"time":1729181856,"type":"comment"}
I would also like to add (as a separate comment though, this will be controversial):

Some would say that GIMP, Inkscape, and Darktable aren't really competitive yet because they haven't had enough investment. If we invested in them enough, and managed them well, they could be like Blender.

GIMP has been in development since 1995. Photopea was initially released in 2013, has been solely developed by one person, and is a far-and-away better Photoshop competitor. The projects themselves are mismanaged. GIMP should (frankly) be abandoned and completely reset, in my opinion, as being a failed attempt at salvaging old code forever. Wisdom is knowing when to keep pushing - and when to give up.

OK, that's VERY impressive. Now give me the possibility of exporting it as an .stl to 3D print, and then we'll be talking. Just imagine drawing something in 2D and being able to print it as a fully 3D object; it gives me chills just thinking about it.
I don't think it's quite the same kind of tech, but this kinda reminds me of the "3D" pixel art sprite editor thing in Smack Studio

https://youtu.be/sM3ss-lY1zU?t=10

If you right-click on the video and select "show controls", you will not only be able to seek, but you'll also be able to unmute it.

I don't know why it was embedded with the controls hidden.

This is the true power of generative AI, enabling new functionality for the user with simple UX while doing all the heavy lifting in the background. Prompting as a UX should be abstracted away from the user.
This probably isn't backed by an LLM but instead some kind of geometric shape model.
How do you explain a horse's 2 legs becoming 4 legs when rotated, assuming they only drew 2 legs in the side view?
The second L in LLM stands for "language". None of what you're describing has to do with language modeling.

They could be using transformers, sure. But plenty of transformers-based models are not LLMs.

They are probably looking for LGMs (large generative models), which encompass vision and multi-modal models.
It looks cool and convenient for people like designers and other non-technical content creators. One natural follow-up: can we find many other similar operations that creative people use every day and tackle them under a unified framework?
As someone who currently works in GenAI and analytics but paid their way through college doing design (for print media) and still keeps around old copies of Illustrator and Fireworks (running under Wine) as well as using Affinity Suite, this is STUPEFYINGLY more impressive than any LLM.

Still not enough to make me pay for Adobe Creative Suite (I just dabble these days), but the target demographic will be all over it.

I spent so many hours trying to do rotations with a pirated copy of Flash as a kid, and I never really got the hang of it. It always bothered me how deceptively hard rotation was; when I would show my parents my work, they would do their best to act excited, but I could tell they weren't really impressed with the effort, because it doesn't seem that hard, at least to a lot of people.

This makes me irrationally happy.

Yeah, this is one of those things that seems trivial until you try to do it, and then it's impossible.
I've seen a lot of cool shit from Adobe, but it's mostly rehashed stuff that's been cleaned up from public workflows we've seen done in ComfyUI and other Flux/Stable Diffusion-based expansion workflows... like the IC-Light-style relighting they demoed...

But this... this is really fuckin cool

Incredible, but a shame you'll have to use Adobe to get it.
Yes. I absolutely despise Adobe, and I will not be using this.

They were double charging me for photoshop for two years. I caught them and it took 60 minutes on the phone to get them to do something about it.

They have an entire cancellation department. (!)

There are actually multiple open source ML models for 2D-to-3D, which is clearly what they are doing. The difference from most of them is that this works with vectors.

There might actually be a similar open source model already.

But I think to create it you would build it from a database of 3D assets that you could render from many angles, probably quite similar to the way 2D-to-3D works. I don't know, maybe the typical 2D-to-3D models will work out of the box, or with some kind of smoothing or parameterization. Maybe if you have a large database of parameterized 3D models, and you combine that with rendering in 2D from different angles, then you can basically use the existing 2D-to-3D approach.

https://replicate.com/collections/3d-models

Are you sure that’s what they’re doing? In the demo, they show that the vector sections have been preserved, so there’s clearly more to the story. Maybe 2D -> 3D, map path to vertex, rotate, project path back into 2D?
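
To make that hypothesized pipeline concrete, here's a minimal sketch (purely illustrative, not Adobe's actual method, and it assumes the lift-to-3D step has already happened): rotate the path's anchor points about the vertical axis, then orthographically project back to 2D.

    # Minimal sketch of the guessed pipeline, not Adobe's actual method:
    # take 2D path anchors already lifted into 3D, rotate about the
    # vertical (y) axis, then orthographically project back to 2D.
    import numpy as np

    def rotate_and_project(points3d, angle_deg):
        a = np.radians(angle_deg)
        R = np.array([[ np.cos(a), 0.0, np.sin(a)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(a), 0.0, np.cos(a)]])  # rotation about y
        return (points3d @ R.T)[:, :2]  # drop depth: orthographic projection

    # Example: a unit square on the z=0 plane, viewed 45 degrees around.
    square = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
    print(rotate_and_project(square, 45.0))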
Good idea, but such a frustrating company to do business with as a consumer
> Adobe's Brian Domingo told Creative Bloq that like other Adobe Innovation projects, there's still no guarantee that this feature will be released commercially.

Well, I confess I got a little bit confused here :/ What's the purpose, then, of such an innovative solution if it's not commercialized?!

This looks very cool. I really hope the results are not overly cherry-picked like Adobe's first version of the text-to-vector generation that only worked particularly well for the showcased art styles.
I won't be excited until it's live in an app; company demos always look exciting.
This captures the essence of what "modern AI" is great at! Relieving the tedium of a highly constrained task.

Great demo. This will really help animators and artists.

Looks like Adobe finally found a way to cut down on piracy.

None of these new AI features will work on a pirated copy because it's all server-side processing.

I found the Project Turntable page on Adobe's site more interesting (with embedded video) on mobile than the linked CreativeBloq site:

https://www.adobe.com/max/2024/sessions/project-turntable-gs...

Well, when is some big bad company going to bully us into using their tools to convert 3D sculpts into flawlessly animatable models? I'll submit to their abuse and surrender my lunch money to them. Though not if it's Adobe; I still have some self-love.
Makes me think of

https://lookingglassfactory.com/looking-glass-go-spatial-pho...

which needs multiple views of your image from different angles and tries to make up the rest with AI.

Does anybody know a DIY solution to get a similar result? I am asking because $300 seems like a lot of money for this.
I thought this was one of those sarcastic headlines, highlighting the overuse of AI for basic processes.
Preserving the vector art after transforming is really cool. Anyone know the relevant papers? Or was this original research done by Adobe?
Came here assuming they were using AI for "rotate 90°" ready to drop a rant, but this was actually impressive.
I had a similar negative reaction to the grandiose title but in this case it was totally deserved and I am pretty blown away.
As someone who otherwise hates genAI, I must admit, this is actually a very cool demo and a very sensible application of AI.
How very strange: my partner was mocking up a room for our home just a few hours ago, and I asked whether an AI tool existed to rotate the incorrect angle of a sofa in a photo being used within the mock-up - and here it is on Hacker News just an hour later, just that tool.

Edit: apparently I misunderstood; it's only possible with vectors. Getting close to the reality mentioned, though!

It took me a while to understand that the second picture is actually a muted video with hidden controls.
Amazing, this will give ancient GIFs a facelift.
So, Pac-Man will have 3D characters now?
Yes, Ms. Pacman has the DDs, which is why PacMan himself gives her the D.

3 Ds.

Haven't been in the loop for a while, so a stupid question: why do people hate Adobe?
Not a graphic designer, so I can't speak for their reasons, only for mine. First, I had a Photoshop subscription, and when I cancelled they wanted to fine me for cancelling. Then they bought the Substance suite and made it subscription-only and very expensive (unless you buy the Steam version, whose price they doubled). That also hurt me, when I could barely afford those tools.

I'm better off now, but I have a long memory and prefer to vote with my wallet by paying multiples to any competitor...which generally speaking is better for me and everybody else, since competition is the mother of innovation.

Apropos, Marmoset Toolbag 5 is out; it comes with a permanent license, it has a huge materials library, and the interface is very snappy and doesn't feel like it has been programmed using Electron. You don't need to pay for Substance Painter this year.

Ah, and Adobe's latest exploit was a confusing TOS that more or less stated they would use your work that you edited locally with their software to train their AI models. I think they walked that one back when the wave of outrage hit them.

I want the actual 3D models.

This looks like the perfect tech for a cel shaded game!

NeRF or gaussian splatting?
I don't think either of those would work with a single 2D vector image.
Pretty incredible
Completely agree. I thought this was going to be some underwhelming nonsense, but that is legit impressive and something even a non-artist could benefit from.
Arguably non-artists benefit the most. This is a time saver for skilled artists but a whole new ability unlock for the unskilled ones.
This is basically like taking your 2D drawing to an artist and saying "draw this for me from different angles." Only now the artist is a computer, and probably costs you a lot less than paying a real artist every time you want to do this.

Animators are even more out of a job, I guess, but really have been for quite some time; almost no animation is entirely hand-drawn anymore.

A large amount of animation made in Japan is still initially animated by hand on paper, actually! The anime industry is remarkably conservative, technologically, which makes it all the more impressive that its animation production output dwarfs that of most other places, including ones that have largely switched over to 3D or puppet rigging for animation productions...
I was going to write about how this would be cool in a kids' drawing app, but then I had the thought that they might never feel the need to draw something from a different angle. I wonder what other activities have been lost to time and technology.
> what other activities have been lost to time and technology

- flintknapping

- the distaff activities: carding, spinning, weaving, etc.

- "teamster" as a very highly skilled occupation

EDIT: compare https://www.youtube.com/watch?v=JD2ua6q8FFA&t=475s with https://www.youtube.com/watch?v=gjZX6L5cnUg&t=11s

Socrates was against the invention of writing because it meant people would lose the skill to memorize and recite.

https://www.historyofinformation.com/detail.php?id=3439

It really has been destructive. He anticipated the day in which you could change people's memories by editing the internet.
Depends on whether the kid wants to learn to draw, or just wants to create drawings.
Why might a kid not want to draw something from a different angle? In my introduction to drawing course I was asked to draw my non-dominant hand every day for a week, each time from a slightly different angle.
Because instead of doing that, they could have the computer rotate their drawing to a new angle.
I'd love to see what this tool does with bad drawings, heh.
SIGGRAPH from over a decade ago has entered the chat...

https://www.youtube.com/watch?v=Oie1ZXWceqM

It may not be AI, but this single video blew my mind back in *2013* and I find myself thinking about it often.

The video you shared very much looks deserving of the 'AI' label to me.

Perhaps you mean it doesn't use some of the techniques driving the current AI boom, like LLMs or diffusion models.

This is great -- I'm always amazed how effective classical algorithms are at so-called "neural tasks". What's strange is how little SIGGRAPH tech ever makes it out as a consumer product.
> SIGGRAPH from over a decade ago has entered the chat...
>
> https://www.youtube.com/watch?v=Oie1ZXWceqM

A version of this was available in Photoshop for a long time, but I think the feature was deprecated and removed completely this year. I had used it for a few things here and there, but dedicated 3D tools were much better if you were working in that space.

I'm pretty tired seeing AI slapped on everything but holy shit this is impressive.
Is there another source? None of the images loaded for me.
I am sure this is the right time for hobbyists to make their own movies and animations.

I personally started programming, in part, to make simple animations like the ones you see in Scratch, and it’s incredible how accessible the tools are today for anyone looking to bring their ideas to life.

{"deleted":true,"id":41870220,"parent":41870040,"time":1729176717,"type":"comment"}
One thing is you can't be lazy when drawing the initial vector. With a car, for example, you can't just draw it from the top and expect it to generate a side shot after rotating; you may need to draw an isometric version first.
People have been using 3D models for 2D graphics for at least a decade. 3D models rotate, by default.

This demo shows generating a 3D model from a simple 2D shape. It'll fall flat on its face trying to 3D-model anything non-trivial, which raises the question: who cares?

Also, you'll want to animate the 3D model - which this doesn't do, so you'll soon be back to your usual 3D toolkit anyway.

The difference is that you don't need a 3D model for this.

You start with 2D vector graphics that is significantly easier to create.

This is in a wholly 2D program. The input is implied to be one completely flat vector drawing, which Illustrator turns into a 3d model, and renders back into flat vectors at multiple rotations, with no further work on the part of the artist.

(I say "implied" because that's all they're showing in the video presentation, there may be additional setup involved that they're skipping. This is inside Illustrator though, which has a long history of 3d extensions being very awkwardly shoved into a corner of its toolset.)