AI chat is heading the same way. So I built a fully interactive demo that shows what an ad-supported AI chatbot could actually look like: https://99helpers.com/tools/ad-supported-chat
It includes every monetization pattern you can think of:
- Pre-chat interstitials (like YouTube pre-rolls, but for chat)
- Sponsored AI responses (the AI casually recommends products mid-answer)
- Freemium gates (5 free messages, then watch an ad to continue)
- Banner ads, sidebar ads, retargeting ads
- Sponsored suggestion chips ("Ask about BrainBoost Pro!")
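The freemium gate is the easiest of these patterns to make concrete. A minimal sketch of the counting logic, in Python rather than whatever the demo actually uses, with all names (`ChatSession`, `watch_ad`, the credit amounts) purely illustrative:

```python
# Toy sketch of the freemium-gate pattern: after a fixed number of free
# messages, the user must "watch an ad" to earn more message credits.

FREE_MESSAGES = 5

class ChatSession:
    def __init__(self):
        self.messages_used = 0
        self.ad_credits = 0

    def can_send(self) -> bool:
        # Free quota first, then ad-earned credits.
        return self.messages_used < FREE_MESSAGES or self.ad_credits > 0

    def send(self, text: str) -> str:
        if not self.can_send():
            return "Watch an ad to unlock more messages."
        if self.messages_used >= FREE_MESSAGES:
            self.ad_credits -= 1  # spend an ad-earned credit
        self.messages_used += 1
        return f"AI reply to: {text}"

    def watch_ad(self, unlocked: int = 3) -> None:
        # Each completed ad view grants a few more messages.
        self.ad_credits += unlocked

session = ChatSession()
replies = [session.send(f"msg {i}") for i in range(6)]
gated = replies[-1]           # the sixth message hits the gate
session.watch_ad()
after_ad = session.send("msg 6")  # now goes through on an ad credit
```

The point of the pattern is that the gate is cheap to implement and trivially tunable: the operator can ratchet `FREE_MESSAGES` down over time without touching anything else.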
Tech question? It steers you to its cloud. Medical question? It steers you toward a sponsored treatment. Or maybe the mechanism of your injury means this sponsored lawyer can get you compensated?
Oh, and I infer from your chat history that you're expecting a child. That house is probably too small now, so our realtor in that neighborhood can help!
How much would Vercel be willing to pay OpenAI and Anthropic to nudge ChatGPT and Claude towards producing Vercel-compatible next.js apps? Maybe the models could even ask, "Do you want me to deploy the app to Vercel using their free plan?".
Technically, that means being able to install Linux, run local models, and use open-source software as we see fit.
Legally, it means opposing compliance guises that erode those rights, like backdoors or restrictions on what can run. Otherwise we are no longer really in control of the hardware we own; we have to adjust to the whims of the controller/operator, who could, at a moment's notice, default to these dark patterns for "pragmatic reasons" of their own that don't align with your interests.
We know enough bad stories about "internet of things" devices. Anyone interested in FOSS and control should probably invest in this angle.
The incentives will be:
1. Get people psychologically dependent in any way possible.
2. Incentivize any "creators" that help with #1. Pose as "content neutral", while actually funding and pumping any content that creates "engagement" regardless of harm.
3. Collate as much information from external sources on each user as possible.
4. Use every interaction with a user to improve the information leverage accumulated in #3.
5. Feed ads to users based on surveillance-informed predicted vulnerabilities, in order to maximize ad valuations. Special shout-out to scams: because they work, they pay.
6. Once the user experience is thoroughly enshittified, start enshittifying the ad customer market by raising prices, minimizing the margins left for product and service advertisers.
7. Present the company as evidence of US strength in tech, as opposed to a scaled-up, centralized, multi-directed economic parasite.
TLDR: Surveillance-leveraged ads are many times worse than plain ads, with AI magnifying surveillance intake and leverage to unprecedented highs.
Privacy needs to start being treated like every other security risk. Because every vulnerability will be increasingly exploited, and exploited increasingly well.
As long as it is legal to scale up conflicts of interest (surveillance-informed manipulation, paying for and pumping harmful "creator" content, selling ads to scammers), harms will keep scaling up.
Sites should not have any safe harbor for content they pay for, and for content they are paid to deliver.
You also forgot to elaborate on the later stage of the company life cycle, where the MBAs take over and serve only themselves and Wall Street.
Product and product development become a cost center, cut down to a bare-minimum skeleton crew. Customers are an inconvenience and exist only for the company to extract maximum benefit from while offering the minimum.
Actual product support is killed, and user-supported forums are promoted instead. Useful idiots do the work unpaid for a mere digital badge.
Any new product feature that actually gets developed serves the company, not the users. What makes it through is more data extraction, ads, surveillance, or a dark pattern to trick the user out of more money.
After they have their niche by the balls, they enshittify the product as much as the users are willing to tolerate, and then some.
There will, I'm sure, be the ability to pay and not have ads, just like there is on streaming platforms, podcasts, etc.
Or should there be tax supported free AI?
These trends combined will mean that eventually it will seem old-fashioned to use a remotely-hosted model for anything other than the most demanding tasks. Just as we don't use mainframes for computation anymore outside of niche tasks like 3D render farms.
The only people using ad-supported AI will be people who can't afford a newer device with local inference. So it will be more or less like the web today, where ads are primarily targeted and viewed by less-affluent and less-technical users.
Of course, I can't see the future, but it would take a lot for those trend lines to not converge. The only thing that could delay the convergence is true AGI, but I'm currently not a believer.
Instead of interacting with the cloud model directly, run a simple local model to interact with the cloud model and have it filter out all the ads before they reach you.
This is already what chatbots do when interacting with the rest of the Web: instead of you visiting websites yourself, they collect the information from those sites for you and present it in a format of your choice, without the sites' ads.
I don't see the ad model working out for chatbots in the long run, given that those AI models are already the perfect ad filter.
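The filtering proxy described above can be sketched in a few lines. Here a keyword heuristic stands in for the local classifier model, and `cloud_chat` is a stub for a real remote API call; both are assumptions to keep the sketch self-contained, not anyone's actual implementation:

```python
# Sketch of the "local model as ad filter" idea: a thin local layer sits
# between you and the ad-supported cloud chatbot and strips sponsored
# content before it reaches you.
import re

# Stand-in for a small local model judging "is this paragraph an ad?".
SPONSORED_MARKERS = re.compile(
    r"(sponsored|partner offer|brought to you by)", re.IGNORECASE
)

def cloud_chat(prompt: str) -> list[str]:
    # Stub for a remote, ad-supported model returning response paragraphs.
    return [
        "Here is how to fix your build error.",
        "Sponsored: BrainBoost Pro can make you 10x faster!",
        "You can also check the compiler docs for details.",
    ]

def local_filter(paragraphs: list[str]) -> list[str]:
    # A real setup would run each paragraph through a local classifier;
    # the regex heuristic above keeps this sketch runnable on its own.
    return [p for p in paragraphs if not SPONSORED_MARKERS.search(p)]

clean = local_filter(cloud_chat("How do I fix this build error?"))
```

Of course, if this pattern caught on, providers would respond by blending ads into the answer text itself, which is exactly the arms race the parent comments describe.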