Hacker News
Is Cloudflare becoming a mob outfit? They sell scraping countermeasures, but now they are selling scraping too.

And they can pull it off because of their reach over the internet with the free DNS.

I think the simple explanation is that they weren't selling scraping countermeasures, they were selling web-based denial of service protection (which may be caused by scrapers).
It was always also sold as bot protection, with anti-scraping/crawling features like https://www.cloudflare.com/lp/pg-ai-crawl-control/
Ask yourself, why would a scraper ddos? Why would a ddos-protection vendor ddos?
Because the scraper is either impatient, careless, or indifferent; and if they scrape for training data, they don't plan to come back. If they don't plan to come back, they don't care if you tighten up crawling protections after they have moved on. In fact, they are probably happy that they got their data and their competition won't get it.
> they don't plan to come back

To me the current behavior of those scrapers tells me that "they don't plan", period.

It looks like they hired a bunch of excavators and are digging 2 meters deep across whole fields, looking for nuggets of gold, and piling the dirt into a huge mountain.

Once they realize the field was bereft of any gold but full of silver? Or that the gold was actually 2.5 meters deep?

They have to go through everything again.

The number of git forges behind Anubis et al and the numerous public announcements should be enough.

Scrapers seem to be exceedingly careless in using public resources. The problem is often not even DDoS (as in overwhelming bandwidth usage) but rather DoS through excessive hits on expensive routes.

Ask yourself, why would everyone except you say that they do?
> Ask yourself, why would a scraper ddos?

No need to ask; I can tell you exactly: because they have no regard for anything but their own profit.

Let me give you an example from this mom-and-pop shop known as Anthropic.

You see, they have this thing called ClaudeBot, and at least initially it scraped by iterating through IPs.

Now, you have these things called shared hosting servers, typically running 1,000-10,000 domains of actual low-volume websites on 1-50 or so IPs.

Guess what happens when it is your network's turn to bend over? The whole hosting company's infrastructure goes down, as each server has hundreds of ClaudeBots crawling hundreds of vhosts at the same time.

This happened for months. It's the reason they are banned in WAFs by half the hosting industry.
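To see why per-IP crawling is so brutal for shared hosting, here is a back-of-the-envelope sketch. All numbers are hypothetical, picked from the ranges mentioned above; the point is the multiplication, not the exact figures.

```python
# Hypothetical figures illustrating the amplification: a crawler that walks
# IP ranges hits every vhost behind each IP, and shared hosting packs many
# sites onto few IPs.
domains_per_ip = 1000       # shared hosting: often 1,000+ low-volume sites per IP
concurrent_crawlers = 100   # "hundreds of claudebots" hitting one server at once
pages_per_site = 50         # a modest crawl of each site

requests_hitting_one_server = domains_per_ip * concurrent_crawlers * pages_per_site
print(requests_hitting_one_server)  # prints 5000000
```

Five million requests against a single box that normally serves a trickle of traffic: that is a DoS even if no single site was crawled aggressively.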

Their free DNS is only a small piece of the pie.

The fact that 30%+ of the web relies on their caching, routing, and DDoS protection services is the main pull.

Their DNS is really only for data collection and to front as "goodwill".

> The fact that 30%+ of the web relies on their caching services

30% of the web might use their caching services. 'Relies on' implies that it wouldn't work without them, which I doubt is the case.

It might be the case for the biggest 1% of that 30%. But not the whole lot.

>'Relies on' implies that it wouldn't work without them

Last time Cloudflare went down, their dashboard was also unavailable, so you couldn't turn off their proxy service anyway.

No: https://developers.cloudflare.com/browser-rendering/rest-api...
That's not the perfect defense you think it is. Plenty of robots.txts[1] technically allow scraping their main content pages as long as your user agent isn't explicitly disallowed, but in practice they're behind Cloudflare, so they still throw up a Cloudflare bot check if you actually attempt to crawl.

And forget about crawling. If you have a less reputable IP (basically every IP in third-world countries is less reputable, for instance), you can be CAPTCHA'ed to no end by Cloudflare even as a human user, on the default setting, so plenty of site owners with more reputable home/office IPs don't even know what they subject a subset of their users to.

[1] E.g. https://www.wired.com/robots.txt to pick an example high up on HN front page.

Cloudflare has been trying to mediate between publishers and AI companies. If publishers are behind Cloudflare and Cloudflare's bot detection stops scrapers at the publishers' request, the publishers can allow their data to be scraped (via this endpoint) for a price. It creates market scarcity. I don't believe the target audience is you and me, unless you own a very popular blog that AI companies would pay you for.
The next step will be their default "free" anti-bot denying all bots but their own. They know full well nearly nobody changes the default.
no? it takes 10 seconds to check:

> The /crawl endpoint respects the directives of robots.txt files, including crawl-delay. All URLs that /crawl is directed not to crawl are listed in the response with "status": "disallowed".

You don't need any scraping countermeasures for crawlers like those.
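For anyone unsure what "respects robots.txt, including crawl-delay" buys you in practice, here is a minimal sketch using Python's stdlib parser. The robots.txt content and the "cloudflare-crawler" user agent string are made-up examples (the docs apparently let you configure the agent), not Cloudflare's actual values.

```python
# Sketch of robots.txt semantics a well-behaved crawler follows:
# Disallow rules decide what may be fetched, Crawl-delay paces requests.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
# Normally you'd rp.set_url("https://example.com/robots.txt"); rp.read();
# here we parse an inline example instead.
rp.parse("""
User-agent: *
Disallow: /private/
Crawl-delay: 10
""".splitlines())

print(rp.can_fetch("cloudflare-crawler", "https://example.com/public/page"))   # True
print(rp.can_fetch("cloudflare-crawler", "https://example.com/private/page"))  # False
print(rp.crawl_delay("cloudflare-crawler"))                                    # 10
```

A crawler that checks `can_fetch` before every request and sleeps `crawl_delay` seconds between them is exactly the kind you don't need countermeasures against.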

So what's the user agent for their bot? They don't seem to specify the default in the docs, and it looks like it's user-configurable. So it's yet another opt-out bot that your web server needs to match with special behaviour to block.
>So it's yet another opt-out bot that your web server needs to match with special behaviour to block

Given that malicious bots are allegedly spoofing real user agents, "another user agent you have to add to your list" seems like the least of your problems.

It is Cloudflare who made the claim that their bot is well behaved, unlike those other bots, and that its behaviour can be controlled via robots.txt.

If I need to treat cloudflare bots the same as malicious bots, that undermines their claim.

Not 'allegedly' - it's just a fact. Even if you're not malicious, however, it's still sometimes necessary, because the server may serve different sites to different browsers and check user agents to decide which experience to deliver. So even for legitimate purposes you need to at least use the prefix of the user agent that the server expects.
As they explain in the docs, their crawler will respect robots.txt's disallowed user agents - right after the section that explains how to change your user agent.
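The opt-out complaint above boils down to this: server-side blocking only works if you know the exact user-agent string to match, and a configurable agent defeats that. A minimal sketch of prefix-based UA blocking; the "Cloudflare-Crawler" prefix is a guess for illustration, not a documented value (ClaudeBot and GPTBot are real agents).

```python
# Hypothetical blocklist: matching works only for agents that announce
# themselves with a known, stable prefix.
BLOCKED_UA_PREFIXES = ("Cloudflare-Crawler", "ClaudeBot", "GPTBot")

def is_blocked(user_agent: str) -> bool:
    """Return True if the request should get a 403 based on its UA prefix."""
    return user_agent.startswith(BLOCKED_UA_PREFIXES)

print(is_blocked("ClaudeBot/1.0 (+https://www.anthropic.com)"))  # True
print(is_blocked("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))   # False
```

If the crawler lets its operator set the user agent to anything - including a browser-like string - this whole approach falls over, which is the point being made.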
Was it ever not one? They protect a lot of DDoS-for-hire sites from DDoS by their competitors; in return they increase the quantity of DDoS on the internet. They offer you a service for $150, then months later suddenly demand $150k within 24 hours or they shut down your business. If you use them as a domain registrar, they will hold your domain hostage.
yeah, GP completely fails to realize that Cloudflare has always played both sides. that is their entire business model, and it was transparent from the beginning that they would absolutely do the same here.
I think there's some space between the countless bots of everyone else - ignoring everything, pulling from residential proxies - and this supposedly slower, better-behaved, smarter bot.

Like, there's a difference between dozens of drunk teenagers thrashing the city streets in an illegal street race and a taxi driver.

They always have been.

They also use their dominant position to apply political pressure when they don’t like how a country chooses to run things.

So yeah, we’ve created another mega corp monster that will hurt for years to come.

Well this scraper honours robots.txt so I'm sure most AI crawlers will find it useless.
Do you have any evidence to support this view?
Who else would MITM 30% of the internet?
Read up on who founded it and how. It's not a secret at all.
Any kind of source for the claim?
For a long time Cloudflare has proudly protected DDoS-as-a-service sites (though of course, they claim they don't "host" them).
Are you using the word "claim" to call them wrong or for a more confusing reason?

Because I'm pretty sure they are not in fact wrong.

The distinction between a caching proxy and an origin server is pretty meaningless when you're serving static content, if you ask me.
There's a blurry line there, true.

On the other hand when a page is small and static enough that it's basically just a flyer, I also care a lot less about who hosts it.

If they ever sell or the CEO shifts, yes. For the meantime, they have not given any strong indication that they're trying to bully anybody. I could see things changing drastically if the people in charge are swapped out.