Interesting, why is this?
I can think of two reasons: 1. It's immediately clear to users that they're seeing content that doesn't belong to your business but to your business's users. That's maybe less relevant for GitHub, but imagine if someone uploaded something phishing-y and it was visible on a page with a URL like google.com/uploads/asdf.

2. If a user uploaded something like an HTML file, you wouldn't want it to be able to run JavaScript on google.com (because then it could steal cookies and do other bad things). CSP rules exist, but it's a lot easier to sandbox user content entirely like this, as sketched below.
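
For concreteness, here's a minimal sketch (Node/TypeScript; the upload directory and port are invented) of the kind of headers that neutralize uploaded HTML. The bare "sandbox" CSP gives the response an opaque origin, so even if the file is HTML it can't run scripts against your cookies:

    // Minimal sketch: serve user uploads with headers that prevent
    // uploaded HTML from executing in our origin's context.
    import { createServer } from "node:http";
    import { readFile } from "node:fs/promises";
    import { join, normalize } from "node:path";

    const UPLOAD_DIR = "/srv/user-uploads"; // hypothetical storage location

    createServer(async (req, res) => {
      // Resolve the requested file and refuse path traversal.
      const path = (req.url ?? "/").split("?")[0];
      const safePath = normalize(join(UPLOAD_DIR, path));
      if (!safePath.startsWith(UPLOAD_DIR)) {
        res.writeHead(400).end("bad path");
        return;
      }
      try {
        const body = await readFile(safePath);
        res.writeHead(200, {
          // "sandbox" with no allowances: no scripts, no forms, and an
          // opaque origin, so uploaded HTML can't touch our cookies.
          "Content-Security-Policy": "sandbox",
          // Stop browsers from sniffing e.g. an "image" into executable HTML.
          "X-Content-Type-Options": "nosniff",
          // Serve everything as a download rather than rendering it inline.
          "Content-Disposition": "attachment",
        });
        res.end(body);
      } catch {
        res.writeHead(404).end("not found");
      }
    }).listen(8080);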

Because there's software out there (and entities such as Google) that assumes the same level of trust in a subdomain as in its parent and siblings. If something bad ends up being served on one subdomain, they can distrust the whole tree, which can be very bad. So you isolate user-provided content on its own second-level domain (SLD) to reduce the blast radius.
I've read that it's because if a user uploads content that gets your domain onto a blocklist, you can switch user-content domains after purging the bad content. If that content is hosted under your primary domain, your primary domain is still going to be on that blocklist.

For example, I have a domain that allows users to upload images, and some people abuse that. If Google delists the user-content domain, my primary domain hasn't lost any SEO.

CDNs can be easier to configure: you can more easily colocate them in POPs if they're segregated from your main domain, and you have more options for geo-aware routing and name resolution.

Also, in the case of HTTP/1, browsers limit the number of simultaneous connections per host name, and this was a technique for doubling those parallel connections. With the rise of HTTP/2 this is becoming moot, and I'm not sure of the exact rules in modern browsers, so I don't know whether it still holds anyway.

There are historical reasons regarding browsers' per-host connection limits. You would put your images, scripts, etc. each on their own subdomain for the sake of increased parallelism in content retrieval; CDNs came after that. I feel like I was taught in my support role at a web host that this was _the_ original reasoning for subdomains, but that may have been someone's opinion.
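
As a sketch of what that "domain sharding" looked like in practice (the shard hostnames and hash are illustrative, not any particular site's scheme):

    // HTTP/1.x domain sharding: spread asset URLs across a few hostnames so
    // the browser opens more parallel connections (limits were commonly
    // ~6 per host name). Hostnames below are made up.
    const SHARDS = [
      "assets1.example.com",
      "assets2.example.com",
      "assets3.example.com",
    ];

    // Hash the path so a given asset always lands on the same shard,
    // keeping browser caches effective across page views.
    function shardFor(path: string): string {
      let hash = 0;
      for (const ch of path) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
      return SHARDS[hash % SHARDS.length];
    }

    function assetUrl(path: string): string {
      return `https://${shardFor(path)}${path}`;
    }

    // The same path always resolves to the same shard host.
    console.log(assetUrl("/img/logo.png"));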
Search engines, anti-malware software, etc. track sites' reputations. You don't want users' bad behavior affecting the reputation of your company's main domain.
Subdomains could also set cookies on parent domains, which in turn causes a security problem between sibling subdomains (sketched below).

I presume this issue has been mitigated over the years by browsers as part of the third-party cookie restrictions...?

Definitely was a bad security problem.
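
For anyone who hasn't seen it in action, here's a minimal sketch (Node/TypeScript, hostnames invented) of the trick: a server on uploads.example.com plants a cookie scoped to the parent domain, so every sibling subdomain receives it. A separate registrable domain like githubusercontent.com closes this off entirely, since cookies can't cross registrable domains:

    // Sketch: a response from uploads.example.com setting a cookie scoped
    // to the parent domain. Hostnames are illustrative.
    import { createServer } from "node:http";

    createServer((req, res) => {
      res.writeHead(200, {
        // Domain=example.com widens the cookie from uploads.example.com to
        // the whole registrable domain, so the browser will send it to
        // www.example.com, admin.example.com, and every other sibling.
        "Set-Cookie": "session=attacker-chosen; Domain=example.com; Path=/",
        "Content-Type": "text/plain",
      });
      res.end("cookie planted for the parent domain\n");
    }).listen(8080);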

Another aspect is HSTS (HTTP Strict Transport Security), whose headers can extend to subdomains via the includeSubDomains directive.

If your main web page is available at example.com, and the CMS starts sending HSTS headers, stuff on subdomain.example.com can suddenly break.
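
For reference, the header in question looks like this (minimal Node/TypeScript sketch; note that browsers only honor HSTS when it arrives over HTTPS, so the plain-HTTP server here just shows the header's shape):

    // Sketch: the HSTS header that causes the breakage described above.
    // With includeSubDomains, a browser that has seen this on example.com
    // refuses plain-HTTP connections to every subdomain for max-age seconds.
    import { createServer } from "node:http";

    createServer((req, res) => {
      res.writeHead(200, {
        "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
      });
      res.end("ok\n");
    }).listen(8080);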

{"deleted":true,"id":41513845,"parent":41513735,"time":1726078213,"type":"comment"}
{"deleted":true,"id":41513774,"parent":41513735,"time":1726077765,"type":"comment"}