The FANG code bases are very large, date back years, and often rely on in-house libraries and frameworks rather than open-source ones. None of that internal code is available to Anthropic or OpenAI, so their models have little or no visibility into it.
Combine that with the fact that these are not reasoning or thinking machines but probabilistic (image/text) generators, and the conclusion follows: they can't generate what they haven't seen.
LLMs do learn dynamically through their context window, and this in-context learning happens much faster than human learning, sometimes with greater capability than a human and sometimes much worse.
For a code base as complex and as closed-source as Google's, the problem an LLM faces is largely the same as a human's: how much of it can it fit into the context window?
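To get a feel for the mismatch, here is a back-of-the-envelope sketch. It assumes the common rough heuristic of ~4 characters per token; the 200,000-token window and the repository size are illustrative figures I'm making up for the example, not any specific model's or company's numbers.

```python
# Back-of-the-envelope: how much of a repository fits in a context window?
# Assumption: ~4 characters per token (a common rough heuristic).
# Assumption: a 200,000-token window, used only as an illustrative figure.

def estimate_tokens(num_chars: int) -> int:
    """Rough token estimate via the ~4 chars/token heuristic."""
    return num_chars // 4

def fraction_of_window(repo_chars: int, window_tokens: int = 200_000) -> float:
    """Fraction of the context window a codebase of repo_chars would occupy."""
    return estimate_tokens(repo_chars) / window_tokens

# A hypothetical mid-sized service: ~2,000 files averaging ~5 KB each
# -> ~10 MB of source text.
repo_chars = 2_000 * 5_000
print(f"~{estimate_tokens(repo_chars):,} tokens "
      f"({fraction_of_window(repo_chars):.0%} of a 200k window)")
```

Even this modest hypothetical repo comes out to roughly 2.5 million tokens, more than ten times the window, so the model, like a human, can only ever look at a slice of the code at once.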
Even internal code is usable by the model, because it's a pattern-matching machine: there should be documentation available, or it can simply study the code the way a human would.