Anthropic Cowork feature creates 10GB VM bundle on macOS without warning
https://github.com/anthropics/claude-code/issues/22543

Claude Cowork uses the Claude Code agent harness running inside a Linux VM (with additional sandboxing, network controls, and filesystem mounts). We run that through Apple's Virtualization framework or Microsoft's Host Compute System. This buys us three things we like a lot:
(1) A computer for Claude to write software in, because so many user problems can be solved really well by first writing custom-tailored scripts against whatever task you throw at it. We'd like that computer to not be _your_ computer so that Claude is free to configure it in the moment.
(2) Hard guarantees at the boundary: other sandboxing solutions exist, but for a few reasons, none of them satisfy us as much or let us make similarly sound guarantees about what Claude will and won't be able to do.
(3) As a product of 1+2, more safety for non-technical users. If you're reading this, you're probably equipped to evaluate whether or not a particular script or command is safe to run - but most humans aren't, and even the ones who are often experience "approval fatigue". Not having to ask for approval is valuable.
It's a real trade-off though and I'm thankful for any feedback, including this one. We're reading all the comments and have some ideas on how to maybe make this better - for people who don't want to use Cowork at all, who don't want it inside a VM, or who just want a little bit more control. Thank you!
https://code.claude.com/docs/en/devcontainer
It does work but I found pretty quickly that I wanted to base my robot sandbox on an image tailored for the project and not the other way around.
> All-in-One Sandbox for AI Agents that combines Browser, Shell, File, MCP and VSCode Server in a single Docker container.
That made me realize it wants to run an Apple virtualization VM of its own, but can't since it's inside one already - imo the error messaging here could be better, or, considering that it already is in a VM, it could perhaps bypass the VM altogether. Right now I still haven't been able to try Cowork because of this error.
It would be really nice to ask the user, “Are you sure you want to use Cowork, it will download and install a huge VM on your disk.”
Sorry for the ask here, but I'm unaware of other avenues of support, as tickets on the Claude Code repo keep getting closed since this is not a CC issue.
https://github.com/anthropics/claude-code/issues/26457
https:...
I'd like to explore that topic more too, but I feel like the context of "we deferred to macOS/Windows" is highly relevant here. I'd even argue that should be the default position, and that "extensive justification" should be required to NOT do that.
This is the most interesting requirement.
So all the sandbox solutions that were recently developed all over GitHub fell short of your expectations?
This is half surprising, since many people who were using AI to solve the sandboxing problem have claimed to have done so over several months, and the best we have is Apple containers.
What were the few reasons? Surely there has to be some strict requirement that everyone else is missing.
But still, having a 10 GB claude.vmbundle doesn't make any sense.
In a similar fashion, the Apple Podcasts app decided to download 120 GB of podcasts for no apparent reason and never deleted them. It even showed up as "System Data" and had me looking for external drive solutions.
I use my MacBook for a mix of dev work and music production and between docker, music libraries, update caches and the like it’s not weird for me to have to go for a fresh install once every year or two.
Once that gets filled up, it's pretty much impossible to figure out where the giant block of storage went.
I downloaded several macOS installers - not for the MacBook I use, but intending to use them to create a partitioned USB installer (they were for macOS versions I clearly couldn't even run on my current MacBook). Then, after creating the USB, since I was short on space, I deleted the installers, including from the Trash.
Weirdly, I did not reclaim any space, and I wondered why. After scratching my head for a while, I asked an LLM, which directed me to check the system snapshots. I had previously disabled Time Machine backups and snapshots, and yet I saw these huge system snapshots containing the files I had deleted - and the kicker was, there was no way to delete them!
Again I scratched my head for a while, looking for a solution other than wiping the MacBook and reinstalling macOS, and then I had the idea to just restart. Lo and behold, the snapshots were gone after restarting. I was relieved, but also pretty pissed off at Apple.
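For anyone who hits the same thing: tmutil can list and delete those local snapshots directly, no reboot needed. A sketch (the date stamp is illustrative - use whatever listlocalsnapshots prints):

tmutil listlocalsnapshots /
sudo tmutil deletelocalsnapshots 2024-01-01-123456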
You can enable "calculate all sizes" in Finder with Cmd+J. I think it only works in list view however.
I should not have to hack through /Library files to reclaim space on a TB drive because macOS wanted to put 200 GB of crap there in an opaque manner and not give the user ANY direct way to get their space back.
Your friend is called ncdu and can be used as follows:
sudo ncdu -x -e --exclude Volumes /System/Volumes/Data/
The exclude for Volumes is necessary because otherwise ncdu ends up in an infinite loop - "/Volumes/Macintosh\ HD/Volumes/" can be repeated ad nauseam and ncdu's -x flag doesn't catch that for whatever reason.

One would think that's an extremely common use case, and it will only grow the more years iMessage exists. Just offload them to the cloud - charge me for it if you want - but every other free message service that exists has no problem doing that.
That's one way to drive sales for higher priced SSDs in Apple products. I'm pretty sure that that sort of move shows up as a real blip on Apple's books.
I also prompt Warp/Gemini CLI to identify unnecessary caches and similar data and delete them.
https://developer.hashicorp.com/vagrant is still a thing.
The market for Cowork is normals getting to tap into an executive assistant who can code. Pros are running their consumer "claws" on a separate Mac Mini. Normals aren't going to do that, and offices aren't going to provision two machines for everyone.
The VM is an obvious answer for this early stage of scaled-up research into collaborative computing.
AI really does give many users the ability to develop a complete product, but the quality is decreasing. Professional developers will be in demand once those products/features become popular.
The first batch of users of a new product has to take on more responsibility, testing the product like lab rats.
Looking at the number of issues, outages, and rookie mistakes the employees are making leads me to believe that most of them are below junior level.
If anyone were to re-interview everyone at Anthropic for their own roles with their own interview questions, I would guess that >75% of them would not pass their own interviews.
The only teams that would pass them are the Bun team and some of the other recently acquired startups.
I also noticed this 10GB VM from CoWork. And was also surprised at just how much space various things seem to use for no particular reason. There doesn't seem to be any sort of cleanup process in most apps that actually slims down their storage, judging by all the cruft.
Even Xcode. The command line tools install and keep around SDKs for a bunch of different OSes, even though I haven't launched Xcode in months. Or it keeps a copy of the iOS simulator even though I haven't launched one in over a year.
Not a new problem, unfortunately. DevCleaner is commonly used to keep it under control: https://github.com/vashpan/xcode-dev-cleaner
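If you'd rather not install anything, a couple of commands usually reclaim the big chunks (a sketch; DerivedData is safe to delete since Xcode just rebuilds it on demand):

# Remove simulators whose runtimes this Xcode no longer supports
xcrun simctl delete unavailable
# Clear build intermediates
rm -rf ~/Library/Developer/Xcode/DerivedData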
ChatGPT's code execution container contains 56 vCPUs!! Back then, simonw mentioned:
> It appears to have 4GB of RAM and 56 (!?) CPU cores https://chatgpt.com/share/6977e1f8-0f94-8006-9973-e9fab6d244...
I'm seeing something similar on a free account too: https://chatgpt.com/share/69a5bbc8-7110-8005-8622-682d5943dc...
On my paid account, I was able to verify this. I was also able to get a CPU-bound workload running on all cores. Interestingly, it was not able to fully saturate them, though - despite trying for 20-odd minutes. I asked it to test with stress-ng, but it looks like it had no outbound connectivity to install the tool: https://chatgpt.com/share/69a5c698-28bc-8005-96b6-9c089b0cc5...
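For reference, the saturation test was roughly this shape (a sketch of what I had it run, not the exact transcript):

# Spin up one busy loop per reported core, watch utilization, then clean up
for i in $(seq "$(nproc)"); do yes > /dev/null & done
sleep 30
kill $(jobs -p)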
Anyways, that's a lot of compute. Not quite sure why it's necessary for a Plus account. Would love to hear some thoughts on this.
It seems to me that the main issue here is painful disconnects between the VM and the host system. The kernel in the VM wants to manage memory and disk usage and that management ultimately means the host needs to grant the guest OS large blocks of disk and memory.
Is anyone thinking about or working on narrowing that requirement? Like, I may want 99% of what a VM does, but I really want my host system to ultimately manage both memory and disk. I'd love it if, in the Linux VM, I had a bridge for file IO which interacted directly with the host file system, and a bridge in the memory management system which ultimately called the host system's memory allocation API directly and disabled the kernel's own memory management.
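Parts of that exist already, for what it's worth: virtio-fs is pretty much the file-IO bridge described here - the guest mounts a host-exported directory directly instead of a virtual block device - and virtio-balloon lets the host reclaim guest memory. A minimal sketch from inside a Linux guest (the "shared" tag is whatever name the VMM exports - an assumption here):

# Mount a host directory shared over virtio-fs rather than a disk image
sudo mount -t virtiofs shared /mnt/host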
Containers and cgroups are basically how Linux does this. But that's a pretty big surface area that I doubt any non-Linux system could adopt.
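As a sketch, with cgroup v2 the host kernel caps a process tree's memory directly, no guest kernel in the way (assuming cgroup v2 is mounted at the usual /sys/fs/cgroup path):

# Create a cgroup, cap it at 2 GiB, and move the current shell into it
sudo mkdir /sys/fs/cgroup/sandbox
echo 2G | sudo tee /sys/fs/cgroup/sandbox/memory.max
echo $$ | sudo tee /sys/fs/cgroup/sandbox/cgroup.procs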
Unfortunately, unlike Linux, macOS doesn't have a great out-of-the-box story there; even Apple's first-party OCI runtime is based on per-container Linux VMs.
Did you mean "than accurate" rather than "and accurate"? Having a more accurate issue description only sounds like a good thing to me.
Imagine a user has a vague idea of something that is broken; the LLM will then choose to interpret their comment as whatever it thinks the most likely underlying problem is, without actually checking anything.
i kept telling myself this BUT NEVER ELECTRON AGAIN.
this is the usual reason for divorce /s
For personal use, where I have a Pro subscription and venture into exploring all the other features/products they have... I mean, the experience outside of Claude Code and the terminal has been... bad.
yeah they're shipping too fast and everything is buggy as shit
- the fork conversation button doesn't even work anymore in the VSCode extension
- sometimes when I reconnect to my remote SSH in VSCode, previously loaded chats become inaccessible. The chats are still there in the .jsonl files but for some reason the CC extension becomes incapable of reading them.
It is quite normal for me to have to force-close Claude Desktop.
Try this if you have claude code -- ls -a your home dir and see all the garbage claude creates.
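On my machine that mostly means these two (paths from my own setup - yours may differ):

ls -lad ~/.claude ~/.claude.json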
> Filed via Claude Code
I assume part of it is true, but determining which part is true is the hard part. I’ve lost a lot of time chasing AI-written bug reports that were actually something else wrong with the user’s computer. I’m assuming the claims of “75% faster” and other numbers are just AI junk, but at least someone could verify if the 10GB VM exists.
So crazy - on a Windows desktop, the most I'd complain about is it being hardcoded to the system drive (looking at you, Ollama).
Storage should be cheaper, complain about Apple making you pay a premium.
So, contrary to the GitHub issue, my problem is that it's not enough space. The fix is to navigate to ~/Library/Application\ Support/Claude/vm_bundles and then ask Claude Code to upsize the disk to a sparse 60 GiB file, giving Cowork much more space to work in while not immediately taking up 60 GiB.
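Concretely, the upsize boils down to sparsely extending the backing image (a sketch - "Disk.img" is a guess at the file name inside the bundle, and the guest's partition/filesystem still has to be grown from inside the VM afterwards, which is the part I handed to Claude):

cd ~/Library/Application\ Support/Claude/vm_bundles
# Extend the image to 60 GiB without allocating the space up front;
# truncate preserves existing data and APFS keeps the new tail sparse
python3 -c 'import os; os.truncate("claude.vmbundle/Disk.img", 60 * 2**30)'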
Bigger picture, though, what this teaches me is that my knowledge is still useful in guiding the AI to do things, so I'm not obsolete yet!
Yea, that's a recipe for problems.
Pondering... Noodling... Some other nonsense...