
Anthropic Cowork feature creates 10GB VM bundle on macOS without warning

https://github.com/anthropics/claude-code/issues/22543
Hi, Felix from Anthropic here. I work on Claude Cowork and Claude Code.

Claude Cowork uses the Claude Code agent harness running inside a Linux VM (with additional sandboxing, network controls, and filesystem mounts). We run that through Apple's virtualization framework or Microsoft's Host Compute System. This buys us three things we like a lot:

(1) A computer for Claude to write software in, because so many user problems can be solved really well by first writing custom-tailored scripts against whatever task you throw at it. We'd like that computer to not be _your_ computer so that Claude is free to configure it in the moment.

(2) Hard guarantees at the boundary: Other sandboxing solutions exist, but for a few reasons, none of them are as satisfying or allow us to make similarly sound guarantees about what Claude will and won't be able to do.

(3) As a product of 1+2, more safety for non-technical users. If you're reading this, you're probably equipped to evaluate whether or not a particular script or command is safe to run - but most humans aren't, and even the ones who are often experience "approval fatigue". Not having to ask for approval is valuable.

It's a real trade-off though, and I'm thankful for any feedback, including this one. We're reading all the comments and have some ideas on how to maybe make this better - for people who don't want to use Cowork at all, who don't want it inside a VM, or who just want a little bit more control. Thank you!

FWIW I think many of us would actually very much love to have an official (or semi-official) Claude sandboxing container image base / VM base. I wonder if you all have considered making something like the Cowork VM available for that?
There is this:

https://code.claude.com/docs/en/devcontainer

It does work but I found pretty quickly that I wanted to base my robot sandbox on an image tailored for the project and not the other way around.
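Something like this, roughly - the image name and flags here are placeholders for your own project-tailored setup, not anything Anthropic ships:

    # run the agent in an image built for the project, mounting only the repo,
    # with networking cut off entirely
    docker run --rm -it \
        --network none \
        -v "$PWD":/workspace -w /workspace \
        my-project-dev:latest bash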

Perhaps useful, I discovered: https://github.com/agent-infra/sandbox

> All-in-One Sandbox for AI Agents that combines Browser, Shell, File, MCP and VSCode Server in a single Docker container.

what would you use it for?
Not OP, but having the exact VM spec your agent runs on is useful for testing. I want to make sure my code works perfectly on any ephemeral environments an agent uses for tasks, because otherwise the agent might invent some sort of degenerate build and then review against that. Seen it happen many times on Codex web.
I think these are excellent points, but the complaint talks about significant performance and power issues.
That's every virtual machine that's ever existed. They are slower than bare metal, and you're running two OS stacks, so you'll draw more power.
I tried to use it right after launch from within Claude Desktop, on a Mac VM running within UTM, and got cryptic messages about Apple's virtualization framework.

That made me realize it wants to also run an Apple virtualization VM but can't, since it's inside one already - imo the error messaging here could be better, or, given that it already is in a VM, it could perhaps bypass the VM altogether. Right now I still haven't gotten to try Cowork because of this error.

Does UTM/Apple's framework not allow nested virtualization? If I remember correctly from x86(_64) times, this is a thing that sometimes needs to be manually enabled.
I accidentally clicked the Claude Cowork button inside the Claude desktop app. I never used it. I didn't notice anything at the time, but a week later I discovered the huge VM file on my disk.

It would be really nice to ask the user: "Are you sure you want to use Cowork? It will download and install a huge VM on your disk."

Any chance you guys could get the Claude Desktop installer fixed on Windows? It currently requires users to turn on "developer mode."

Sorry for the ask here, but I'm unaware of other avenues of support, as tickets on the Claude Code repo keep getting closed because it's not a CC issue.

https://github.com/anthropics/claude-code/issues/26457
https:...

Can you allow placing the VM on an external disk?

Also, please allow Cowork to work on directories outside the homedir!

I suppose you could just symlink the directory it's in?
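Something like this might work, though whether Cowork follows symlinks out of the home directory is untested:

    # expose an external-disk directory under $HOME via a symlink
    ln -s /Volumes/External/projects ~/projects-external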
Do you think it would be possible in the future to add developer settings to enable or disable certain features, or to switch to other, more lightweight sandboxing methods, like Apple's Seatbelt for example?
There's a lot that's not being said in (2). That warrants more extensive justification, especially with the issues presented in the parent post.
They're using the harnesses provided by the respective underlying operating systems to do virtualization.

I'd like to explore that topic more too, but I feel like "we deferred to macOS/Windows" is highly relevant context here. I'd even argue that should be the default position, and that "extensive justification" is required to NOT do that.

> (2) Hard guarantees at the boundary: Other sandboxing solutions exist, but for a few reasons, none of them are as satisfying or allow us to make similarly sound guarantees about what Claude will and won't be able to do.

This is the most interesting requirement.

So all the sandbox solutions that were recently developed all over GitHub fell short of your expectations?

This is only half surprising: many of the people using AI to solve the sandboxing issue have claimed over several months to have done so, and yet the best we have is Apple containers.

What were the few reasons? Surely there has to be some strict requirement that everyone else is missing.

But still, having a 10 GB claude.vmbundle doesn't make any sense.

Cowork has been an insane productivity boost; it is actually amazing. Thank you!
It's incredible how many applications abuse disk access.

In a similar fashion, the Apple Podcasts app decided to download 120GB of podcasts for no apparent reason and never deleted them. It even showed up as "System Data" and made me look for external drive solutions.

The system data issue on macOS is awful.

I use my MacBook for a mix of dev work and music production and between docker, music libraries, update caches and the like it’s not weird for me to have to go for a fresh install once every year or two.

Once that gets filled up, it's pretty much impossible to understand where the giant block of storage went.

Yep, it is an awful situation. I'm becoming increasingly frustrated with how Apple keeps disrespecting users.

I downloaded several macOS installers, not for the MacBook I use, but intending to use them to create a partitioned USB installer (they were for macOS versions that I clearly could not even use on my current MacBook). Then, after creating the USB, since I was short of space, I deleted the installers, including from the trash.

Weirdly, I did not reclaim any space, and I wondered why. After scratching my head for a while, I asked an LLM, which directed me to check the system snapshots. I had previously disabled Time Machine backups and snapshots, and yet I saw these huge system snapshots containing the files I had deleted - and the kicker was, there was no way to delete them!

Again I scratched my head for a while, looking for a solution other than wiping the MacBook and reinstalling macOS, and then I had the idea to just restart. Lo and behold, the snapshots were gone after restarting. I was relieved, but also pretty pissed off at Apple.

Because Apple differentiates their products by storage size, and they also sell iCloud subscriptions, there is zero (in fact negative) incentive to respect your storage space.
I had the same problem and had some luck cleaning things up by enabling "calculate all sizes" in Finder, which will show you the total directory size, and makes it a bit easier to look for where the big stuff is hiding. You'll also want to make sure to look through hidden directories like ~/Library; I found a bunch of Docker-related stuff in there which turned out to be where a lot of my disk space went.

You can enable "calculate all sizes" in Finder with Cmd+J (View Options). I think it only works in list view, however.
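From the terminal, a rough equivalent (2>/dev/null hides permission errors):

    # per-directory totals under ~/Library, largest last
    du -sh ~/Library/* 2>/dev/null | sort -h | tail -20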

The trick is to reboot into recovery partition, disable SIP, then run OmniDiskSweeper as root (as in `sudo /Applications/OmniDiskSweeper.app/Contents/MacOS/OmniDiskSweeper`). Then you can find all kinds of caches that are otherwise hidden by SIP.
Even worse on iPad. My wife is an artist, and 100GB of "System Data" is completely inscrutable, with no way to fix it besides a full wipe.
I simply run GrandPerspective (GUI app, https://grandperspectiv.sourceforge.net/), or dust (terminal app, https://github.com/bootandy/dust), to give me an idea of what is going on with disk usage.
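e.g. for dust, something like:

    # usage tree of the home dir, limited to three levels deep
    dust -d 3 ~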
Seconding.

I should not have to hack through /Library files to reclaim space on a TB drive because macOS wanted to put 200GB of crap there in an opaque manner and not give the user ANY direct way to get their space back.

Equally egregious are applications that insist on using the primary disk to cache model data/sample data/whatever.
What should they do instead?

Like, assuming they need the data and it's too large to conveniently fit into RAM, where/how should they store and access it if not the primary disk?

They should ask. Let users specify a scratch/cache location - preferably fast storage that's not the OS drive.
My 256GB Mac Mini currently has 65GB of "System Data" and 40GB of "macOS".
Gotta hit that docker system prune -a
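Or, if you also want unused volumes gone (destructive - it deletes anything not currently attached to a container):

    # remove unused containers, images, networks, build cache, and volumes
    docker system prune -a --volumes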
> Once that gets filled up, it's pretty much impossible to understand where the giant block of storage went.

Your friend is called ncdu and can be used as follows:

    sudo ncdu -x -e --exclude Volumes /System/Volumes/Data/
The exclude for Volumes is necessary because otherwise ncdu ends up in an infinite loop - "/Volumes/Macintosh\ HD/Volumes/" can be repeated ad nauseam and ncdu's -x flag doesn't catch that for whatever reason.
Don't run "du -h ~/Library/Messages" then, I've mentioned that many times before and it's crazy to me to think that Apple is just using up 100GB on my machine, just because I enable iMessage syncing and don't want to delete old conversations.

One would think that's a extremely common use case and it will only grow the more years iMessage exists. Just offload them to the cloud, charge me for it if you want but every other free message service that exists has no problem doing that.

This one drives me nuts. Not just on Mac, also on iPhone/iPad. It's 2026, and 5G is the killer feature advertised everywhere. There's no reason to default to downloading gigabytes of audio files if they could be streamed with no issue whatsoever.
> Apple Podcasts app decided to download 120GB

That's one way to drive sales of higher-priced SSDs in Apple products. I'm pretty sure that sort of move shows up as a real blip on Apple's books.

This seems to be a recently popular tool for handling this - https://github.com/tw93/Mole

I also prompt Warp/Gemini CLI to identify unnecessary caches and similar data and delete them.

Surprisingly, Claude is amazing at cleaning up your MacBook. Tried it; works like a charm.
{"deleted":true,"id":47219152,"parent":47218773,"time":1772465113,"type":"comment"}
loading story #47221452
Someone actually still uses the built-in Podcasts app?
My WinSxS folder is 17GB
The vibe coding giveth and the vibe coding taketh away; blessed be the vibe coding
I guess it could warn about it, but the VM sandbox is the best part of Cowork. The sandbox itself is necessary to balance the power you get from generated (hidden-to-user) code with the security you need for non-technical users. I'd go even further and make users grant host filesystem access only to specific folders, and warn about anything with write access: I can think of lots of easy-to-use UIs for this.
Arguably, even without LLMs, you too should be dev-ing inside a VM...

https://developer.hashicorp.com/vagrant is still a thing.

The market for Cowork is normals getting to tap into an executive assistant who can code. Pros are running their consumer "claws" on a separate Mac Mini. Normals aren't going to do that, and offices aren't going to provision two machines for everyone.

The VM is an obvious answer for this early stage of scaled-up research into collaborative computing.

Yeah, very easy to do today. Many VPS providers help with this; check out:

https://exe.dev

https://sprites.dev

https://shellbox.dev

I believe that employees at Anthropic use CC to develop CC now.

AI really gives users the ability to develop a complete product, but the quality is decreasing. Professional developers will be in demand once these products/features become popular.

The first batch of users of a new product has to take on extra responsibility, testing the product like lab rats.

{"deleted":true,"id":47219628,"parent":47219151,"time":1772467117,"type":"comment"}
> AI really gives users the ability to develop a complete product, but the quality is decreasing. Professional developers will be in demand once these products/features become popular.

Looking at the number of issues, outages, and rookie mistakes the employees are making leads me to believe that most of them are below junior level.

If anyone were to re-interview everyone at Anthropic for their own roles with their own interview questions, I would guess that >75% of them would not pass their own interviews.

The only teams that would pass are the Bun team and some of the other recently acquired startups.

I literally spent the last 30 mins with DaisyDisk cleaning up stuff on my laptop; I feel like HN is reading my mind :)

I also noticed this 10GB VM from Cowork, and was also surprised at just how much space various things seem to use for no particular reason. Judging by all the cruft, there doesn't seem to be any sort of cleanup process in most apps that actually slims down their storage.

Even Xcode. The command line tools install and keep around SDKs for a bunch of different OSes, even though I haven't launched Xcode in months. It also keeps a copy of the iOS simulator even though I haven't launched one in over a year.

> Xcode…keeps around SDKs for a bunch of different OSes

Not a new problem, unfortunately. DevCleaner is commonly used to keep it under control: https://github.com/vashpan/xcode-dev-cleaner

Is there no crond and find on macOS?
Yup, it uses Apple's Virtualization framework. It makes it so I can't use Claude Cowork within my VMs; that's how I found out it was running a VM, because it caused a nested-VM error. All it does is limit functionality, eat extra disk space, and cause lag. A better sandbox environment would be Apple's Seatbelt, which is what OpenAI uses, but even that isn't perfect: https://news.ycombinator.com/item?id=44283454
Seatbelt is largely undocumented.
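Largely undocumented, but sandbox-exec still works - it's deprecated, so treat this as an illustrative sketch, not a supported API:

    # profile that allows everything except writing to disk
    cat > /tmp/no-write.sb <<'EOF'
    (version 1)
    (allow default)
    (deny file-write*)
    EOF
    # this touch fails with "Operation not permitted"
    sandbox-exec -f /tmp/no-write.sb touch /tmp/blocked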
As much of an inconvenience as this may be, this is exactly what "agents" should be doing. If your tool doesn't have a built-in sandbox that is intended to be used at all times, you're using something downright hazardous and WILL end up suffering data loss.
On a similar tangent, but on the opposite end of the spectrum, check out this month-old discussion on HN: https://news.ycombinator.com/item?id=46772003

ChatGPT's code execution container contains 56 vCPUs!! Back then, simonw mentioned:

> It appears to have 4GB of RAM and 56 (!?) CPU cores https://chatgpt.com/share/6977e1f8-0f94-8006-9973-e9fab6d244...

I'm seeing something similar on a free account too: https://chatgpt.com/share/69a5bbc8-7110-8005-8622-682d5943dc...

On my paid account, I was able to verify this. I was also able to get a CPU-bound workload running on all cores. Interestingly, it was not able to fully saturate them, though - despite trying for 20-odd minutes. I asked it to test with stress-ng, but it looks like it had no outbound connectivity to install the tool: https://chatgpt.com/share/69a5c698-28bc-8005-96b6-9c089b0cc5...
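For what it's worth, you can load every core without installing anything, using stock tools - a rough sketch:

    # spin up one busy loop per core for ~30 seconds, then clean up
    for i in $(seq "$(nproc)"); do yes > /dev/null & done
    sleep 30; kill $(jobs -p)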

Anyways, that's a lot of compute. Not quite sure why it's necessary for a Plus account. Would love to get some thoughts on this.

I imagined someone at Anthropic prompted "improve app performance", and this was the result.
Ok, so a lot of this boils down to the fact that this sort of software really wants to be running on Linux. For both Windows and Mac, the only way to (really) do that is to create a VM.

It seems to me that the main issue here is painful disconnects between the VM and the host system. The kernel in the VM wants to manage memory and disk usage and that management ultimately means the host needs to grant the guest OS large blocks of disk and memory.

Is anyone thinking about or working on narrowing that requirement? Like, I may want 99% of what a VM does, but I really want my host system to ultimately manage both memory and disk. I'd love it if the Linux VM had a bridge for file IO that interacted directly with the host file system, and a bridge in the memory management system that ultimately called the host system's memory allocation API directly and disabled the kernel's own memory management.
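Pieces of this already exist. Assuming a stock Linux guest under Apple's Virtualization framework, virtiofs covers the file-IO half (the "sharedfs" tag below is whatever name the host config gives the share), and virtio-balloon is the mechanism for the host to reclaim idle guest memory:

    # inside the guest: mount a host-exported directory via virtiofs
    sudo mkdir -p /mnt/host
    sudo mount -t virtiofs sharedfs /mnt/host

    # check the balloon driver is present so the host can reclaim memory
    lsmod | grep virtio_balloon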

Containers and cgroups are basically how Linux does this. But that's a pretty big surface area that I doubt any non-Linux system could adopt.

Given that Claude Code runs without issues on macOS, I'd guess that it's more about sandboxing shell sessions (i.e. not macOS applications or single processes, for which solutions exist).

Unfortunately, unlike Linux, macOS doesn't have a great out-of-the-box story there; even Apple's first-party OCI runtime is based on per-container Linux VMs.

Sure, it uses a few GB just like everything else these days, but some of the comments also mention it being slow?
The GitHub issue is AI generated. In my experience triaging these in other projects, you can’t really trust anything in them without verifying. The users will make claims and then the AI will embellish to make them sound more important and accurate.
> AI will embellish to make them sound more important and accurate.

Did you mean "than accurate" rather than "and accurate"? Having a more accurate issue description only sounds like a good thing to me

Making them look more accurate is not the same as being more accurate, and LLMs are pretty good at the former.

Imagine a user has a vague idea of something that is broken; the LLM will then interpret their comment as whatever it thinks is the most likely underlying problem, without actually checking anything.

I see this as a feature. The cost of isolation.
macbook pro m4 bought last year. worked on so much code and so many projects. never hot after closing lid. installed electron claude. closed lid and went to sleep and woke up to a macbook that had been hot all night. uninstalled claude. problem went away.

i kept telling myself this BUT NEVER ELECTRON AGAIN.

It’s not Electron
> woke up to macbook that has been hot all night

this is the usual reason for divorce /s

I really love Anthropic's models, but every single product/feature I've used other than the Claude Code CLI has been terrible... The CLI just "stuck" for me, and I've never needed (or, arguably, looked in depth at) any other features. This is for my professional day job.

For personal use, where I have a Pro subscription and venture into exploring all the other features/products they have... I mean, the experience outside of Claude Code and the terminal has been... bad.

> every single product/feature I've used other than the Claude Code CLI has been terrible

yeah they're shipping too fast and everything is buggy as shit

- fork conversation button doesn't even work anymore in vscode extension

- sometimes when I reconnect to my remote SSH in VSCode, previously loaded chats become inaccessible. The chats are still there in the .jsonl files but for some reason the CC extension becomes incapable of reading them.

I tend to agree here. Today, I tried to get the Claude chat to give me a list of Jira tickets from one board (link provided) and then upload it to Notion with some additional context. It glitched out after retrying the prompt 4x. I eventually gave up and went back to the terminal.
Yes. This is my experience as well. The software quality is generally horrible. It surely has improved a lot over the last couple of months, but it is still pretty horrible.

It is quite normal for me to have to force-close Claude Desktop.

Aren't most of these people recommending random tools in the GitHub thread for this issue just attempting to exploit naive users? Why would anyone in this day and age follow the advice of new accounts to download new repos or click random websites when they're already trying to use Claude Code or Cowork?
Way slower, but way better than chat mode. Nothing beats Claude Code CLI imo.
Yeah, that's why I do not install these tools on my personal devices anymore and instead play with them on a VPS.

Try this if you have Claude Code: `ls -a` your home dir and see all the garbage Claude creates.
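For example (the exact set varies by version, but ~/.claude and ~/.claude.json are the usual suspects):

    # list just the dotfiles/dot-dirs in $HOME
    ls -ad ~/.[!.]*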

This GitHub issue itself is clearly AI slop. If you’ve been dealing with GitHub issues in the past months it will be obvious, but it’s confirmed at the end:

> Filed via Claude Code

I assume part of it is true, but determining which part is the hard part. I've lost a lot of time chasing AI-written bug reports that were actually something else wrong with the user's computer. I'm assuming the claims of "75% faster" and other numbers are just AI junk, but at least someone could verify whether the 10GB VM exists.

If your codebase is entirely vibe coded, I feel it only appropriate to permit issues being vibed as well. It's hypocritical otherwise.
I wouldn't think it's inappropriate for an AI agent to file an issue against another AI agent, which itself is largely written by AI.
Mac Problems...

So crazy - on a Windows desktop, I at most complain if something is hardcoded to the system drive (looking at you, Ollama).

Hey, they did admit that they vibed this in a week and released it to everyone.
That seems somewhat reasonable.

Storage should be cheaper; complain about Apple making you pay a premium for it.

It's just another example, and just a detail, in the broader story: we cannot trust any model provider with any tooling or other non-model layer on our machines or our servers. No browsers, no CLIs, no apps, no whatever. There may not be alternatives to frontier models yet, but everything else we need to own as a truly open-source, trustable layer that works in our interest. This is the battle we can win.
Why don't people form cooperatives, pool money to buy serious hardware, colocate it in local data centers, and run good local models like GLM on them to share?
What's funny is interacting with it in Claude Code. Claude Desktop's Cowork can't do anything about the VM. It creates this 10 GiB VM, but the disk image starts off with something like 6-7 GiB already full, which means anything you try to do in Cowork has to fit into the remaining couple of gigs. It's possible to fill it up, and then Cowork stops working, because the disk is full. Cowork isn't able to fix this problem itself. It can't even run basic shell commands in the VM, and Opus 4.6 is able to tell the user that, but isn't smart enough/empowered to do anything about it.

So contrary to the GitHub issue, my problem is that there's not enough space. The fix is to navigate to ~/Library/Application\ Support/Claude/vm_bundles and ask Claude Code to upsize the disk to a sparse 60 GiB file, giving Cowork much more space to work in while not immediately taking up 60 GiB.
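For reference, the resize amounts to something like this - back the bundle up first, and note that the image filename is a guess, as is the guest's filesystem layout:

    cd ~/Library/Application\ Support/Claude/vm_bundles/<bundle>
    # extend the raw image sparsely to 60 GiB; APFS only allocates blocks as they're written
    dd if=/dev/null of=disk.img bs=1g count=0 seek=60
    # the guest filesystem then has to be grown to match, e.g. resize2fs inside the VM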

Bigger picture, what this teaches me though, is that my knowledge is still useful in guiding the AI to be able to do things, so I'm not obsolete yet!

So it's using its binary disk/image as the cache/work disk too?

Yeah, that's a recipe for problems.

This is exactly the kind of issue we will see more and more frequently with vibe coding.
The number of bad things this company's software does is staggering. The models are amazing; the code sucks.
Their code is written by their amazing models (this is what they claim anyway).
Are we sure this isn't a sparse image? It will report as the full size in Finder, but it won't actually be consuming that much space if it's sparse.
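You can check: ls reports the apparent size, while du reports the blocks actually allocated:

    ls -lhR ~/Library/Application\ Support/Claude/vm_bundles
    du -sh ~/Library/Application\ Support/Claude/vm_bundles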
Just write a Claude OS already.
All code in Claude™ is written by Claude™
Also, it's apparently eating 2 GB or so of RAM to run an entire virtual machine even if you've disabled Cowork. Not sure which of these is worse. Absolute garbage.
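Easy enough to verify on your own machine (the VM helper's process name is anyone's guess - look for anything VM-ish near the top):

    # resident memory per process in KB, largest first
    ps axo rss,comm | sort -rn | head -15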
The software seems to do more and more, and communicate less and less about what it's doing. That's the crux.

Pondering... Noodling... Some other nonsense...

labelled "high priority" a month ago. No actual activity by Anthropic despite it being their repo. I'm starting to get the feeling they're not actually very good at this?