This looks really interesting. I'm curious to learn more about security around this project. There's a small section in the docs, but I wonder if there's more to be aware of, like prompt injection.
I'm glad you brought this up. I've been thinking about this and working on a plan to make it as solid as possible. For now, the best approach is to run each agent in a Docker container (there is an example Dockerfile in the repo), so any destructive actions are contained within the container.
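To tighten the containment a bit beyond a plain `docker run`, something like the following could work. This is only a sketch: the `agent-sandbox` tag is a placeholder, not the repo's actual image name, and `--network none` will break any agent tool that needs to call an external API.

```shell
# Build the image from the example Dockerfile in the repo.
# "agent-sandbox" is a placeholder tag for illustration.
docker build -t agent-sandbox .

# Run the agent with extra isolation:
#   --network none  blocks all outbound traffic (trade-off: tools that
#                   need API access will not work with this flag)
#   --read-only     makes the root filesystem read-only
#   --tmpfs /tmp    provides the only writable scratch space
#   --cap-drop ALL  removes all Linux capabilities
docker run --rm \
  --network none \
  --read-only \
  --tmpfs /tmp \
  --cap-drop ALL \
  agent-sandbox
```

Even with these flags, the container only limits damage to the host; it does nothing about credentials the agent is allowed to use from inside the container.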
However, this does not help if a user grants the agent access to an external service like Google Calendar and an injected prompt instructs the LLM to take destructive actions against that account.