HN tends to dislike Microsoft, but they went to great lengths to build a HoloLens system where eye tracking was both useful and safe.
The eye tracking data never left the device and was never directly available to the application. As a developer, you registered the targets or gestures you were interested in, and the platform told you when, for example, the user looked at your target to activate it.
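Roughly, from the app's point of view the model was callback-only: you never see gaze coordinates, just events for targets you declared up front. A minimal Swift sketch of that pattern (the names here, like GazeTargetRegistry and gazeTargetActivated, are illustrative, not the actual HoloLens/MRTK API):

    import CoreGraphics

    // Hypothetical callback-only gaze API: the app declares regions it cares
    // about and receives activation events; raw gaze data stays in the platform.
    struct GazeTarget {
        let id: String
        let bounds: CGRect      // region of the app's UI, in app coordinates
    }

    protocol GazeTargetDelegate: AnyObject {
        // Fired by the platform when the user looks at a target to activate it.
        // No gaze positions or in-between samples are ever exposed to the app.
        func gazeTargetActivated(_ targetID: String)
    }

    final class GazeTargetRegistry {
        weak var delegate: GazeTargetDelegate?
        private var targets: [GazeTarget] = []

        // The app only registers the targets it is interested in...
        func register(_ target: GazeTarget) {
            targets.append(target)
        }

        // ...and the platform side (not shown) decides when to notify it.
        func platformDidActivate(targetID: String) {
            delegate?.gazeTargetActivated(targetID)
        }
    }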
Lots of subtlety and care went into the design, so yes, the first six things you think of as concerns or exploits or problems were addressed, and a bunch more you haven't thought of yet.
If this is a space you care about, read up on HoloLens eye tracking.
It's pretty inexcusable if Apple is providing raw eye tracking streams to app developers. The exploits are too easy and too prevalent. [EDIT ADDED: the article is behind a paywall, but it sounds from the comments here like Apple is not providing raw eye tracking streams; this is about third parties watching your eyes to extract your virtual typing while you are on a conference call.]
Apple is not doing that. As the article describes, the issue is that your avatar (during a FaceTime call, for example) accurately reproduces your eye movements.
The exploit requires analysing the avatar's eyes, but since those are replicated movements rather than natural ones, there should be a lot less noise. And of course, because you need to intentionally focus on specific UI targets, these movements are even less fuzzy than if you were glancing at a physical keyboard while typing.
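To make the concern concrete: once you have an estimate of where on the virtual keyboard the gaze lands during each fixation, turning that into keystrokes is just nearest-key lookup. A toy Swift sketch (the layout and fixation points are made up; the hard part of the actual attack is recovering gaze direction from the rendered avatar in the first place):

    import Foundation
    import CoreGraphics

    // Toy model: infer typed keys from gaze fixation points in the keyboard plane.
    // Only a few key positions, to keep the example short.
    let keyCenters: [Character: CGPoint] = [
        "q": CGPoint(x: 0.0, y: 0), "w": CGPoint(x: 1.0, y: 0), "e": CGPoint(x: 2.0, y: 0),
        "a": CGPoint(x: 0.3, y: 1), "s": CGPoint(x: 1.3, y: 1), "d": CGPoint(x: 2.3, y: 1),
    ]

    func nearestKey(to fixation: CGPoint) -> Character {
        // keyCenters is non-empty, so min(by:) cannot return nil here.
        keyCenters.min { lhs, rhs in
            hypot(lhs.value.x - fixation.x, lhs.value.y - fixation.y) <
            hypot(rhs.value.x - fixation.x, rhs.value.y - fixation.y)
        }!.key
    }

    // Treat each fixation (a pause in the gaze trajectory) as one keystroke.
    let fixations = [CGPoint(x: 1.9, y: 0.1), CGPoint(x: 0.4, y: 0.9), CGPoint(x: 1.2, y: 1.1)]
    print(String(fixations.map(nearestKey)))   // "eas" for these made-up points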
I don't think that's an accurate description, either. The SharePlay "Persona" avatar is a system service just like the front-facing camera stream. Any app can opt into using either of them.
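And the opt-in on the app side is just ordinary camera capture; the system, not the app, decides that the Persona rendering is what backs the front-facing stream. A rough AVFoundation sketch of that setup (standard front-camera capture on Apple platforms; that visionOS substitutes the Persona here is the point above, not something the code itself shows):

    import AVFoundation

    // Ordinary front-camera capture. The app never touches eye tracking directly;
    // it just consumes whatever video the system provides as the "front camera".
    let session = AVCaptureSession()

    if let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
       let input = try? AVCaptureDeviceInput(device: device),
       session.canAddInput(input) {
        session.addInput(input)
    }

    let output = AVCaptureVideoDataOutput()
    if session.canAddOutput(output) {
        session.addOutput(output)
    }

    session.startRunning()   // frames arrive via the output's sample-buffer delegate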