Hacker News new | past | comments | ask | show | jobs | submit
I'm building a Java HFT engine and the amount of things AI gets wrong is eye-opening. If I didn't benchmark everything I'd end up with a much less optimized solution.

Examples: AI really wants to use Project Panama (FFM), and while that can be significantly faster than traditional OO approaches, it is almost never the best choice. And I'm not talking about using deprecated Unsafe calls; I'm talking about primitive arrays being better for Vector/SIMD operations on large sets of data, or NIO being better than FFM + mmap for file reading.
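To make the array-vs-FFM point concrete, here's a rough sketch (hypothetical class name, Java 22+ for the final FFM API, and of course you'd benchmark with JMH rather than eyeball it): a plain double[] loop is easy for the JIT to auto-vectorize, while per-element MemorySegment access carries bounds and liveness checks.

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

public class PrimitiveVsFfm {
    // Plain primitive-array loop: the JIT can auto-vectorize this.
    static double sumArray(double[] a) {
        double s = 0;
        for (double v : a) s += v;
        return s;
    }

    // Same reduction through FFM: each access goes through
    // bounds/liveness checks on the segment.
    static double sumSegment(MemorySegment seg, int n) {
        double s = 0;
        for (int i = 0; i < n; i++)
            s += seg.getAtIndex(ValueLayout.JAVA_DOUBLE, i);
        return s;
    }

    public static void main(String[] args) {
        int n = 1 << 20;
        double[] a = new double[n];
        for (int i = 0; i < n; i++) a[i] = i;
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment seg = arena.allocate(ValueLayout.JAVA_DOUBLE, n);
            for (int i = 0; i < n; i++)
                seg.setAtIndex(ValueLayout.JAVA_DOUBLE, i, i);
            // Same values, same summation order: results match.
            System.out.println(sumArray(a) == sumSegment(seg, n)); // prints true
        }
    }
}
```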

You can use AI to build something that is sometimes better than what someone without domain-specific knowledge would develop, but the gap between that and the industry-expected solution is much more than 100 hours.

AI is extremely good at the things it has many examples for. If what you are doing is novel, it is much less of a help, and it is far more likely to start hallucinating, because 'I don't know' is not in the vocabulary of any AI.
> because 'I don't know' is not in the vocabulary of any AI.

That is clearly false. I’m only familiar with Opus, but it quite regularly tells me that, and/or decides it needs to do research before answering.

If I instruct it to answer regardless, it generally turns out that it indeed didn’t know.

I think the main issue is treating the LLM as an unrestrained black box; there's a reason nobody outside tech trusts LLMs so blindly.

The only way to make LLMs useful for now is to restrain their hallucinations as much as possible with evals, and those evals need to be very clear about what goals you're optimizing for.

See karpathy's work on the autoresearch agent and how it carries out experiments; it might be useful for what you're doing.

> there's a reason nobody outside tech trusts LLMs so blindly.

Man, I wish this were true. I know a bunch of non-tech people who just trust random shit that ChatGPT made up.

I had an architect tell me "ask ChatGPT" when I asked her the difference between two industry-standard measures :)

We've had politicians share LLM crap, and researchers publish papers with hallucinated citations..

It's not just tech people.

Wouldn't Java always lose in latency to similarly optimized native code in, let's say, C(++)?
Not necessarily. Java can be insanely performant, far more than I ever gave it credit for in the first decade of its existence. There has been a ton of optimization and you can now saturate your links even if you do fairly heavy processing. I'm still not a fan of the language but performance issues seem to be 'mostly solved'.
"Saturating your links" is rarely the goal in HFT.

You want low deterministic latency with sharp tails.
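Chasing those sharp tails means measuring percentiles rather than averages; a minimal sketch of that (hypothetical class name, and in practice you'd use a tool like HdrHistogram instead of a hand-rolled percentile):

```java
import java.util.Arrays;

public class LatencyTails {
    // Nearest-rank percentile from a sorted sample of latencies (ns).
    static long percentile(long[] sorted, double p) {
        int idx = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(0, idx)];
    }

    public static void main(String[] args) {
        long[] samples = new long[100_000];
        for (int i = 0; i < samples.length; i++) {
            long t0 = System.nanoTime();
            // ... hot-path work under test would go here ...
            samples[i] = System.nanoTime() - t0;
        }
        Arrays.sort(samples);
        // p50 alone can look great while p99.9 blows your latency budget.
        System.out.printf("p50=%dns p99=%dns p99.9=%dns%n",
                percentile(samples, 50),
                percentile(samples, 99),
                percentile(samples, 99.9));
    }
}
```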

If all you care about is throughput then deep pipelines + lots of threads will get you there at the cost of latency.

As long as you tune the JVM right it can be faster. But it's a big "if" with the tuning, and you need to write performant code.
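The kind of tuning meant here looks something like the sketch below (hypothetical jar name and heap size; the right flags depend heavily on workload, JDK version, and OS setup, so treat this as a starting point, not a recipe):

```shell
# Fixed heap (-Xms == -Xmx) avoids resize pauses;
# -XX:+AlwaysPreTouch faults heap pages in at startup instead of on the hot path;
# ZGC keeps GC pauses sub-millisecond;
# large pages reduce TLB misses (requires OS configuration).
java -Xms8g -Xmx8g -XX:+AlwaysPreTouch -XX:+UseZGC -XX:+UseLargePages -jar engine.jar
```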
Depends. There are many reasons, but one is that Java has a much richer set of third-party libraries versus rolling your own. And often (not always) those libraries have been extensively optimized, proven in the real world, etc.

Then there are things like the JIT, which by default does runtime profiling and adaptation.

Java has a huge ecosystem in enterprise dev, but it's very unlikely to have an ecosystem edge in high-performance/real-time compute.
I am curious what leads some to choose Java for HFT. From what I remember, the number of virgin sacrifices and dances with wolves one must do to approach native speed in this particular area is just way too much development-time overhead.
"HFT" means different things to different people.

I've worked at places where ~5us was considered the fast path and tails were acceptable.

In my current role it's less than a microsecond packet in, packet out (excluding time to cross the bus to the NIC).

But arguably it's not true HFT today unless you're using FPGA or ASIC somewhere in your stack.

Software HFT? I've seen people call Python code HFT, so I understand what you mean. It's more in line with low-latency trading than today's true HFT.

I don't work for a firm, so I don't get to play with FPGAs. I'm also not co-located in an exchange or using microwave towers for networking. I might never even have access to kernel-bypass networking hardware (still hopeful about this one). Hardware optimization in my case will likely top out at CPU isolation for the hot-path thread and a hosting provider in close proximity to the exchanges.
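That sort of CPU isolation is usually done at the OS level before the JVM even starts; a sketch (hypothetical core id and jar name, Linux-specific):

```shell
# Assumes core 3 was reserved at boot, e.g. isolcpus=3 on the kernel
# command line, so the scheduler keeps other tasks off it.
# taskset then pins the whole JVM to that core.
taskset -c 3 java -jar engine.jar
```

Pinning just the hot-path thread (rather than the whole JVM) needs a third-party library or JNI, since the standard library doesn't expose thread affinity.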

The real goal is a combination of eliminating as much slippage as possible, making some lower-timeframe strategies possible, and having best-in-class backtesting performance for parameter grid searches and strategy discovery. I expect to sit between industry-leading firms and typical retail systematic traders.

The one person who understands HFT, yeah. "True" HFT is FPGA now, and those trades are basically dead anyway, because nobody has such stupid order execution anymore: they either got better themselves or they use the order-execution services of former HFTs (Virtu).

So yeah, there's really no HFT anymore; it's just order execution, and some algo trades want more or less latency, which merits varying levels of technical effort to squeeze latency out of the system.

I've seen SQL injection, and API tokens leaked to every visitor of a website :)