You could probably use just an X-axis scanner: instead of a CCD line sensor, use a regular 2D image sensor with a "1 pixel wide" slit aperture that crops the image perpendicular to the direction the prism disperses the light. So instead of dispersing a single pixel, you disperse a whole line at once.

You would reduce the time required by the square root of the number of pixels you want (assuming a square image).
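The exposure-count arithmetic behind that claim can be sketched as follows (a toy model, not tied to any real scanner: point-scanning takes one exposure per pixel, the slit approach one exposure per line):

```python
# Toy exposure count for a point-scanned vs slit-scanned ("pushbroom")
# hyperspectral capture of a width x height image.
def exposures(width: int, height: int) -> tuple[int, int]:
    point_scan = width * height  # disperse one pixel per exposure
    pushbroom = height           # disperse one full line per slit position
    return point_scan, pushbroom

pt, pb = exposures(256, 256)
print(pt, pb, pt // pb)  # speedup is 256 = sqrt(256 * 256)
```

For a square N x N image the speedup is N, i.e. the square root of the N^2 pixels.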

(This is what we do in momentum-resolved electron energy loss spectroscopy. In that situation we have electromagnetic lenses that focus the electrons that have been dispersed, so we don't have as bad a chromatic aberration problem as the other response mentions).

I would love to see e.g. a butterfly image with a slider that I could drag to choose the wavelength shown!!

> I would love to see e.g. a butterfly image with a slider that I could drag to choose the wavelength shown!!

Here[1] are some 31-band hyperspectral images of butterflies. SciPy/Pillow can unpack the .mat files into normal images. Then perhaps vibecode a slider, or just browse the band images?

[1] http://www.ok.sc.e.titech.ac.jp/res/MSI/MSIdata31.html (includes 8 butterfly 31-band hyperspectral visible-light images). These butterflies are also part of their VIS-SNIR dataset, among others.
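A minimal sketch of the unpacking step, assuming the cube is stored as an H x W x 31 array under some variable name inside each .mat file ("hsi" here is a placeholder; inspect the loaded dict's keys for the real one):

```python
# Unpack one 31-band hyperspectral .mat file into per-band PNGs.
import numpy as np
from scipy.io import loadmat
from PIL import Image

def save_bands(mat_path: str, key: str = "hsi", out_prefix: str = "band"):
    cube = loadmat(mat_path)[key].astype(np.float64)  # H x W x bands
    for b in range(cube.shape[2]):
        band = cube[:, :, b]
        # Normalize each band independently to 0..255 for viewing.
        lo, hi = band.min(), band.max()
        img = np.zeros_like(band) if hi == lo else (band - lo) / (hi - lo)
        Image.fromarray((img * 255).astype(np.uint8)).save(
            f"{out_prefix}_{b:02d}.png")
```

From there a wavelength slider is just an index into the band images.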

I knew of the site from having explored: "First-tier physical-sciences graduate students are often deeply confused about color. Color is commonly taught, starting in K... very very poorly. So can we create K-3 interactive content centered around spectra, and give an actionable understanding of color?"

Very nice idea! That makes it much easier!