We recently developed PRIMO (Principal-component Interferometric Modeling; Medeiros et al. 2023a) for interferometric image reconstruction and used it to obtain a high-fidelity image of the M87 black hole from the 2017 EHT data (Medeiros et al. 2023b). In this approach, we decompose the image into a set of eigenimages, which the algorithm “learned” using a very large suite of black-hole images obtained from general relativistic magnetohydrodynamic (GRMHD) simulations
Not a photo.
It’s the output of an AI model trained on simulations of black holes being asked to fill in the gaps from sparse observations.
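For anyone wondering what “eigenimages learned from GRMHD simulations” looks like in practice, here is a minimal, hypothetical Python sketch of the general idea behind eigenimage (PCA) reconstruction. It is not the actual PRIMO code: the training images are random stand-ins, the fit is done in image space rather than on visibilities, and the component count is arbitrary.

```python
# Hypothetical sketch of "eigenimages learned from simulations" (PCA),
# NOT the actual PRIMO pipeline: PRIMO fits sparse interferometric
# visibilities using components derived from GRMHD simulations, whereas
# this toy version uses made-up images and works in image space.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "training set": flattened images from simulations (fake data here).
n_train, n_pix = 500, 64 * 64
train_images = rng.normal(size=(n_train, n_pix))

# "Learn" eigenimages: principal components of the training set.
mean_image = train_images.mean(axis=0)
_, _, vt = np.linalg.svd(train_images - mean_image, full_matrices=False)
eigenimages = vt[:20]                      # keep the 20 leading components

# Reconstruct a new (noisy, incomplete) observation as a linear
# combination of those eigenimages, via least squares.
observation = rng.normal(size=n_pix)
coeffs, *_ = np.linalg.lstsq(eigenimages.T, observation - mean_image, rcond=None)
reconstruction = mean_image + eigenimages.T @ coeffs

print(reconstruction.shape)                # (4096,) -> reshape to a 64x64 image
```

The only point of the sketch is that the components, and hence the reconstruction, are shaped by whatever the training set contained.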
What do you mean, AI? This is an interferometric image reconstructed from very long baseline interferometry (VLBI) data.
There is no AI, nor even machine learning (ML), involved in any way. This is model-independent and does not even use Bayesian inference to compute anything (which would be ML-based).
I don’t mean an LLM. I mean a specific ML model built for the job, but still trained on simulations.
Well, in their paper they used two approaches: a model-independent image reconstruction, in which not even ML is used, and a forward-modelling approach, which makes use of Bayesian inference and is tangential to ML, but is basically a fancy and performant grid search.
In the latter case there is no training in the usual sense, where you train on one thing and then let the model apply it to other things (e.g. cat-and-dog recognition). Instead, they compute physical models of emission (in this case as point sources) that would create the intensities seen in the data, and then pick the model that agrees best with the data. So theoretically you could say it is similar to one training epoch in ML, but not really.
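To make the “fancy and performant grid search” above concrete, here is a toy Python sketch. The two-point-source model, the (u, v) sampling and the noise are all invented for illustration; the real forward-modelling analyses are Bayesian and far more elaborate, so treat this only as the idea of scoring physical emission models against the data.

```python
# Toy "forward modelling as a grid search": predict visibilities from a
# simple two-point-source model and keep whichever parameter combination
# best matches the observed data. All numbers are made up for illustration.
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Sparse (u, v) sampling points, as an interferometer would give you.
u = rng.uniform(-1, 1, size=50)
v = rng.uniform(-1, 1, size=50)

def model_vis(x1, y1, x2, y2, f2):
    """Visibilities of two point sources: unit flux at (x1, y1), flux f2 at (x2, y2)."""
    return (np.exp(-2j * np.pi * (u * x1 + v * y1))
            + f2 * np.exp(-2j * np.pi * (u * x2 + v * y2)))

# Fake "observed" data: a true model plus noise.
observed = (model_vis(0.1, 0.0, -0.2, 0.3, 0.5)
            + 0.05 * (rng.normal(size=50) + 1j * rng.normal(size=50)))

# Grid search over model parameters, scoring each candidate by chi-squared.
positions = np.linspace(-0.4, 0.4, 7)
fluxes = np.linspace(0.1, 1.0, 10)
best = min(
    itertools.product(positions, positions, positions, positions, fluxes),
    key=lambda p: np.sum(np.abs(observed - model_vis(*p)) ** 2),
)
print("best-fitting parameters:", best)
```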
Someone: takes a selfie with their phone under low lighting conditions
You: "not a photo, it’s the output of an algorithm taking the luminosity from an array of light detectors, giving information of the colour and modifying it according to lighting conditions, and then using specific software to sharpen the original capture*
Nah, the hivemind is being cringe as shit rn.
Recreating an image with AI is not even remotely the same as capturing raw data directly from a digital sensor and cranking the exposure up.
The AI is approximating what it sees; digital sensors are not, they don’t approximate anything. It’s either there or they don’t see it.
objective and subjective
It’s not hard to find legitimate academic criticism of this ‘photo’. For example here. The comparison you made is not correct; it’s more like I gave a blurry photo to an AI trained on paintings of Donald Trump and asked it to make an image of him. Even if the original image was not of Trump, chances are the output will be, because that’s all the model was trained on.
This is the trouble with using this as ‘proof’ that the theory and the simulations are correct: while that is still likely, there is a feedback loop causing confirmation bias here, especially when people refer to this image as a ‘photo’.
This is one team that disagrees out of many that agree.
To explain what you are seeing: the above image is the inverse Fourier transform (FT) of the different frequencies of sine waves that compose an image.
Very long baseline interferometry (VLBI), as applied in the Event Horizon Telescope (EHT), uses different telescopes all over the world, in a technique called interferometry, to achieve high enough resolution to observe the different frequencies in Fourier space that make up an image. If you observed all of them, you could recreate the full image perfectly. They did not, but they observed for a long time and thus got a hefty amount of these “spatial” frequencies. They then use techniques that constrain the image to physical reality (e.g. no negative intensities/fluxes) and clean it of artefacts, and finally transform it to image space (via the inverse FT).
Thereby they get an actual image that approximates reality. There is no AI used at all. The researchers from Japan argued for a different approach to the data, getting a slightly different inclination in that image. This may well be because the data are still too sparse to determine the shape with 100 % certainty, but it looks to me more like they chose very different assumptions (which many other researchers do not agree with).
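As a rough illustration of the sparse-sampling idea described above, here is a toy Python sketch: a synthetic ring image, a random mask standing in for the real (and far more structured) uv-coverage, an inverse FFT, and the no-negative-flux constraint. None of this is the EHT imaging pipeline; it only shows what observing a subset of spatial frequencies and transforming back does to an image.

```python
# Toy illustration of sparse Fourier sampling: keep only a fraction of an
# image's spatial frequencies, inverse-transform, and clip negative fluxes.
# This stands in for real uv-coverage and imaging algorithms only loosely.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "sky": a bright ring on a 128x128 grid.
n = 128
y, x = np.mgrid[:n, :n] - n // 2
r = np.hypot(x, y)
sky = np.exp(-((r - 25) ** 2) / (2 * 3.0 ** 2))

# Full Fourier transform of the image ("all spatial frequencies").
vis = np.fft.fft2(sky)

# Sparse sampling: pretend we only measured 10 % of the frequencies.
mask = rng.random(vis.shape) < 0.10
sparse_vis = np.where(mask, vis, 0)

# Inverse transform and apply the physical constraint of no negative flux.
dirty_image = np.fft.ifft2(sparse_vis).real
dirty_image = np.clip(dirty_image, 0, None)

print("peak of true sky:", sky.max(), "peak of sparse reconstruction:", dirty_image.max())
```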
Most of what you said is correct, but there is a final step you are missing: the image is not constructed entirely from raw data. The interferometry data is sparse, and the ‘gaps’ are filled in with mathematical solutions from theoretical models and with statistical models trained on simulation data.
Paper: https://arxiv.org/pdf/2408.10322