If you take a photo of the moon on a Samsung device, it will return a detailed photo of the moon. Some people are mad about this.
The issue is that Samsung’s software fabricates detail the camera can’t actually resolve, which led a Reddit user called ibreakphotos to accuse the company of “faking” moon photos. The user’s post demonstrates a way to trick Samsung’s moon detection, and it went viral enough that Samsung’s press site had to respond.
Samsung’s incredibly niche “Moon Mode” applies extra processing when you point your smartphone at the moon. In 2020, the Galaxy S20 Ultra launched with a “100x Space Zoom” (it was really 30x) with this moon feature as one of its marketing gimmicks. The mode is still heavily featured in Samsung’s marketing, as you can see in this Galaxy S23 ad, which shows someone with a huge, tripod-mounted telescope being jealous of the supposedly incredible moon photos a pocketable Galaxy phone can take.
We’ve known how this feature works for two years now—Samsung’s camera app contains AI functionality specifically for moon photos—though we did get a bit more detail in Samsung’s latest post. The Reddit post claimed that this AI system can be tricked, with ibreakphotos saying that you can take a picture of the moon, blur and compress all the detail out of it in Photoshop, and then take a picture of the monitor, and the Samsung phone will add the detail back. The camera was allegedly caught making up details that didn’t exist at all. Couple this with AI being a hot topic, and the upvotes for faked moon photos started rolling in.
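The degradation step in that experiment matters because downscaling averages neighboring pixels together, destroying fine detail that no later step can legitimately recover from the image itself. A pure-Python toy (the 4×4 “image” and its pixel values are invented for illustration) shows the information loss:

```python
# Toy model of the "blur and compress the detail out" step from the
# Reddit experiment. Downscaling averages neighboring pixels, so a fine
# detail (here, a single bright pixel) is irreversibly smeared away.

def downscale_2x(image):
    """Average each 2x2 block of a grayscale image (list of lists)."""
    h, w = len(image), len(image[0])
    return [
        [
            (image[y][x] + image[y][x + 1]
             + image[y + 1][x] + image[y + 1][x + 1]) // 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

# A 4x4 "moon" with one bright detail pixel on a dark background.
moon = [
    [10, 10, 10, 10],
    [10, 200, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]

print(downscale_2x(moon))  # [[57, 10], [10, 10]] -- the detail is gone
```

Any detail a phone then “restores” from such an image can’t have come from the scene; it has to come from the software’s prior knowledge of what the moon looks like.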
On one hand, software-generated detail is part of all smartphone photography. Small cameras make for bad photos. From a phone to a DSLR to the James Webb Space Telescope, bigger cameras are better: they simply take in more light and detail. Smartphones have some of the tiniest camera lenses on Earth, so they need a lot of software help to produce photos of anywhere near reasonable quality.
“Computational photography” is the phrase used in the industry. Generally, many photos are quickly taken after you press the shutter button (and even before you press the shutter button!). These photos are aligned into a single photo, cleaned up, de-noised, run through a bunch of AI filters, compressed, and saved to your flash storage as a rough approximation of what you were pointing your phone at. Smartphone manufacturers have to throw as much software at the problem as possible because no one wants a phone with a giant, protruding camera lens, and normal smartphone camera hardware can’t keep up.
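The frame-stacking step can be sketched in a few lines. The numbers below are invented stand-ins for sensor noise, but they show why averaging a burst works: the noise differs from frame to frame and averages toward zero, while the real signal survives. (Real pipelines also align frames and do far more; this is only the averaging step.)

```python
# Toy burst-stacking demo for one pixel: averaging several noisy
# exposures cancels sensor noise that a single exposure is stuck with.

TRUE_PIXEL = 100                           # the "real" scene brightness
NOISE = [18, -14, 9, -17, 6, -11, 13, -2]  # made-up per-frame sensor noise

frames = [TRUE_PIXEL + n for n in NOISE]   # an 8-frame burst
single = frames[0]                         # one exposure: off by 18
stacked = sum(frames) / len(frames)        # burst average: off by 0.25

print(single, stacked)  # 118 100.25
```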
But lighting aside, the moon basically always looks the same to everyone. While the moon spins, the Earth spins, and the two orbit each other, tidal forces have locked the moon into “synchronous rotation,” so we always see the same side of it; it only “wobbles” slightly relative to Earth. If you build an incredibly niche camera mode specifically targeted at moon photography, you can do a lot of fun AI tricks with it.
Who would know if your camera stack just lies and patches in professionally shot, pre-existing photos of the moon into your smartphone picture? Huawei was accused of doing exactly that back in 2019. The company allegedly packed photos of the moon into its camera software, and if you took a photo of a dim light bulb in an otherwise dark room, Huawei would put moon craters on your lightbulb.
That would be pretty bad. But what if you took one step back from that and simply involved an AI middleman instead? Samsung took a bunch of photos of the moon, trained an AI on those photos, and then set the AI loose on users’ photos of the moon. Is that crossing a line? How specific are you allowed to get with your AI training use cases?
Samsung’s press release mentions a “detail enhancement engine” for the moon, but it doesn’t go into much detail about how it works. The article includes a few unhelpful diagrams about moon mode and AI that all basically boil down to “a photo goes in, some AI stuff happens, and a photo comes out.”
In the company’s defense, AI is often called a “black box.” You can train a machine-learning model toward a desired outcome, but no one can explain exactly how the result works. A programmer who hand-writes a program can explain what each line of code does, because they wrote it, but an AI is only “trained”: it effectively programs itself. This is partly why Microsoft is having such a hard time making the Bing chatbot behave.
The press release mostly talks about how the phone recognizes the moon or how it adjusts brightness, but those points are not the issue—the issue is where the detail comes from. While there’s no choice quote we can pull, the above image shows pre-existing moon images being fed into the “Detail Enhancement Engine.” The whole right side of this diagram is pretty suspicious. It says Samsung’s AI compares your moon photo with a “high-resolution reference” and will send it back into the AI detail engine if it’s not good enough.
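Taken at face value, the right side of that diagram describes a feedback loop: enhance, compare against the reference, repeat until the result is “good enough.” Nothing below is Samsung’s actual code; it’s a hypothetical sketch of that control flow, with every name and number invented and the photo reduced to a bare quality score for illustration:

```python
# Hypothetical sketch of the loop Samsung's diagram seems to describe.
# The photo is modeled as a single 0-10 "similarity to the high-res
# reference" score; real inputs and the detail engine are stand-ins.

def enhance_moon(score, threshold=9, max_passes=5):
    """Re-run the 'detail engine' until the photo matches the reference."""
    for _ in range(max_passes):
        if score >= threshold:       # close enough to the reference image
            break
        score += 2                   # stand-in for one AI enhancement pass
    return score

print(enhance_moon(4))  # 10 -- three passes to clear the threshold
```

The suspicious part isn’t the loop itself; it’s that the “reference” is a pre-existing picture of the moon rather than anything your camera captured.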
That does feel like Samsung is cheating a bit, but where exactly should the line for AI photography be? You definitely wouldn’t want an AI-free smartphone camera—that would be a worst-in-class camera. Even non-AI photos from a big camera are just electronic interpretations of the world. They’re not “correct” references of how things should look; we’re just more used to them. Even objects viewed with the human eye are just electrical signals interpreted by your brain and look different for everyone.
It would be a real problem if Samsung’s details were inaccurate, but the moon really does look like that. If a photo is completely accurate and looks good, it’s hard to argue against it. It also would be a problem if the moon detail were inaccurately applied to things that aren’t the moon, but taking a picture of a Photoshopped image is an extreme case. Samsung says it will “improve Scene Optimizer to reduce any potential confusion that may occur between the act of taking a picture of the real moon and an image of the moon,” but should it even do that? Who cares if you can fool a smartphone with Photoshop?
The key here is that this technique only works on the moon, which looks the same for everybody. Samsung can be super aggressive about AI detail generation for the moon because it knows what the ideal end result should look like. It feels like Samsung is cheating because this is a hyper-specific use case that doesn’t provide a scalable solution for other subjects.
You could never use an aggressive AI detail generator for someone’s face because everyone’s face looks different, and adding details would make that photo not look like the person anymore. The equivalent AI technology would be if Samsung trained an AI specifically on your face and then used that model to enhance photos it detected you were in. Someday, a company may offer hyper-personalized at-home AI training based on your old photos, but we’re not there yet.
If you don’t like your improved moon photos, you can just turn the feature off—it’s called “Scene Optimizer” in the camera settings. Just don’t be surprised if your moon photos look worse.