Windows Copilot AI unable to pinpoint image source in user test
When Microsoft launched Windows Copilot, the idea was straightforward: a built-in helper that could answer questions, draft emails and even make sense of pictures. In a recent test, I tossed a screenshot of a crowded street at the assistant and asked it to name the exact spot. You’d think an AI supposedly trained on billions of images would pull up a location in a heartbeat, sparing me a Google search.
Instead, it gave a few fuzzy guesses and then froze when pressed for details. Not knowing where the photo came from, I dug through Trip Advisor archives. After scrolling past dozens of user-uploaded albums, my editor finally spotted a matching shot in a review collection and confirmed the place.
That gap between the AI’s missed guess and the tedious human sleuthing is something early adopters are already feeling.
**It never got the location right…**
At no point did it correctly identify the location of the image. (To be slightly fair to Copilot, if you don't already know where the image is from, it's not easy to figure out. After manually searching through Trip Advisor images, my editor found a match in a user review album, confirming that Microsoft's ad was correct in pinpointing Rio Secreto. Since the video depicted in Microsoft's ad doesn't seem to exist, it's unclear what information Copilot was using to identify the cave.) Beyond simply looking at things and trying to identify them, Microsoft also depicts Copilot actually doing things.
When I tried the demo, Copilot couldn’t tell where the picture came from; I had to open Trip Advisor and hunt through reviews just to find the source. The ads keep calling it “the computer you can talk to,” but the reality felt more like a reminder of what it still can’t do.
Because the photo was fairly obscure, it’s hard to say whether this points to a systemic flaw in Copilot’s visual search or just a one-off mismatch between what we expected and what it was trained on. In the end I located the image by scrolling through user-generated albums, a reminder that a human hand is still needed for tasks Copilot claims to handle. So the promised convenience still feels a ways off, and I’m not sure a future update will close that gap.
For now, the tool can be handy for general questions, but it’s still shaky when you need an exact image ID.
**Common Questions Answered**
**Why did Windows Copilot fail to correctly identify the location of the street scene image in the test?**
The AI offered only vague guesses and never pinpointed the exact spot, likely because the image was obscure and not part of its indexed dataset. The author had to resort to a manual Trip Advisor search to locate the source, highlighting current limitations in Copilot's visual‑search capability.
**What location was eventually identified as the source of the image after a manual Trip Advisor search?**
The manual search uncovered that the image matched a user review album showing Rio Secreto, a cave in Mexico. This discovery confirmed that Microsoft's ad had correctly referenced Rio Secreto, even though the AI itself did not make the connection.
**How does the article assess the broader promise of Windows Copilot as a "truly conversational PC" based on this visual test?**
The article suggests that the failed attempt to name the image’s origin reveals a gap between marketing claims and actual performance. While Copilot can handle text tasks, its current visual‑search abilities appear limited, especially with obscure images.
**What uncertainty does the article highlight regarding the video shown in Microsoft's Windows Copilot advertisement?**
The author notes that the video depicted in the ad does not seem to exist, making it unclear what data Copilot used to claim it could identify the cave. This ambiguity further questions the reliability of the assistant’s visual identification claims.