  • News
  • Bitcoin
    • Lightning Network
  • Metaverses and NFT
  • Artificial Intelligence
  • Opinions
    • Crypto Regulations
    • Layer 2
    • On-chain Analysis
    • Trading
  • Tutorial
  • Italiano
  • English
BlockWorld.it

Google presents smart glasses with artificial intelligence

At MWC 2026, Google showed its prototype smart glasses running Android XR: real-time translation, video calls, AR navigation, and at-a-glance AI processing of photos.
By Redazione · 18 March 2026

Thirteen years after the failure of Google Glass, Mountain View is at it again. And this time, judging by the prototype video shared on Reddit by product lead Dieter Bohn last week, something structurally different is emerging. Not a stage promise, but a working demo presented at MWC 2026 in Barcelona, with wearable glasses, a single integrated display and Gemini 3 running in real time above the user’s field of view.

The most representative scene in the video lasts only a few seconds but deserves a close reading. Bohn stands in front of a poster of a football pitch. He looks at it. He asks Gemini aloud how to get to the stadium. Without Bohn typing anything or opening an app, the model works out where he wants to go, identifies the venue from what he is looking at, and projects directions directly into his field of view. When Bohn lowers his gaze, a map appears automatically. This is not computer use in the traditional sense: it is contextual multimodal reasoning, where the input is the user’s gaze and the output is augmented reality. That is exactly the promise of smart glasses that no one had yet managed to credibly deliver.

The other features shown in the demo complete a coherent picture. Live translation superimposed on the field of view, video calls with environmental contextualisation, recognition of album covers followed by the automatic opening of YouTube Music: all operations that an AI assistant on a smartphone already performs, but which on the glasses change completely in nature because they eliminate the friction of ‘pull out your phone’. The most interesting step on the creative front is the integration with Nano Banana: Bohn asks the glasses to take a picture of the people in front of him and reimagine them in front of Barcelona’s Sagrada Família. The model processes the photo, contextualises the request with the real location, and returns a generated image within seconds. It is a trivial use case on the surface, but it signals an architecture where camera, generative AI and AR are one continuous pipeline, not three separate apps.

To be fair, it is worth stating plainly what these glasses are not yet. Bohn makes it explicit: these are prototypes, and the final version will have a different form factor. For the MWC demo, prescription lenses were handled with clip-on attachments, a stopgap Google has already ruled out for the final product. The current design is clearly ‘lab’, not ‘shelf’.

This, however, is exactly where the competition gets interesting. Because Gemini 3.1 Pro on an AR display is not the same product running on a smartphone: the sensory context completely changes the value proposition. When an AI sees what you see, in real time, it stops being a tool to be consulted and starts becoming something else. Not an assistant you are waiting to interrogate: an interlocutor that reasons about what you are looking at before you have even finished the question.

Google already tried this in 2013 and lost. The difference today is that the models match the hardware. The problem is that so is the competition. And in this race, whoever comes second with a better product often finds that the market has already decided elsewhere.


© 2026 BlockWorld.it, property of Digital Dreams s.r.l. - VAT no.: 11885930963 - Registered office: Via Alberico Albricci 8, 20122 Milano, Italy - info@digitaldreams.it
