
Anticipating AI at CES: The Future is Here, and It’s Ready to Talk to You

  • Feb 10
  • 3 min read

By Daniel W. Rasmus


I just left the Virtual Event Group’s (VEG) CES Pregame livestream, where I covered what I was looking for from AI at CES 2026. Here’s a brief overview of what I shared with attendees.


Those heading to Las Vegas for CES 2026 will see AI transition from a feature to the value proposition itself. While AI-powered devices will still find their spot at the show, exhibitors will focus on integration and delivering value. As AI drives other technologies, attendees will get a glimpse of how much more it will integrate with their lives in the future.


The Proactive Pivot

The dynamic between human and machine is shifting from command-based to context-aware. The Looki L1 serves as a primary example of this "proactive" AI wearable. By combining multimodal sensing (vision, audio, and motion), the device understands its environment and delivers insights without waiting for a specific prompt.


It represents a pivot toward the vision Gordon Bell outlined in MyLifeBits, emphasizing a total digital memory while maintaining a "privacy-first" stance. Sensitive data is processed locally and stored for only five days, giving the user total control over what enters the cloud.


This proactivity extends into personal care. Y-Brush dramatically shifts toothbrushing tech beyond a simple timer. The sonic toothbrush puts its time in the mouth to good use beyond removing plaque: it employs AI gas analysis to monitor up to 300 health indicators via the breath during a brushing routine. The brush acts as a diagnostic hub: it listens, records, and analyzes.


For parents, the FALCON AI video camera offers a practical solution to "digital distraction" at youth events. It autonomously films sports games by tracking the ball and players, freeing parents to watch the game in person rather than through a recording screen. It acts proactively so parents can engage.


Embodied AI and the Robotics Frontier

One of the most significant developments this year is the reimagining of humanoid robots. Rather than mere novelties, these machines are now being positioned as the primary interfaces for Large Language Models (LLMs). For AI to truly understand the world as a human does, it requires embodiment. It needs to see, touch, smell, and hear.


This is where the "parts" of the ecosystem become critical. Look for robotic components, like XELA uSKIN 3D tactile sensors, which provide robots with a human-like sense of touch.


Wearables become cyborg-adjacent tech in products like Vigx's AI-assisted exoskeleton, which adapts to activities ranging from walking to competitive sports. It is not a suit to be worn, but a performance layer that integrates with the body's movement.


Those looking for light-hearted offerings can look to PlantPetz for robotic pots and vases and Folotoy for LLM-infused stuffed animals.


Final Thoughts: The Sphere and Beyond

The week includes Lenovo’s Tech World at the Sphere on the 6th. Expectations are high for a deeper look at the company’s nascent AI ecosystem, building on the recent expansion of Gemini into Chromebooks with Chromebook Plus and into the Chrome browser on iOS.


Microsoft’s recent suggestion that the desktop operating system will become “agentic” opens the door to new competitors for the primary interface between people and computing. That will make for some very interesting CES announcements in the years to come.


CES 2026 marks the moment when AI becomes visible. Whether it’s MobVoi’s TickNote recorder attached to a MagSafe phone, or Chef AI autonomously preparing a meal, AI has become the promise, not the feature, and it’s no longer hiding inside the machine.


Connect With Daniel

Instagram and X: @DanielWRasmus


