Focus AI: If artificial intelligence dreams, then it dreams of new hardware

Glasses that can see, hear, and talk thanks to AI. iPhone designer Jony Ive, who is set to develop a new AI device together with OpenAI, backed by an investment worth billions: generative artificial intelligence will shape the hardware of the future.

Helmut Spudich

In the beginning, there was the touchscreen. Without a flexible interface that can be operated with fingers instead of a mouse and that, depending on the task, can become a keyboard, a brush for a painting program, or a shutter button for a camera, there would be no smartphones. Former cell phone king Nokia failed because of this disruption: its phones could do everything the first iPhone could, but they lacked an intuitive way to use thousands of different apps.

Now, generative AI could erode the dominance of the smartphone, or cement it even further. According to the Financial Times, none other than former Apple designer Jony Ive, one of the fathers of the iPhone, is to develop a new consumer device with AI at its center in collaboration with OpenAI CEO Sam Altman. Masayoshi Son, founder and CEO of Japan's SoftBank and principal shareholder of chip designer ARM, is to finance this "iPhone of artificial intelligence". The hearts of most smartphones already beat to the pulse of ARM designs, and future AI devices are expected to do the same.

Plans for the collaboration are well advanced, although an actual product is still many years away. With a billion US dollars, the brightest talents from OpenAI, Ive's company LoveFrom, and SoftBank are to be tasked with finding out what kind of device best suits the capabilities of AI.

Judging by Jony Ive's earlier statements, new AI-powered hardware should curb excessive screen consumption. Ive told the Financial Times several years ago that Apple had a "moral responsibility" to limit the unintended addictive side effects of apps. Indeed, iPhones have been showing users their "screen time" for several years, along with tips on how to reduce it. In the future, cameras, microphones, and speakers will be as crucial for interacting with hardware as the screen is today. This matches the current development of ChatGPT: OpenAI's AI has recently learned to see, hear, and speak, thanks to its ability to recognize images and speech.

Facebook parent company Meta is heading in the same direction. So far, Meta has relied entirely on virtual reality (VR) with its Quest headsets, with modest success. At the end of September, Meta CEO Mark Zuckerberg presented the new version of Meta's Ray-Ban glasses. Thanks to a camera, microphone, and speakers, the built-in AI assistant can answer the wearer's questions: an acoustic version of augmented reality (AR).

Initially, the camera and image recognition are limited to posting on social media and answering wearers' questions. But future scenarios are easy to imagine. When a tourist looks at a landmark, the AI could describe it at any length or brevity. In cities, navigation could rely on the camera and object recognition rather than GPS. Where permitted, facial recognition could identify the person opposite and supply information about them. Meta-style glasses would whisper all of this into the wearer's ear.

In a few years, such information could also be displayed directly in the lenses of glasses using AR. This kind of "hybrid glass" has existed for a long time and is used in the viewfinders of some high-end cameras, or in museum showcases to display additional information about exhibits. Its use in everyday glasses depends on further miniaturization of electronic components and batteries, which seems achievable within a few years.

Jony Ive could yet succeed in reducing our "screen time". But not without new side effects that the use of cameras in combination with AI will bring to everyday situations.

Published: October 30, 2023
