Wayve Lingo-2 AI Model With Autonomous Driving Capabilities, Ability to Take Passenger Instructions Showcased

Wayve unveiled Lingo-2, its artificial intelligence (AI)-based vision-language-action driving model (VLAM), on Wednesday. Lingo-2 is the successor of the Lingo-1 AI model and adds multiple new capabilities. The autonomous driving AI can now offer commentary on its actions while driving and adapt its behaviour based on the passenger's instructions. It can also answer queries about its surroundings that are not directly related to its driving. The AI firm said Lingo-2 was designed as a path towards building trustworthy autonomous driving technology.

The company showcased Lingo-2's capabilities in a demo video posted on X (formerly known as Twitter), showing the model navigating roads while taking instructions from passengers. The post includes footage of a Lingo-2 drive through Central London, where the model drives the car while simultaneously generating real-time driving commentary.

🚙💬Meet Lingo-2, a groundbreaking AI model that navigates roads and narrates its journey. Watch this video taken from a LINGO-2 drive through Central London 🇬🇧 The same deep learning model generates real-time driving commentary and drives the car. pic.twitter.com/eZB8ztDliq

— Wayve (@wayve_ai) April 17, 2024

The AI model combines three different architectures — computer vision, a large language model (LLM), and an action model — into a single VLAM that can perform multiple complex tasks together in real time. Based on the demo, Lingo-2 can see what is happening on the road, make decisions based on that input, and inform the passenger about the decision. It can also adapt its behaviour in response to instructions from the passenger, and answer queries unrelated to driving, such as questions about the weather.
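To illustrate the idea of a single model jointly producing an action and natural-language commentary, here is a minimal toy sketch. It is not Wayve's architecture or code: the `Observation` fields, the rule-based logic, and the function names are all hypothetical stand-ins for what a trained vision-language-action model would learn end to end.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Observation:
    # Toy stand-in for the vision stream: what the camera "sees" this tick.
    obstacle_ahead: bool
    light_is_red: bool

def drive_step(obs: Observation, instruction: Optional[str] = None) -> Tuple[str, str]:
    """Return (action, commentary) for one driving tick.

    In a real VLAM a single network produces both outputs jointly; here
    both come from the same branch of simple rules to mimic that coupling.
    A passenger instruction, when present, overrides normal driving.
    """
    if instruction == "pull over":
        return "pull_over", "Pulling over as you asked."
    if obs.light_is_red:
        return "brake", "Stopping because the traffic light is red."
    if obs.obstacle_ahead:
        return "brake", "Slowing down for the obstacle ahead."
    return "accelerate", "The road is clear, so I am continuing."

action, commentary = drive_step(Observation(obstacle_ahead=False, light_is_red=True))
# action == "brake"; the commentary explains the same decision
```

The point of the sketch is the coupling: because the action and the explanation come from the same decision path, the commentary cannot drift away from what the car actually does.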

Wayve says that performing these actions consistently and reliably is an important step towards building autonomous driving technology. "It opens up new possibilities for accelerating learning with natural language by incorporating a description of driving actions and causal reasoning into the model's training. Natural language interfaces could, in the future, allow users to engage in conversations with the driving model, making it easier for people to understand these systems and build trust," the company said on its website.

It is important to note that Lingo-2 does not currently drive a physical vehicle; it is an AI model that has not been integrated with hardware to control a car. It is trained and tested in Wayve's in-house closed-loop simulator, called Ghost Gym.


Because Ghost Gym is a closed-loop simulation, the company can test how other vehicles and pedestrians realistically react to the control vehicle's behaviour. As a next step, the AI firm plans to begin limited testing of the model in a real-world environment to analyse its decision-making in more unpredictable situations.
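The distinction between closed-loop and open-loop testing can be sketched with a toy example. This is not Ghost Gym: the pedestrian behaviour, the numbers, and the policy are all invented for illustration. The key property shown is that, in a closed loop, the simulated world reacts to the ego car's decisions rather than replaying a fixed log.

```python
def closed_loop_rollout(ego_policy, steps=5):
    """Toy closed-loop rollout: a pedestrian ahead reacts to the ego car.

    Unlike open-loop log replay, the pedestrian's next state depends on
    what the ego car just did, so the car's decisions feed back into the
    world it is being tested in.
    """
    ego_speed = 2.0
    gap = 10.0  # distance to a pedestrian ahead of the car
    history = []
    for _ in range(steps):
        action = ego_policy(gap)  # "brake" or "go"
        if action == "brake":
            ego_speed = max(ego_speed - 1.0, 0.0)
        # The pedestrian reacts: backs away only while the car keeps moving.
        if ego_speed > 0:
            gap += 1.0
        gap -= ego_speed
        history.append((action, round(gap, 1)))
    return history

# A hypothetical cautious policy: brake once the gap closes below 8 m.
cautious = lambda gap: "brake" if gap < 8.0 else "go"
trace = closed_loop_rollout(cautious)
# trace[0] == ("go", 9.0); by the end the car has braked to a stop
# and the gap stabilises at 7.0 m.
```

Swapping in a different `ego_policy` changes not just the car's trajectory but the pedestrian's too, which is exactly what makes closed-loop evaluation more informative than replaying recorded traffic.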

