Wayve unveiled its artificial intelligence (AI)-based vision-language-action driving model (VLAM), Lingo-2, on Wednesday. Lingo-2 comes as the successor to the Lingo-1 AI model and adds several new capabilities. The autonomous driving AI can now offer commentary on its actions while driving, as well as adapt those actions based on the passenger's instructions. It can also answer queries about its surroundings that are not directly related to its driving. The AI firm said Lingo-2 was designed as a path towards building trustworthy autonomous driving technology.
Showcasing the capabilities of Lingo-2 in a demo video on X (formerly known as Twitter), the company introduced the new AI model, which is capable of navigating roads while taking instructions from passengers. The post on X also includes a video of a Lingo-2 drive through Central London, where the model drives the car while simultaneously generating real-time driving commentary.
🚙💬Meet Lingo-2, a groundbreaking AI model that navigates roads and narrates its journey. Watch this video taken from a LINGO-2 drive through Central London 🇬🇧 The same deep learning model generates real-time driving commentary and drives the car. pic.twitter.com/eZB8ztDliq
— Wayve (@wayve_ai) April 17, 2024
The AI model combines three different architectures (computer vision, a large language model (LLM), and action models) to create a combined VLAM that can perform various complex tasks together in real time. Based on the demo, Lingo-2 can see what is happening on the road, make decisions based on it, and tell the passenger about those decisions. It can also adapt its behaviour based on any instructions given by the passenger, and answer non-driving-related queries, such as questions about the weather.
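Wayve has not published Lingo-2's internals, but the idea of one model producing both commentary and control from the same inputs can be illustrated with a minimal, entirely hypothetical sketch. All function names, feature shapes, and control fields below are illustrative stand-ins, not Wayve's API:

```python
from dataclasses import dataclass

@dataclass
class DrivingAction:
    steering: float  # hypothetical: normalised steering angle, -1 to 1
    throttle: float  # hypothetical: normalised throttle, 0 to 1

def encode_camera_frame(frame: list) -> list:
    """Stand-in vision encoder: reduces a camera frame to a feature vector."""
    return [sum(frame) / len(frame)]

def language_head(features: list, instruction: str) -> str:
    """Stand-in language head: produces commentary from shared features."""
    return f"Adjusting speed as requested ('{instruction}'); road ahead is clear."

def action_head(features: list, instruction: str) -> DrivingAction:
    """Stand-in action head: maps the same features to a control command."""
    slow = "slow" in instruction.lower()
    return DrivingAction(steering=0.0, throttle=0.2 if slow else 0.5)

def vlam_step(frame: list, instruction: str):
    """One VLAM step: identical inputs drive both narration and control."""
    features = encode_camera_frame(frame)
    return language_head(features, instruction), action_head(features, instruction)

commentary, action = vlam_step([0.1, 0.4, 0.3], "please slow down")
print(commentary)
print(action)
```

The key design point the demo highlights is that commentary and driving are not two separate systems bolted together: both heads consume the same internal representation, which is what lets the narration plausibly explain the action being taken.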
Wayve says that performing these actions consistently and reliably is an important step towards building autonomous driving technology. “It opens up new possibilities for accelerating learning with natural language by incorporating a description of driving actions and causal reasoning into the model’s training.
Natural language interfaces could, even in the future, allow users to engage in conversations with the driving model, making it easier for people to understand these systems and build trust,” the company said on its website.
It is important to note that Lingo-2 does not actually drive a car, as it is just an AI model and is not integrated with hardware to control a vehicle. It is trained and tested on Wayve's in-house closed-loop simulator, called Ghost Gym.
Because the simulation is closed-loop, the company can test realistic responses from other vehicles and pedestrians based on the controlled vehicle's behaviour. As a next step, the AI firm plans to begin limited testing of the model in a real-world setting to analyse its decision-making in more unpredictable situations.
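Ghost Gym's internals are not public, but the closed-loop idea itself is simple to sketch: other agents in the simulation react to what the model under test does, rather than replaying a fixed log. The toy loop below, with invented policies and numbers, shows a simulated lead car adjusting its speed in response to the ego vehicle each step:

```python
def ego_policy(gap: float) -> float:
    """Toy driving model: go faster when the gap ahead is large, slower when small."""
    return 1.0 if gap > 10.0 else 0.3

def lead_car_reaction(ego_speed: float) -> float:
    """Closed loop: the simulated lead car reacts to the ego vehicle's behaviour."""
    return max(0.5, 1.2 - 0.5 * ego_speed)

def simulate(steps: int = 5) -> list:
    """Run the loop and record the gap between the two cars at each step."""
    ego_pos, lead_pos = 0.0, 12.0
    gap = lead_pos - ego_pos
    history = []
    for _ in range(steps):
        ego_speed = ego_policy(gap)
        lead_speed = lead_car_reaction(ego_speed)  # this reaction closes the loop
        ego_pos += ego_speed
        lead_pos += lead_speed
        gap = lead_pos - ego_pos
        history.append(round(gap, 2))
    return history

print(simulate())
```

In an open-loop replay, the lead car's trajectory would be fixed regardless of what the ego vehicle did; the closed-loop setup is what lets a simulator like Ghost Gym probe how a driving model's decisions ripple through the surrounding traffic.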