At its recent developer conference, Google showcased what may be a new step in the development of natural-language Artificial Intelligence (AI) systems.
Google has long been developing a system that can hold its own in natural conversation, and the company finally announced the project, LaMDA, at the event.
What is Google LaMDA?
Google LaMDA, short for “Language Model for Dialogue Applications,” is a conversational AI built on Transformer, the neural network architecture that Google Research invented and open-sourced in 2017 and that also underpins models like BERT and GPT-3. Unlike traditional chatbots, which follow narrow, predefined conversational paths, LaMDA can handle the open-ended nature of conversation. What’s interesting is that Google soon aims to integrate LaMDA with Google Search and other Google services.
How does Google LaMDA work?
What we understand from this move is that Google is trying to bring the distinctive, human-like quality of conversation into its chatbot systems.
Generally, we start a conversation on one topic and end up on an entirely different one. Humans naturally drive conversations forward by connecting topics in unexpected ways.
If it can handle these scenarios, LaMDA could eventually transform chatbot systems. A chatbot with these abilities could engage in genuinely natural conversations with people, letting us ask for information or consult the internet more naturally.
LaMDA has also gone a step further. The system succeeds at detecting the sensibleness of the conversation — whether the words would make sense in the current context of a conversation — and is better able to keep its responses specific.
LaMDA’s conversational skills are built on the Transformer architecture, which produces a model trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another, and predict which words are likely to come next.
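The idea above can be illustrated with a toy sketch. Everything here is invented for illustration: the five-word vocabulary, the random embeddings, and the function names are not from LaMDA or Google; this is only a minimal, untrained caricature of attention followed by next-word scoring.

```python
# Toy sketch of the Transformer idea behind models like LaMDA:
# read a sequence of tokens, weigh how each token relates to the
# others (self-attention), then score candidate next words.
# Vocabulary, embeddings, and names are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
d = 8  # embedding size
embed = {w: rng.normal(size=d) for w in vocab}  # random, untrained vectors

def attend(tokens):
    """Self-attention: each position becomes a weighted mix of all
    token vectors, with weights from dot-product similarity (softmax)."""
    X = np.stack([embed[t] for t in tokens])        # shape (n, d)
    scores = X @ X.T / np.sqrt(d)                   # pairwise relatedness
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over each row
    return weights @ X                              # contextual vectors

def next_word_scores(tokens):
    """Score every vocabulary word as a candidate next token by
    comparing it to the contextual vector at the last position."""
    context = attend(tokens)[-1]
    return {w: float(embed[w] @ context) for w in vocab}

scores = next_word_scores(["the", "cat", "sat"])
prediction = max(scores, key=scores.get)
```

In a real model the embeddings and attention projections are learned from huge amounts of dialogue text, and many attention layers are stacked, but the loop of “relate the words, then predict the next one” is the same.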
Currently, to trigger Google Assistant, users have to say “Hey Google,” followed by a specific command such as “turn on the lights.” Futuristic assistants like LaMDA could make such commands more natural, or even support genuine back-and-forth conversation with these systems.
They could enhance chatbots and other automation applications, and could be integrated with other Google tools such as Google Maps. The exact functioning and challenges of LaMDA can be determined only after it has fully rolled out, which brings us to the next important question.
Is It Ready For Use?
Currently, LaMDA has not been officially rolled out for commercial or personal use. It is still in the development and testing stage and is expected to roll out in the coming years. A few challenges that the system might face will likely emerge over time.
Meanwhile, you can find below some interesting snippets on LaMDA’s functioning.
During the Google I/O conference, the team demonstrated LaMDA speaking as Pluto in one conversation and as a paper airplane in another. The team didn’t have to change LaMDA’s underlying mechanics to switch from impersonating Pluto to impersonating the paper airplane.
LaMDA shows the following key qualities: sensibleness, specificity, interestingness, and factuality. Here are examples from the conversation with LaMDA impersonating Pluto, showing how it maintained these qualities:
1. Specificity: When the team asked LaMDA what we would see if we visited Pluto in a flyby, it responded, “You would get to see a massive canyon, some frozen icebergs, geysers, and some craters.” The system could have said something generic like “an icy surface” or “a dwarf planet with lots of craters,” but the response was specific about the details of Pluto’s geology.
2. Factuality: When the team then asked if Pluto ever had any visitors, the system responded, “I have had some. The most notable was New Horizons, the spacecraft that visited me.” This information was true because NASA had launched the New Horizons mission in 2006 to photograph Pluto and its moons from up close.
3. Interestingness: The team asked LaMDA what it wanted people to know about Pluto, to which it responded, “I wish people knew I am not just a random ice ball. I am actually a beautiful planet.” This part of the conversation showed a bit of emotion: it wanted people to know it better.
4. Sensibleness: After LaMDA said that it wished people knew Pluto was indeed a beautiful planet, the team responded “Well I think you’re beautiful,” for which LaMDA replied: “I’m glad to hear that. I don’t get the recognition I deserve. Sometimes people refer to me as just a dwarf planet.”
It even went on to explain why it wished it were better known. From a natural-conversation standpoint, this makes total sense. Even when someone affirms our wishes, we may keep expressing them to let the emotion flow. LaMDA doesn’t have emotions, but it captures that sensation well here.
The uniqueness of human conversations captured by Google LaMDA
Human beings are complex, and so are human conversations. Any single sentence we speak can take a conversation down an entirely different path than originally intended, and the other participants will still be perfectly able to follow that direction.
It’s very tough to predict or understand how any conversation will unfold. Try recalling a good conversation you’ve had with a parent or a friend, and break it down.
How did it build to the points it reached? Did the conversation end the way it started? We’re guessing not. That’s the uniqueness of complex human speech, language, and conversation. In any conversation, we can take countless new, unique directions and end up on a new topic. LaMDA seems able to crack the code in this department.
Privacy and Ethics are priorities
When AI models are trained on datasets scraped from the internet, they may absorb bias, which can lead them to reproduce hate speech. Pichai explained, however, that Google is working to ensure LaMDA meets its high standards of fairness, accuracy, safety, and privacy.
Potential Use: As we saw earlier, LaMDA was demonstrated personifying the planet Pluto and a paper airplane. The conversations were normal Q&A exchanges between a human and the LaMDA system, in which LaMDA provided in-depth, nuanced answers, even with a touch of humor, rather than the straightforward answers of Google Assistant.
Google is also focusing on a more complex model that can understand data across formats like text, images, videos, and audio, although it has not divulged any further details on how LaMDA might be used in its future products or how it might be integrated.
Depending on how sophisticated the model is, we do imagine LaMDA helping users find the products they’re looking for or check through local business reviews, for example.
Why should we care?
Conversational dialogue between users and Google may enable them to search for information or products in ways that are currently impossible. We may see a shift in search behavior, which may mean businesses have to adapt to ensure their content or products are still discoverable.
If Google incorporates it into existing products, which it almost certainly will, those products may become more useful for more users. That might give Google an important edge over its competitors and strengthen its own ecosystem unless those competitors are also able to deliver similar functionality.
We hope this blog has been a good read for you. Let us know in the comments section below about your views on Google LaMDA.