With the recent explosion in the global adoption of language-generative AI and other conversational chatbots (thanks in no small part to OpenAI's ChatGPT), it is hard to avoid the conclusion that the clear leader in this terrain, Google, may be under threat.
Although this may be far from the truth, let us consider some things.
First off, no! ChatGPT cannot be a threat to Google's leading role in language-generative chatbots.
This is because Google had a strong grounding in language-generative chatbots long before the advent of ChatGPT, which, the fanfare notwithstanding, is still only in its early stages of development.
So one can fairly say that ChatGPT is still a long way from being able to compete with Google's leading language-generative chatbots.
Still, as some would reason, the question remains: is Google truly flailing?
After years of single-minded worship of the false god that is Google Assistant, the company appears to be rushing its AI strategy now that its competitors are raising their pitchforks.
The irony is that Google perhaps once thought it had the pitchfork market cornered.
Way back in 2017, Google researchers published the paper “Attention Is All You Need,” introducing the concept of the transformer and vastly improving the potential capabilities of machine learning models.
You don’t need to know the technical side of it (and indeed I am not the one to teach you); let it suffice to say that the transformer has been enormously influential and empowering, and that it’s the T in GPT.
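For the curious, though, the core of the transformer idea, scaled dot-product attention, can be sketched in a few lines. This is a minimal NumPy toy of my own, not code from the paper or from Google; the names and shapes are chosen purely for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # The core operation from "Attention Is All You Need":
    # each query scores every key, and the softmaxed scores
    # become weights for blending the values.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (n_q, n_k) similarity scores
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted mix of the values

# Toy example: 3 tokens, each a 4-dimensional vector.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (3, 4)
```

Every output row is just a learned-weighting average of the input rows; stacking this operation (with learned projections for Q, K, and V) is what gives the transformer its power.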
It then begs the question: why did Google give away such a powerful tip so freely?
While big private research outfits have been criticized in the past for withholding their work, the trend over the last few years has been toward going public.
This is a prestige play and also a concession to the researchers themselves, who would rather their employer not hide their light under a bushel.
Again, we cannot rule out an element of hubris: having invented the tech, how could Google fail to best exploit it?
The capabilities we see in ChatGPT and other large language models today did not immediately follow.
It takes time to understand and take advantage of a new tool, and every major tech company got to work examining what the new era of AI might provide, and what it needed to do so.
There is no doubt that Google dedicated itself to AI work in much the same way as the others.
Over the next few years, it made serious strides in designing AI computation hardware, built useful platforms for developers to test and develop machine learning models, and published tons of papers on everything from esoteric model tweaks to more recognisable things like voice synthesis.
There appears to be a problem somewhere though.
Google employees and others in the industry have anecdotally revealed a feudal aspect of the way the company works: getting your project under the auspices of an existing major product like Maps or Assistant is a reliable way to get money and staff.
This can mean that despite Google employing many of the best AI researchers in the world, their talent gets channelled into the ruts of corporate strategy, a path that looks attractive but proves costly in the end.
Certainly, there's no gainsaying that these things are great! Most, however, were merely existing products given a boost from AI.
Lots feel a bit cringe in retrospect. You really see how big companies like Google act in thrall to trends as well as drive them.
In 2020, Google made an AI-powered Pinterest clone; then, in December, it fired Timnit Gebru, one of the leading voices in AI ethics, over a paper pointing out the limits and dangers of the technology.
Of course, it needs to be admitted that 2020 wasn’t a great year for a lot of people — with the notable exception of OpenAI, whose co-founder Sam Altman had to personally tamp down hype for GPT-3 because it had grown beyond tenable levels.
2021 saw the debut of Google’s own large language model, LaMDA, though the demos didn’t really sell it.
Presumably, they were still casting about for a reason for it to exist beyond making Assistant throw fewer errors.
OpenAI started the year off by showing off DALL-E, the first version of the text-to-image model that would soon become a household name.
It had begun showing that large models, through systems like CLIP, could perform more than language tasks, acting rather as all-purpose interpretation and generation engines.
Then, in 2022, Assistant got more tweaks, along with more smart displays, more AR in Maps, and a $100 million acquisition of an AI-generated profile picture company.
OpenAI released DALL-E 2 in April and ChatGPT in December.
Fast-forward to the present moment, and one can surely argue that it was ChatGPT that caused Google leadership to move swiftly from anxiety to full-on flop sweat.
Indeed, it is far more reasonable to say that ChatGPT cannot be a threat to Google's leading role in language-generative chatbots.
Years before the emergence of ChatGPT, Google had already established deep taproots in the field of language-generative chatbots that cannot be easily shaken.
However, it is still too early to call, given that ChatGPT, the fanfare notwithstanding, is still only in its early stages of development.
One piece of advice for Google, though: the best time to stamp your authority is when you are threatened.
We can trust Google to retreat once again into its creative and innovative shell, as it so often does, and then let the world enjoy the best evolution of AI yet.