Can OpenAI Keep Their Lead in Generative AI?
Technology keeps evolving: yesterday’s supercomputer is tomorrow’s mobile phone. This change affects how we use these powerful tools in our lives.
One area changing quickly is the training of large language models (LLMs). Soon it will be easy for many companies to train these models, and as more of them move into generative AI, the competition will intensify.
OpenAI is currently a leader in AI development. Their competitive advantage comes from deep experience in scaling up LLMs. But they may face challenges in keeping that lead.
One big challenge is that we are running out of text data for training even larger LLMs. That means OpenAI can’t simply scale up its models and expect better accuracy. It’s very likely their pace of progress will slow dramatically over the next few years.
Meanwhile, other companies like Google, Meta, and Amazon are pouring resources into training LLMs. Since what they have is money, it’s only a matter of time before they build GPT-4-level models. Not to mention, many researchers are developing techniques to make LLM training more efficient. With new techniques and cheaper GPUs (thanks to Moore’s law), more startups will be able to train GPT-4-equivalent models.
Two scenarios seem likely for OpenAI.
1. Good for OpenAI
OpenAI figures out how to incorporate video into foundation models. With zillions of gigabytes of video data available, they still have plenty of room to keep scaling up models. They continue building GPT-N for the next few years, keeping other companies one to two generations behind.
2. Bad for OpenAI
OpenAI fails to come up with new ways to scale up models. Training a GPT-4-level model becomes easy, LLMs become just another cloud service offering, and there is no significant quality difference between providers. This is similar to machine translation: over the past few years, people have cared less and less about which model they’re using, since most of them do a decent job.
I believe Scenario 2 is more likely, as it remains unclear how to incorporate video into foundation models. Achieving this would require numerous research breakthroughs and could take more than two years, giving other players ample time to close the gap.
Regardless of the outcome, the AI landscape will continue to evolve as new techniques, technologies, and competitors emerge. OpenAI's ability to stay ahead of the curve will depend on their adaptability, innovation, and dedication to pushing the boundaries of what's possible in generative AI.