Sergii Shelpuk

Can ChatGPT build a competitive advantage for your company, and if so, how?

In the past six months, ChatGPT has taken the internet by storm. This top-rated OpenAI product surpassed 1 million users in just five days, setting a new record among internet products. Social media and reputable sources are abuzz with discussions on whether ChatGPT and similar technologies will disrupt entire industries and revolutionize the workplace.

Now, the question arises: Does ChatGPT offer any competitive advantages for your company? Alternatively, should you be concerned that your competitors might seize a portion of your market share by utilizing ChatGPT? Let's delve into this matter.

Can ChatGPT build a competitive advantage for your company?

The key to attaining a sustainable advantage lies in possessing something that cannot be easily replicated: assets, connections, reputation, brand, or a unique business design. Over the past few decades, machine learning and AI technologies have gained acclaim for transforming data into a sustainable competitive edge. This process operates in a cycle: AI has the power to convert your proprietary data into a perceived value for your product. As this perceived value increases, so does the amount of proprietary data, creating a positive feedback loop that culminates in a product that is exceedingly difficult to imitate. The optimal competitive positioning for an AI product can be visualized as follows:

  1. To capture market share from an AI product, you must offer a superior product.

  2. Since the product's value stems from its AI models, you need better models.

  3. Since modern AI models are trained with machine learning on data, you need more data.

  4. Since the data originates from product usage, you need a better product, which closes the loop.

AI virtuous cycle

This strategic design effectively deters competitors, even in the case of products like Google Search and YouTube, which, if reproducible, would propel a company into trillion-dollar territory. Nevertheless, their market position remains steadfast today: potential competitors either grasp the futility of the battle before it commences or learn the hard way through futile attempts.

Let us now return to ChatGPT as it exists today, an off-the-shelf OpenAI product. Where does it fit within this cycle?

An AI system should improve with more data as it undergoes further training. ChatGPT, however, is an immutable product: you cannot train it on your proprietary data. Prompt engineering is akin to programming rather than machine learning training, which means your prompts can be copied just as easily by skilled engineers, including those working for your competitors. From this perspective, ChatGPT is more like a sophisticated spreadsheet: it can provide value and save time, but it does not create a sustainable competitive advantage for your company.

To establish a competitive edge, your AI system should be trained with proprietary data—data your company possesses, which is hard for competitors to acquire. That makes your AI system different from the one your competitors can train without accessing your data.

Anything you send to ChatGPT ends up in OpenAI's datasets and contributes to the product's performance, benefiting not just your company but every rival willing to pay $20 per month for a tool trained on your proprietary data. Extracting training data from language models is an active area of research, so you should treat any data you share with ChatGPT as essentially open to the entire world. From this standpoint, sending your proprietary data to ChatGPT does the opposite of transforming it into a competitive advantage: you are giving that data away.

In summary, ChatGPT is an AI product you cannot customize by training with your own data. Although OpenAI can improve the model by harnessing your proprietary data, it makes those benefits (and potentially your proprietary data) available not only to you but also to all your competitors.

You can and should utilize ChatGPT to enhance efficiency where applicable. Still, it should not be mistaken for AI technologies that have the potential to make your company a market leader in your industry — ChatGPT cannot fulfil that role.

Large Language Models (LLMs)

Indeed, it is evident that ChatGPT alone cannot provide you with a competitive advantage. However, what about other large language models? Is it possible to develop your own system resembling ChatGPT and ignite the AI virtuous cycle?

Let us examine the technologies that drive opportunities for AI competitive advantages. First and foremost, we have those that enable the construction of this virtuous cycle using new types of data. The "Eureka" moment came with AlexNet at the 2012 ImageNet competition, where convolutional neural networks reached a level of sophistication that allowed them to extract value from vast datasets of real-world images, a challenge that earlier computer vision algorithms based on hand-crafted feature extractors struggled with. Put simply, it showed that you can build a sustainable, defensible business around a product that collects real-world photos and uses them to improve itself through convolutional neural networks. Those who grasped this concept early on swiftly identified applicable niches and capitalized on them. An excellent example is Blue River Technology, which emerged soon after the AlexNet breakthrough and applied the newly available convolutional neural networks to images of farm fields.

Now, let's return to Large Language Models (LLMs). Do they enable the virtuous cycle of AI for new data types?

An LLM like ChatGPT is trained in three stages.

In the first stage, a language model is developed by training a Transformer to predict the next token based on a preceding context of fixed length (for ChatGPT, 2,048 tokens). At this stage, the model can auto-complete input text but cannot yet engage in dialogue or respond to requests.

In the second stage, you train the model to answer user requests on a small, high-quality "question-answer" dataset. The resulting model is called the SFT (supervised fine-tuning) model.

The third stage is Reinforcement Learning from Human Feedback (RLHF). Simply put, humans rank different model responses by how close they come to the expected outcome, and the model is then optimized against those rankings. Examples include the Anthropic and Stanford Alpaca datasets. RLHF essentially transforms the LLM into the chat-like assistant we have today.

Stanford Alpaca dataset
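The RLHF stage above runs on ranked response data. Below is a minimal sketch of what one such record might look like, loosely following the "chosen vs. rejected" shape common in public preference datasets; the field names, helper function, and labels are illustrative assumptions, not any specific dataset's schema.

```python
# One human-ranked preference record for a single prompt. The
# (prompt, chosen, rejected) shape mirrors public RLHF preference
# datasets; field names here are illustrative, not an official schema.
preference_example = {
    "prompt": "Explain what a virtuous cycle is.",
    "chosen": "A virtuous cycle is a feedback loop where each improvement "
              "makes the next improvement easier.",
    "rejected": "It is a cycle.",
}

def to_reward_model_rows(example):
    """Expand one ranked pair into two labelled rows for reward-model training."""
    return [
        (example["prompt"], example["chosen"], 1.0),    # preferred response
        (example["prompt"], example["rejected"], 0.0),  # dispreferred response
    ]

rows = to_reward_model_rows(preference_example)
```

A reward model trained on many such rows learns to score responses, and the LLM is then optimized to produce responses the reward model scores highly.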

Do any of these technologies open up new niches by enabling new data types, similar to the breakthrough of AlexNet?

Training an LLM to predict the next token is a demanding and costly endeavour. The estimated cost of training the GPT-3 model from scratch exceeds $4 million, and the training run can take up to 9 days. Very few companies can afford such expenses regularly (remember that the model needs periodic retraining to capitalize on new proprietary data). Additionally, although smaller and more cost-effective LLM architectures are available, a model that merely predicts the next token is of limited use, even if trained on proprietary data. You still need to enable chat capabilities through SFT and RLHF.

Here lies an intriguing opportunity: "question-answer" (SFT) training and RLHF technology can actually build a sustainable competitive advantage.

If you can envision a product that delivers value through chat or assistant capabilities and simultaneously generates RLHF-applicable data from user interactions, you might be onto something significant.

This way, your product improves with new "question-answer" or response-quality data and, as it improves, generates still more training data.
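To make that feedback loop concrete, here is a hypothetical sketch of a product that harvests training data as a by-product of the user journey: when the user edits a model draft, the edit itself yields a (rejected, chosen) pair, and an unchanged acceptance is an implicit positive signal. The function, record shape, and edit-based signal are all assumptions for illustration, not a reference design.

```python
def log_interaction(prompt, model_output, user_final_text, store):
    """Turn an ordinary user interaction into a training record.

    If the user ships the draft unchanged, treat it as an implicit 'good'
    example; if they edit it, their version is 'chosen' and the original
    draft is 'rejected'.
    """
    if user_final_text.strip() == model_output.strip():
        store.append({"prompt": prompt, "chosen": model_output, "rejected": None})
    else:
        store.append({"prompt": prompt,
                      "chosen": user_final_text,   # the user's improved version
                      "rejected": model_output})   # the draft they replaced

store = []
log_interaction("write a greeting", "Dear sir,", "Hi team,", store)        # edited
log_interaction("write a sign-off", "Best regards,", "Best regards,", store)  # accepted
```

The key design property is that no explicit rating step is needed: the data accrues from actions users take anyway, which is what makes the loop hard for a competitor without the product to replicate.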

The size of this opportunity is difficult to determine. Clearly, there are specific niches where such feedback can be obtained, most obviously the writing-aid market where companies like Grammarly operate. How easily LLM technologies can disrupt that market remains to be seen (for instance, Grammarly has already integrated LLMs into its product and does not plan to relinquish its market share easily). However, if you wish to evaluate an LLM-based startup idea, your own or someone else's, consider two questions.

  1. How do LLM improvements bring more value to the users?

  2. How do users generate training data when using the product?

If either question lacks a good answer, chances are it will be hard to build a defensible business around the idea.

Competition among LLMs

Ever since the explosive debut of LLMs, prominent tech giants have entered the fray. Microsoft invested a staggering $10 billion in OpenAI and incorporated ChatGPT into its Bing search engine. Google introduced Bard, while Meta open-sourced the LLaMA large language model, among other notable developments. Many view LLMs as a potential disruptor in the online search market, an area that has long been dominated by Google Search. To better comprehend their endeavours, let's apply the logic of competitive advantage.

Feedback loop

ChatGPT exhibits remarkable proficiency in various tasks, including coding, a subject that often sparks debates about the automation of software engineering. However, these capabilities rely heavily on RLHF training data. OpenAI, for instance, employs software engineers dedicated to producing such data by solving programming challenges akin to those found on LeetCode and explaining their solutions in human language. The company also engages a significant number of data labellers to create other forms of RLHF training data. There are rumours that Google extracts RLHF data from ChatGPT to train its own LLMs. Nevertheless, from the perspective of AI competitive advantage, these approaches have inherent flaws.

While manual data labelling can yield substantial training data, a product design that generates RLHF data directly through user interactions as part of their journey can surpass manual labelling through a stronger feedback loop.

OpenAI attempted to address this by introducing a "like/dislike" feature in ChatGPT. However, liking or disliking chat responses is not an essential part of the user journey: a user can benefit from the product without ever expressing a preference.

Data access

Moreover, a robust AI competitive advantage should be built upon proprietary data that is difficult for competitors to acquire. Google Search's dominance is largely due to leveraging our historical search queries and interactions, complemented by vast amounts of data collected through various Alphabet products. Our Gmail emails, YouTube searches, views, and interactions, browsing data from Google Chrome, and even phone usage data from Android devices all contribute to the intricate user models and embeddings that Google meticulously constructs for each individual. These personalized user models, combined with our search queries, enable Google Search to present tailored and relevant results. As search results improve, we naturally gravitate back to Google for further searches, providing the company with even more data to enrich our individual user models. The power of Alphabet's AI products stems from this exclusive dataset. Attempting to replicate Google Search is a doomed endeavour precisely due to the lack of access to this data.

Does OpenAI, Google, or any other company possess unique training data or a product design that generates it? Hardly so.

Open-source alternatives

Consider the thriving ecosystem of open-source LLMs and LLM chats that has flourished over the past six months. There are hundreds of open-source LLMs, many of which can be utilized commercially. These models are smaller, more cost-effective to train, and, most importantly, customizable — since the competitive advantage lies in having a unique model. Open-source pretraining and RLHF datasets are readily available, and new developments emerge on a daily basis. In this environment, neither OpenAI, Google, nor Microsoft possesses a distinctive asymmetry that would enable them to create superior LLMs while being challenging to replicate by other companies or even individual engineers.

Interestingly, even Google, a company well-versed in building AI competitive advantages, appears to acknowledge this reality. In a leaked document, Google senior software engineer Luke Sernau admits that Google lacks a competitive advantage in developing LLMs. The document notes, "The uncomfortable truth is, we aren't positioned to win this arms race, and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch... I'm talking, of course, about open source. Plainly put, they are lapping us." Sernau suggests that without a "secret sauce," Google's best approach is to learn from and collaborate with external efforts while prioritizing third-party integrations. Sernau said open-source engineers were doing things with $100 that "we struggle with" at $10 million, "doing so in weeks, not months."

In summary, the development of general LLMs appears to be beyond the control of any individual large company, even the largest ones. Currently, none of them seems to possess a foolproof strategy to entrench their position — this includes Google, which has successfully established itself with Search, YouTube, and numerous smaller products.

Once again, this confirms my earlier suggestion: finding a niche where LLMs can establish a sustainable competitive advantage that cannot be swiftly replicated is incredibly challenging.

What should you do about it?

Having started my career in AI long before the current hype, and even before the "AlexNet moment" of ImageNet 2012, I can confidently say that the current ChatGPT frenzy overwhelms even someone like me who is well-versed in the field. I can only imagine how it appears to those outside the industry: the immense buzz is generating anxiety and fear of missing out in many people. Since the launch of ChatGPT, numerous friends and colleagues have approached me, asking whether they should be concerned about their businesses or even their careers.

Allow me to provide you with a framework for thinking about current LLM development.

A good analogy is worth thousands of words, so consider LLMs akin to the IBM PC wave of the 1980s and 1990s.

Personal computers gained tremendous popularity during those decades, primarily due to word processors and spreadsheet software that simplified routine tasks. By 1984, IBM's revenue from the PC market exceeded $4 billion, more than double that of Apple. A study in 1983 found that two-thirds of large customers who standardized on one computer opted for the PC, while a mere 9% chose Apple. A 1985 Fortune survey revealed that 56% of American companies with personal computers used PCs, while 16% used Apple.

However, a series of intellectual property protection mistakes made by IBM led to the emergence of the entire "IBM PC compatible" clone industry. "You don't ask whether a new machine is fast or slow, new technology or old. The first question is, 'Is it PC compatible?'" wrote Creative Computing in November 1984. This was devastating for IBM, but for the tech world, the IBM PC and its clones became one of the most significant events in history.

If you are a business executive contemplating the ChatGPT hype today, put yourself in the shoes of your colleagues who lived through the PC era. PCs were undoubtedly a great innovation, automating many routine tasks, simplifying accounting, and replacing typewriters. However, these machines were also affordable and readily available.

Would you expect your competitor or a young startup to oust you from the market simply by introducing PCs to their workplace? For most, the answer is: hardly. After all, you can acquire those computers for your own company whenever you see fit. While they may enhance your business, they are not a threat to your market share.

And if you are an entrepreneur considering a new business, beware that the IBM PC clone industry of the 1980s and 1990s had incredibly low profit margins.

While you could assemble and sell your own computers using readily available IBM PC-compatible components, so could everyone else. In such a scenario, the only way to compete is through price, which ultimately leads you and everyone else in the market to extremely thin profit margins.

However, take inspiration from Microsoft, which developed operating systems for IBM PCs and eventually transformed Windows into one of the most successful platforms of all time. If you can create a product that every LLM user needs and build it with the concept of sustainable competitive advantage, you may find yourself with a thriving business in due course. But that is a different story altogether.

This blog is dedicated to AI competitive advantage, and we are doing our best to explain how it works and how you can build one for your product or company. You can check our other posts to get an extensive explanation of what the network effect is and how AI enables it, how to build an AI competitive advantage for your company, what culture helps you build the right AI products, what to avoid in your AI strategy and execution, and more.

If you need help in building an AI competitive advantage for your business, look no further. Our team of business experts and AI technology consultants has years of experience in helping technology companies like yours build sustainable competitive advantages through AI technology. From data collection to algorithm development, we can help you stay ahead of the competition and secure your market share for years to come.

Contact us today to learn more about our AI technology consulting offering.

If you want to keep posted on how to build a sustainable competitive advantage with AI technologies, please subscribe to our blog post updates below.

