Let's say you're a chef and you want to use cheddar in your latest gastronomic creation. Will you make the cheese yourself, or buy it from the right supplier? Yes, it is possible to imagine a world in which it pays to produce the ingredients yourself: the niche end product requires it, you have the resources and expertise to do it, or you simply enjoy the process. (A little bit of vanity, maybe?) But let's face it: it's much easier, quicker and even more reliable to get the ingredients from the right place and focus on the final product - after all, that's why people choose restaurants in the first place.
You know we are not really talking about cheese, don’t you?
Last year was all about AI. Regional and global investment markets showed a sharp decline, with growth concentrated almost exclusively in AI technologies. This trend was evident not only among investment market players known for their high risk appetite, but also among the more conservative S&P 500 companies. In this climate, it is perfectly understandable that SMEs want to build their products around some kind of AI solution. It is still worth considering, however, which parts of the development process should be kept in-house and which are worth outsourcing or outright buying.
Here are a few things that, in our experience, are not at all necessary to tackle on your own:
building your own large language model (LLM),
training your own model,
building your own infrastructure to host and maintain LLMs.
These are huge investments that are unlikely to pay off, because AI-driven products scale economically very differently from regular SaaS business models. The time it takes to develop an LLM varies significantly with the scale of the model, the computational resources available, the complexity of the architecture, and the expertise of the research team involved. OpenAI's GPT-3, for instance (one of the largest language models developed to date), required extensive R&D efforts over several years. And the moment one widely available, high-quality solution exists on the market, it becomes the benchmark for everyone else - including your product!
Elements of the development process to consider:
Research and design: to improve performance, you have to carefully design the model architecture, then experiment with different neural network architectures, activation functions, training methodologies, etc.
Data: LLMs require vast amounts of text data for training. The data collection process involves gathering diverse sources of text from the internet, books, articles, and other written materials. Preprocessing involves cleaning and formatting the data to make it suitable for training.
Training: this requires powerful computational resources, including high-performance GPUs or TPUs. During this phase, the model is fed the preprocessed data and its parameters are updated over many training iterations (a toy sketch of this loop follows the list).
Evaluation: the model's performance should be evaluated using various metrics and benchmarks. Researchers have to iterate on the model architecture, training methodology, and hyperparameters to improve performance and address any issues identified during evaluation.
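To make the training step less abstract, here is a deliberately tiny sketch of what the core loop looks like in PyTorch: a character-level toy model trained on a single sentence. The model, data, and hyperparameters are all illustrative assumptions, not anyone's production setup; a real LLM run differs from this by many orders of magnitude in data, parameters, and compute - which is exactly the point.

```python
# Toy illustration of the "training" step: a character-level language model
# trained on one sentence with PyTorch. Everything here (model size, data,
# hyperparameters) is an assumption for illustration only.
import torch
import torch.nn as nn

text = "to be, or not to be, that is the question"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}      # character -> integer id
data = torch.tensor([stoi[c] for c in text])    # encoded "corpus"

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)                     # next-character logits

model = TinyLM(len(chars))
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Iterative training: feed the preprocessed data, score the model on
# predicting the next character, and update its parameters.
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for step in range(200):
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final training loss: {loss.item():.3f}")
```

Scaling this loop from one sentence and a few thousand parameters to trillions of tokens and billions of parameters is where the cost, the infrastructure and the expertise requirements explode.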
These development elements require huge and (still) scarce resources, both in domain expertise and in computing power. Therefore, before deciding to build their own LLM, SMEs should carefully evaluate their requirements, resources, and long-term objectives. They should consider factors such as the availability of skilled talent, access to computing resources, potential regulatory hurdles, and the overall feasibility and return on investment of developing and maintaining a custom LLM. In many cases, leveraging pre-existing LLMs or working with specialized vendors offers a more practical and cost-effective solution for SMEs that want to incorporate advanced language capabilities into their products or services.
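For contrast, this is roughly what the "buy the cheddar" route looks like: calling a hosted, pre-existing LLM through a vendor API. The provider, model name and prompt below are assumptions chosen for illustration; any comparable hosted model API follows the same pattern of a few lines of integration code instead of a multi-year research effort.

```python
# Minimal sketch of using a hosted, pre-existing LLM via a vendor SDK.
# Provider, model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; use whatever your vendor offers
    messages=[
        {"role": "system", "content": "You are a support assistant for a SaaS product."},
        {"role": "user", "content": "Summarize this ticket: the export button does nothing."},
    ],
)

print(response.choices[0].message.content)
```

The trade-off is the classic build-versus-buy one: you give up control over the model itself, but the research, data, training and hosting burden stays with the vendor.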
Developing a large language model can take anywhere from several months to a few years, depending on the resources available and the complexity of the model. As technology advances, this may become more efficient, but until that shift occurs, we strongly recommend that small restaurants choose from existing cheddar suppliers rather than age their own cheese.
Crunchbase (2023): Global Startup Funding In 2023 Clocks In At Lowest Level In 5 Years
Crunchbase (2023): The Biggest Of The Big: AI Startups Raised Huge — These Were The Largest Deals Of 2023
Goldman Sachs (2023): How much could AI boost US stocks?
Medium (2023): How long will it take to train an LLM model like GPT-3?
Nasdaq (2023): The S&P 500 Just Had Its Best Week of 2023. 2 Artificial Intelligence (AI) Stocks to Buy Before the Stock Market Hits New Highs
Towards Data Science (2023): How to Build an LLM from Scratch