The AI Edge: How to Deploy Generative AI in the Enterprise

First article in a series on customizing language models for business

Vishal Nigam

June 26, 2023

4 Min Read

The rise of generative artificial intelligence (GAI) is pushing decision-makers to think about developing or adopting it for their businesses. Many companies across sectors have either started building their business strategies around GAI or plan to do so soon. This shift has compelled product managers to adapt their products in line with the latest wave of AI adoption.

GAI systems primarily offer knowledge sharing, code generation, question answering, media generation, and data augmentation capabilities. A customized GAI can provide outputs tailored to a specific outcome.


For instance, a company can generate infographics from input instructions. Moreover, customized GAI offers improved control over data privacy, ensuring compliance with regulations and safeguarding intellectual property. In addition, GAI fuels innovation and the creation of unique applications. Aligning AI models to specific needs or ideas gives businesses a competitive advantage.

The GAI component discussed here is mainly text-generation models called Large Language Models (LLMs). The development of such systems differs from conventional AI or machine learning systems: LLMs are typically trained on textual data and validated by humans.


Here are three different strategies for developing and deploying GAI:

1. Train your own LLMs: Computationally expensive with high data control and extremely customizable

The recent development of open-source models, and of technologies to fine-tune them, makes this approach increasingly feasible. It can be computationally expensive, but it keeps data in-house, resulting in better data security. It is also highly customizable to the business.

The most capable open-source LLM at present is the Falcon-40B model, available under the Apache 2.0 license. It can be fine-tuned on your company's data. The process involves first preparing the data and converting it into tokens, then loading the model and its corresponding tokenizer using Hugging Face's ‘transformers’ library.
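As a minimal sketch of this preparation-and-loading step, the following assumes the `tiiuae/falcon-40b` checkpoint on Hugging Face and an instruction/response prompt format; both the checkpoint name and the format are illustrative assumptions, not a prescription:

```python
# Sketch of the data-preparation and model-loading step. The checkpoint
# name "tiiuae/falcon-40b" and the prompt template are assumptions.
def build_training_example(instruction: str, response: str) -> str:
    """Format one instruction/response pair into a single training string."""
    return f"### Instruction:\n{instruction}\n### Response:\n{response}"

def load_model_and_tokenizer(model_name: str = "tiiuae/falcon-40b"):
    """Load Falcon-40B and its tokenizer; needs substantial GPU memory."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        trust_remote_code=True,  # Falcon ships custom modeling code
        device_map="auto",       # spread layers across available GPUs
    )
    return model, tokenizer
```

The formatted strings would then be passed through the tokenizer to produce the token IDs the model trains on.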

An approach called QLoRA (Quantized Low-Rank Adapters) together with the PEFT (Parameter-Efficient Fine-Tuning) library can make this training efficient and manageable. After training completes, the model should be checked for output quality, and its learned weights can then be saved for future applications.
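A rough sketch of how QLoRA adapters attach to such a model might look like this; the rank, alpha, and the `query_key_value` target-module name are illustrative assumptions rather than tuned values:

```python
# Sketch of a QLoRA-style setup with the `peft` library. Hyperparameters
# and the "query_key_value" target module are illustrative assumptions.
def make_lora_config(rank: int = 16, alpha: int = 32):
    from peft import LoraConfig
    return LoraConfig(
        r=rank,            # rank of the low-rank update matrices
        lora_alpha=alpha,  # scaling applied to the adapter output
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
        target_modules=["query_key_value"],  # Falcon's fused attention projection
    )

def attach_adapters(quantized_model):
    """Freeze the 4-bit base weights and add trainable low-rank adapters."""
    from peft import get_peft_model, prepare_model_for_kbit_training
    model = prepare_model_for_kbit_training(quantized_model)
    return get_peft_model(model, make_lora_config())

def trainable_fraction(adapter_params: int, base_params: int) -> float:
    """QLoRA trains only the adapters; report their share of all weights."""
    return adapter_params / (adapter_params + base_params)
```

The appeal of this design is that only the small adapter matrices receive gradient updates, so the 40B-parameter base model can stay quantized and frozen.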

2. Customize commercial LLMs like ChatGPT using frameworks such as LangChain: Very low computation cost and customizable using in-house data but with low data control

Another way to develop a customized GAI for business data is to use recently developed frameworks such as LangChain or AutoGPT. This approach is computationally and technically inexpensive because it relies on external LLMs, but it does expose the company's data to the commercial GAI provider.

Frameworks such as LangChain can incorporate most document types to build customized use cases such as document analysis and summarization, chatbots, and code analysis. Their first requirement is a trained LLM or an LLM API − for instance, the ChatGPT API.
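For illustration, document ingestion might be sketched as follows; the loader class names follow the 2023-era `langchain` package, and the file path is hypothetical:

```python
# Sketch of document ingestion, assuming LangChain's loader classes.
# Which loader applies depends on the source file type.
LOADER_BY_EXTENSION = {
    ".pdf": "PyPDFLoader",
    ".txt": "TextLoader",
    ".csv": "CSVLoader",
}

def load_pdf_documents(pdf_path: str):
    """Load a PDF into LangChain Document objects (path is hypothetical)."""
    from langchain.document_loaders import PyPDFLoader
    return PyPDFLoader(pdf_path).load()
```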


The next step involves converting text to vector embeddings using LangChain's OpenAIEmbeddings wrapper. The embeddings can be saved in a vector store for pre-processing, vector search, and handling using a library like FAISS (Facebook AI Similarity Search).
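A sketch of this embed-and-index step is below; `chunk_text` is a simple overlapping splitter added for illustration, and the LangChain class names follow its 2023-era API:

```python
# Sketch of the embed-and-index step. chunk_text is a simple overlapping
# splitter; build_vector_store wires up LangChain's FAISS wrapper.
def chunk_text(text: str, size: int = 500, overlap: int = 50):
    """Split a document into overlapping chunks before embedding."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def build_vector_store(chunks):
    """Embed chunks with OpenAI and index them in FAISS (needs OPENAI_API_KEY)."""
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import FAISS
    return FAISS.from_texts(chunks, OpenAIEmbeddings())
```

The overlap keeps a sentence that straddles a chunk boundary retrievable from either side.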

The setup requires an OpenAI API key, or another LLM configured beforehand. After this, the vector embeddings need to be loaded. Once the setup is ready, LangChain runs the vector store's similarity search and fetches the relevant data for each query. The LLM API then turns the search results into readable answers to the query.
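The query-time flow can be sketched as follows; `build_context_prompt` illustrates the idea of stuffing retrieved passages into the prompt, and `answer_query` wires it up with LangChain class names per its 2023-era API:

```python
# Sketch of query-time retrieval and answer generation. The prompt
# wording and the k=4 default are illustrative assumptions.
def build_context_prompt(question: str, passages) -> str:
    """Combine retrieved passages and the question into one LLM prompt."""
    context = "\n\n".join(passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

def answer_query(vector_store, query: str, k: int = 4) -> str:
    """Retrieve the k most similar chunks and let the LLM compose an answer."""
    from langchain.chains import RetrievalQA
    from langchain.llms import OpenAI
    chain = RetrievalQA.from_chain_type(
        llm=OpenAI(),  # needs OPENAI_API_KEY
        retriever=vector_store.as_retriever(search_kwargs={"k": k}),
    )
    return chain.run(query)
```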

3. License LLMs like ChatGPT, Bard, or other GAIs for business: Computation cost is outsourced, but data control is low and customization potentially comes at a cost

The procedure primarily involves obtaining a business-centric GAI solution or a license from an AI company that may or may not provide customizability for your business. The approach offers the well-tested tools available in the market but requires a legal agreement to maintain the security of the in-house data.

Several platforms in the market provide GAI capabilities, such as Chatsonic by Writesonic, Perplexity AI, Jasper AI, OpenAI for business, etc. These tools should be evaluated for their cost and use-case fit with the business requirements. This approach requires minimal technical capability and is potentially the best fit for organizations that would rather invest in core business capacities than develop AI capabilities in-house.

As a final thought, any of the above approaches can be chosen based on a company's business criteria − for example, cost of development, engineering strength, data security, time-to-market and, most importantly, the risk posed to business solutions.

Next column: How to manage changes GAI brings to existing systems, processes and data storage


About the Author(s)

Vishal Nigam

Global Data Science and Development Director at Informa

Vishal is a data science and ML software engineering leader with expertise in AI, advanced analytics and AI consulting.

