An LLM for you

I’ve talked with CIOs and CTOs who find Large Language Models (LLMs) scary.

In fact, numerous companies have enough concerns that they have restricted their employees from using LLMs like ChatGPT.

Why?

They are concerned ChatGPT will eat their data and not give it back. At this point, it’s safest to assume anything typed into the prompt is consumed by the model and used to continue training it.

The minute a developer starts dropping code blocks into ChatGPT for help finding a bug, a potential code leak occurs.

The same goes for marketing, finance, HR, product, or any other kind of confidential company information.

It’s a real problem.

Early solutions are forming:

Bing Enterprise promises “user and business data is protected and will not leak outside the organization. You can be confident that chat data is not saved, Microsoft has no eyes-on access to it, and it is not used to train the models.”

Essentially, you can use these great LLMs, but the LLM won’t eat your data.

Another alternative is to build an LLM for your company, on your company’s data, with security restrictions guarding access. In this case, you might want the LLM to eat your data and continue to train based on how your business teams are interacting with it.

Finding LLM use cases at your company is quickly becoming not just feasible, but a competitive advantage.

Hit reply and tell me how you are thinking about LLMs at your company.

I might be able to help bring some of your dreams to reality (or tell you that they should stay dreams for the time being).

It was good to see you today,

Sawyer
