In an ideal world, as our whole team uses Claude or ChatGPT, the context and memories of the LLMs we interface with would tune toward our use cases, enabling a kind of meta-learning on top of our usage. Currently, ChatGPT and Claude offer customization via memories and projects, which can help tune this contextual awareness in the language models.

I’d recommend saving common workflows and prompts in an AI Cookbook, as we do at 50Y. That way, as you tune these prompts, the whole team gets better at using these tools, and the cookbook works across platforms too.

ChatGPT doesn’t allow you to share projects yet, so the only shared knowledge base option is a custom GPT shared with your team. A custom GPT can have a shared knowledge base, but there is no memory transference between users. Projects are still a good way to tune context, with memories and use cases oriented toward a specific business goal, and I can’t imagine OpenAI won’t add project sharing as a feature in the future.

Claude allows shared projects across team accounts. Chats within projects, as well as project knowledge bases, can be collaboratively evolved with teammates, which is great for the evolving team use case. More documentation here: https://support.anthropic.com/en/articles/9519189-project-visibility-and-sharing.

The more technically involved path is building an agent or facade that the whole team chats with. A team-wide agent with long-term (context, memories, knowledge) and short-term (session-specific) memory would allow the entire team to collaboratively tune and contextualize the performance of the agent (or agents).
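To make the long-term vs. short-term split concrete, here is a minimal sketch of that idea. All the names (`TeamAgent`, `remember`, `chat`) are hypothetical, the shared long-term memory is just a JSON file, and the LLM call is stubbed out as assembling a prompt; a real implementation would persist to a shared store and send the assembled context to an actual model API.

```python
import json
from pathlib import Path


class TeamAgent:
    """Hypothetical sketch: a team-wide agent with shared long-term
    memory (persisted, visible to everyone) and per-session short-term
    memory (discarded when the session ends)."""

    def __init__(self, memory_path: Path):
        # Long-term memory: a shared, durable store the whole team tunes.
        self.memory_path = memory_path
        if memory_path.exists():
            self.long_term = json.loads(memory_path.read_text())
        else:
            self.long_term = []
        # Short-term memory: specific to this session only.
        self.short_term = []

    def remember(self, fact: str) -> None:
        # Anyone on the team can add a durable fact; it persists for everyone.
        self.long_term.append(fact)
        self.memory_path.write_text(json.dumps(self.long_term))

    def chat(self, user: str, message: str) -> str:
        self.short_term.append({"user": user, "message": message})
        # A real implementation would send long_term + short_term as
        # context to an LLM API; here we just return the assembled prompt
        # so the memory layering is visible.
        return "\n".join(
            self.long_term + [m["message"] for m in self.short_term]
        )
```

The key design point is that `remember` writes through to shared storage, so one teammate's tuning immediately benefits everyone, while `short_term` stays local to the session the way a single chat thread does.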