Documenting Your Thoughts in Plain Text is the Key to Effective AI Tools

This is not the first time I have started an article by reminiscing about how ‘simple’ LLM chatbots were just a couple of years ago. They were riddled with hallucinations, logic issues, and biases when they first came out. Still, their abilities were absolutely mind-boggling, and highly skilled engineers have since tackled many of the limitations we faced as early users.

These tools have been getting sharp… really sharp. A combination of agent layering strategies and improved model training has made hallucinations virtually a thing of the past. Perhaps more importantly, context windows have grown by a tremendous margin, opening the door to tools that can consider hundreds of pages of text at once when formulating a response. For documents of even greater magnitude, there are now very reliable retrieval processes that integrate seamlessly with current models. It seems like there is only one thing left to tackle: context curation.

The way I see it, the three things you need for an AI tool to fully transform your business tasks are logic, accuracy, and context.

Logic

The available models are quite savvy now. There are certainly still areas for improvement, but across many benchmarks, LLMs are at or above the average person’s logical skills. What they lack in ‘superintelligence’, they make up for in response speed.

Accuracy

Models are dishing out relevant information and getting dramatically better at avoiding incorrect statements in their outputs. As I said, hallucinations are rarer still with the checks and balances built into platforms like Claude, ChatGPT, and Gemini.

Context

The most popular models today support context windows of over 100K tokens (about a hundred pages of plain text, give or take).

Sounds pretty great, especially when you consider that these pillars are still improving rapidly. But no amount of development will provide the same utility as simply making full use of a large context window, and barring serious invasions of privacy, nobody else can provide a tool with as much context about your life as you can. It is up to you to contextualize your thoughts in order to build truly seamless AI tooling.

Context is the drawer where you leave your keys at home, the street you always avoid because of its potholes, the coworker you never ask questions because you know they will give you an incorrect answer. These facts are what allow us to function day to day, yet we take all this information stored away in our brains for granted!

  • If I want a bot to communicate for me at work, it must know what I think, how I talk, who I communicate with, and what my job is.

  • If I want a bot to write SQL for me, it needs to know the structure of my database, the nuances of my datasets, where to find the most up-to-date tables, and what costs and constraints I face (a sketch of this kind of documentation follows this list).

  • If I want a bot to adequately diagnose my medical issues, it needs the data from my physicals, a history of my prior conditions, and information about my environment.
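
To make the database case concrete, here is a minimal sketch of what that plain-text documentation might look like. Every table name and note below is invented for illustration:

    ORDERS (updated nightly; prefer over ORDERS_LEGACY)
      - order_id: primary key
      - customer_id: joins to CUSTOMERS.customer_id
      - total_usd: includes tax; use net_usd for revenue reporting
    CUSTOMERS (updated weekly)
      - customer_id: primary key
      - region: two-letter codes; 'XX' means unknown
    Note: ORDERS_LEGACY is deprecated and expensive to scan; avoid it in new queries.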

AI might have some room to grow in areas related to reasoning, but grow it will. Personalized context will always be the limiting factor, and if you build a data curation strategy now, you will have a competitive edge on powerful use cases for this technology both today and in the future.

The solution? Journaling. Whether your goal is to tackle your health issues, give your team a powerful coding bot, or enlist the help of a worthy assistant, journaling is your key to AI success. A simple, reasonably well-maintained document of organized thoughts can get any chatbot up to speed on the task at hand.

Below is an example of how one might document the information they learn at a new job. The layout is not especially important beyond following a generally logical flow.
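
For instance, a few entries in a new-job journal might look like this (all details invented):

    Week 1: Getting oriented
      - Manager prefers questions over Slack, not email; weekly 1:1 on Tuesdays
      - Deploys happen Thursdays; do not merge to main after Wednesday afternoon
      - The #analytics channel is the fastest place to get data questions answered
    Week 2: Tools and access
      - VPN required for the reporting dashboard; request access through the IT portal
      - Monthly reports are due the last Friday of the month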

I often employ this practice myself in my day-to-day work. My organization has a large database with little documentation, hosted on a platform without automatic AI integrations. By taking the painstaking hours to document the available tables, schemas, and other important considerations in plain text, I make it possible for any LLM to ingest my prompts and spit out flawless queries in one shot, with no further manipulation. I also document in excruciating detail many of the analytical questions I receive (however vague) along with the query I would write to answer each one. Kept up consistently, this practice will give me an AI tool that can ingest difficult questions and generate an employee-quality response.
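
As a rough sketch of how that documentation gets put to work, the snippet below prepends the plain-text notes to every question before sending it to a model. The file name is hypothetical and the model name is illustrative; this happens to use the Anthropic Python SDK, but any chat interface works the same way:

    # pip install anthropic
    from pathlib import Path
    import anthropic

    # Plain-text documentation of tables, schemas, and cost considerations
    schema_notes = Path("schema_notes.txt").read_text()

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def ask_sql(question: str) -> str:
        """Send an analytical question to the model with the schema notes as context."""
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # illustrative model name
            max_tokens=1024,
            system=(
                "You write SQL for our data warehouse. Use only the tables "
                "and caveats documented below.\n\n" + schema_notes
            ),
            messages=[{"role": "user", "content": question}],
        )
        return response.content[0].text

    print(ask_sql("What was net revenue by region last quarter?"))

The point is not the particular SDK; what matters is that the documentation rides along with every prompt.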

Want to learn more about how your business can leverage the power of today’s technology? Read about how we can help at Abel Analytics and schedule a free consultation here.
