Full and open disclosure - I am starting to dislike the term "Artificial Intelligence". I am irritated and frustrated with how the term gets excessively used, incorrectly reported and bolted onto every service proposition and product on the market. So, for this article, I will refer to AI simply as the new technology that can interact with humans, make predictions, categorise, and generate content. Importantly, it can utilise human-like processing. "Human-like", not "as a human", as the technology cannot yet act entirely as a human - we are close, but not there yet; it will come, though, and within our lifetime.
If you are struggling to get your head around all the AI buzz, it may help to just view AI as the new technology that (if done right) can reliably interact, predict, categorise and generate.
Three Essential Questions To Ask Before Launching Your AI Project
Faptic Technology
April 15, 2024
This article is written and published in conjunction with Faptic Technology
The Starting Question
Typically, the first questions when starting an AI journey revolve around what your objectives are with this innovative technology, the gains you envisage, and the benefits you hope to reap. However, for this article, I wish to delve into some additional questions - the not-so-obvious ones that, if addressed suitably, could help not only with cost and effort control, but also significantly mitigate the associated risks. All too often, I've witnessed organisations diving headlong into AI without adequately weighing up the strategy and the corresponding risks.
Imagine arriving at a brand-new hotel resort after a long flight and quickly sizing up its swimming pool. You're eager to enjoy a relaxing dip after your lengthy journey, but a wise traveller knows not to just jump in; perhaps, like me, you've encountered some less-than-stellar hotel pools in the past. Once burnt, you learn to assess the situation before taking that quick plunge: Is it deep enough to dive in? Is the water comfortably warm or chillingly cold? Is it well-maintained and clean, and are there unidentified objects at the bottom or floating on top? Similarly, the points I want to address in this article aim to ensure that your venture into AI meets expectations and safeguards your interests.
A Quick Fundamental
If you're already well-versed in the workings of AI, feel free to move ahead.
If, however, you are still trying to wrap your head around it, it is worth understanding a key fundamental: AI consists of three building blocks - a dataset, an algorithm, and a model.
In short, an algorithm is run on the dataset (text, numbers, images, audio, etc.) to build and refine a model, with "refine" often referred to as "learning". The model is the front face of AI: it is what commonly interacts with us, gives us predictions, categorises, and generates the content we want. Think of the model as the program the algorithm builds and, importantly, refines (through "learning"). The model, however, is only as accurate and safe as the dataset used and the algorithm's suitability and configuration (its weighting). It is the dataset and the algorithm that give us an accurate and practical model to create the output we want.
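To make the three building blocks concrete, here is a minimal sketch in Python using the scikit-learn library and its bundled Iris dataset; the library, dataset and algorithm are illustrative choices of mine, not anything prescribed in this article.

```python
# A minimal sketch of the dataset -> algorithm -> model relationship,
# using scikit-learn and its bundled Iris dataset (both illustrative choices).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)                    # the dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

algorithm = DecisionTreeClassifier(max_depth=3)      # the algorithm, with its configuration
model = algorithm.fit(X_train, y_train)              # "learning": the algorithm builds and refines the model

print(model.predict(X_test[:5]))                     # the model is what we interact with - it categorises new data
```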
Note that, to help with the explanation, I am simplifying a lot here. Typically, AI services utilise several models, often termed "compound model systems", to produce their results. These are different connected models that interact with the user, understand the request, create and refine the results, and finally return the output (the generated image, code, etc.).
(If you want a quick read on data, models and algorithms, I highly recommend https://machinelearningmastery.com/difference-between-algorithm-and-model-in-machine-learning/ by Jason Brownlee)
Three critical questions to ask
1. Will a pre-built model work, or am I building my own?
There is now a wide array of pre-built models to integrate and work with - this is what ChatGPT and Google's pre-built AI APIs are. Rather than building and training your own models, you can use these ready-made ones, which are easy to work with and have already been refined over time. Just be aware, however, that although they are generally significantly quicker to integrate, you have little or no control over the algorithm used, its weighting, or the dataset used to build and refine the model - it is someone else's data and someone else's approach to producing the output. And "someone else's data" is highly significant when considering what output you want.
It is the eternal consideration in tech - build your own or use off the shelf; a cost, risk and return decision. However, unlike when choosing between building your own application and buying a package, the risk with AI is much higher if accuracy and safety are essential. For example, when looking at accounting packages rather than building your own, one of your starting expectations is that it will do the sums correctly, i.e. that 2+2=4. With pre-built AI, however, you cannot assume this with the same confidence; 2+2 may equal 4, or infinity, or an elephant. Someone else has trained the model with their data.
I am absolutely not advocating that you can't trust or use pre-built models, just that, in assessing their suitability, you recognise that the risks are significantly higher than when making similar (non-AI) decisions. Committing to comprehensive testing of pre-built AI before use may significantly reduce your risk and be a wise investment.
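As a rough illustration of what that testing commitment can look like, here is a minimal sketch of a "golden set" harness in Python; call_prebuilt_model and the sample cases are hypothetical placeholders for whichever vendor API and domain questions apply to you.

```python
# A minimal sketch of a "golden set" test harness for a pre-built model.
# call_prebuilt_model() and the sample cases are hypothetical placeholders -
# swap in your vendor's real API call and questions from your own domain.

golden_set = [
    {"prompt": "What is 2 + 2?", "expected": "4"},
    {"prompt": "What currency is used in France?", "expected": "euro"},
    # ...enough cases to cover your real use, including edge cases and unsafe prompts
]

def call_prebuilt_model(prompt: str) -> str:
    # Stub for illustration only - replace with the real call to your chosen
    # pre-built model (e.g. a hosted chat-completion or classification API).
    return "4"

def pass_rate(cases) -> float:
    passed = sum(
        1 for case in cases
        if case["expected"].lower() in call_prebuilt_model(case["prompt"]).lower()
    )
    return passed / len(cases)

print(f"Golden-set pass rate: {pass_rate(golden_set):.0%}")
```

The point is less the code than the discipline: agree the pass rate you need before you integrate, and re-run the set whenever the vendor updates the model.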
2. Do we have sufficient and true data?
If you're taking the route of custom-building your model rather than going for the pre-built options, you'll have a wide range of algorithms at your disposal. These algorithms are well established for their ability to contribute towards building a successful model, so your choice isn't usually a high-risk factor. The critical part lies in adjusting and testing your model's weighting. A crucial cautionary note: if your source data is flawed - be it through misrepresentation, unwanted bias, or insufficiency - the results you get from your algorithm will be less reliable, compromising the efficacy of your model. In essence, it's vital to ensure that the data you're working with is of both sufficient volume and accuracy.
It's entirely feasible to create a model that performs tasks accurately and safely, provided you have the right data - a lot of well-intentioned but misguided ideas ("Hey, why don't we use AI for this...") are undone by a lack of quality, adequate data rather than by technological or resource constraints. When it comes to AI, a vital step before embarking on any project is a critical review of the data required.
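As a flavour of what that critical review might involve, here is a minimal sketch in Python with pandas; the file name and the "label" column are illustrative assumptions, and a real review would go much further (provenance, representativeness, consent, and so on).

```python
# A minimal sketch of a pre-project data review, assuming a tabular dataset in a
# CSV file; the file name and the "label" column are illustrative assumptions.
import pandas as pd

df = pd.read_csv("training_data.csv")

print(f"Rows: {len(df)}")                              # is there enough volume?
print(df.isna().mean().sort_values(ascending=False))   # proportion of missing values per column
print(f"Duplicate rows: {df.duplicated().sum()}")      # duplicates inflate apparent volume
print(df["label"].value_counts(normalize=True))        # class balance - heavy skew hints at bias or insufficiency
```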
3. What level of accuracy must we have, and how do we prove it?
Having the algorithm and the data might give you the capability to craft a model; however, the crucial question is how accurate and safe it is. Figuring out your model's achievable accuracy and its safeguards is of immense importance. Stepping into an AI implementation, especially a publicly accessible one, without being aware of the inherent risks could invite unsolicited trouble.
Measuring the accuracy and safety of AI is a nuanced task that goes beyond standard testing procedures. If your team lacks the essential mathematical skills, don't hesitate to seek assistance from an expert.
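To give a flavour of what "beyond standard testing" means, here is a minimal sketch in Python using scikit-learn; the bundled dataset and random-forest algorithm stand in for your own model and held-out data, and a real assessment would add checks for calibration, robustness and fairness on top.

```python
# A minimal sketch of going beyond a single accuracy number, using scikit-learn.
# The bundled breast-cancer dataset and random-forest algorithm stand in for your
# own model and held-out test data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
predictions = model.predict(X_test)

# Accuracy alone hides where the model fails; per-class precision/recall and the
# confusion matrix show which kinds of mistake it makes, and how often.
print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))
```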
People will want to break it
It will not have gone unnoticed by most people using the internet and social media that breaking and embarrassing AI is now the new internet sport. If you unleash an AI system on the general public, or even your clientele, there's an innate instinct to scrutinise it and, unfortunately, humiliate it - expect no mercy. In this connected world, any blunders your AI commits will race through the internet faster than any sensational celebrity photos (AI-faked or real). More than with any technology we've faced before, introducing AI into any business carries significant reputational risks. You do not want to be another entry in the AI Hall of Shame.
Final Comment
In conclusion, always remember that just as humans don't operate with absolute certainty, neither does AI; we generally make educated guesses based on our experiences and insights. We err on occasion, and the same goes for AI; just ensure that it will be as accurate and safe as you need it to be. If you can ensure that, go for it and have fun - the rewards from this new technology are truly game-changing.
Finally, I can't wrap up this article on AI without a nod to the incomparable Alan Turing, who stated, "Expecting a machine to be infallible means it can't be intelligent." This, however, seems contrary to my comments and the need to deal with the fallibility of AI tech. I admire Turing too much to dare argue with him on this point; instead, his words prompt me to reconsider how we label this groundbreaking technology. Given Turing's sentiments, using the term "Artificial Intelligence" at this time might just be misinforming; what we really have is an assortment of revolutionary and creative, but - always remember - fallible, tech advancements.
Alan Turing: https://en.wikipedia.org/wiki/Alan_Turing
For those wanting to understand more about the fundamentals of AI, I can recommend:
- How AI Thinks: How we built it, how it can help us, and how we can control it, by Nigel Toon; published by Torva; ISBN-13: 978-1911709466 https://nigeltoon.com/about
- Machine Learning for Absolute Beginners: A Plain English Introduction, by Oliver Theobald; published by Scatterplot Press; ISBN-10: 1549617214 https://scatterplotpress.teachable.com/
- Applied Artificial Intelligence: A Handbook for Business Leaders, by Mariya Yao, Adelyn Zhou, and Marlene Jia; published by TopBots; ISBN-13: 978-0998289021 https://appliedaibook.com/
- Jason Brownlee's Machine Learning blog: https://machinelearningmastery.com/about/