1. Will a pre-built model work, or am I building my own?
There is now a good array of pre-built models to integrate and work with; ChatGPT and Google's pre-built AI APIs are examples. Rather than training your own model, you can use one of many pre-built models that are easy to adopt and have already been refined over time. Be aware, however, that although they are generally significantly quicker to integrate, you have little or no control over the algorithm used, its weighting, or the dataset used to build and refine the model; it is someone else's data and someone else's approach to producing the output. And "someone else's data" matters a great deal when considering what output you want.
It is the eternal consideration in tech: build your own or buy off the shelf; a cost, risk and return decision. However, unlike choosing between building an application and buying a package, the risk with AI is much higher when accuracy and safety are essential. When evaluating accounting packages, for example, one of your starting expectations is that the software will do the sums correctly, i.e. that 2 + 2 = 4. With pre-built AI, you cannot assume this with the same confidence; 2 + 2 may equal 4, or infinity, or an elephant. Someone else has trained the model with their data.
I am absolutely not advocating that you cannot trust or use pre-built models, just that in assessing suitability you should recognise that the risks are significantly higher than when making similar (non-AI) decisions. Committing to comprehensive testing of pre-built AI before use may significantly reduce your risk and be a wise investment.
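That testing need not be elaborate to be valuable. A minimal sketch of the idea: run the vendor's model over a labelled test set drawn from your own domain and measure how often it agrees with your expected answers. The function `call_prebuilt_model` here is a hypothetical stand-in (not a real API); in practice you would replace its body with a call to whichever vendor SDK you are evaluating.

```python
# Sketch: evaluating a pre-built model against our own labelled data before adoption.

def call_prebuilt_model(text: str) -> str:
    """Hypothetical stand-in for a vendor API call; replace with the real SDK."""
    return "positive" if "good" in text.lower() else "negative"

# A small labelled test set drawn from YOUR domain, not the vendor's.
test_cases = [
    ("The service was good and fast", "positive"),
    ("Terrible experience, never again", "negative"),
    ("Good value for money", "positive"),
]

correct = sum(1 for text, expected in test_cases
              if call_prebuilt_model(text) == expected)
accuracy = correct / len(test_cases)
print(f"Accuracy on our own data: {accuracy:.0%}")
```

The point is not the toy classifier but the discipline: the test set is yours, so the result reflects someone else's model applied to your data, which is exactly the risk you are trying to quantify.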
2. Do we have sufficient and true data?
If you take the route of custom-building your model rather than going for the pre-built options, you will have a wide range of algorithms at your disposal. These algorithms are well established for their ability to contribute to a successful model, so your choice of algorithm is not usually a high-risk factor; the critical part lies in adjusting and testing your model's weighting. A crucial cautionary note: if your source data is flawed, whether through misrepresentation, incorrect bias or insufficiency, the results from your algorithm will be less reliable, compromising the efficacy of your model. In essence, it is vital to ensure that the data you are working with is of both sufficient volume and accuracy.
It is entirely feasible to create a model that performs tasks accurately and safely, provided you have the right data. Many well-intentioned but misguided ideas ("Hey, why don't we use AI for this...") are undone by a lack of quality, adequate data rather than by technological or resource constraints. When it comes to AI, a vital step before embarking on any project is a critical review of the data required.
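A data review can start very simply: count what you have, how much of it is incomplete, and whether the categories you want to predict are represented in reasonable proportions. The sketch below uses an illustrative, made-up set of labelled records and only the standard library; the fields and labels are assumptions for the example.

```python
# Sketch: a minimal pre-project data review - volume, completeness and class balance.
from collections import Counter

records = [
    {"text": "invoice overdue", "label": "finance"},
    {"text": "reset my password", "label": "it"},
    {"text": None, "label": "it"},              # missing field
    {"text": "invoice query", "label": "finance"},
    {"text": "laptop broken", "label": "it"},
]

total = len(records)
missing = sum(1 for r in records if not r["text"])
balance = Counter(r["label"] for r in records if r["text"])

print(f"Records: {total}, missing text: {missing} ({missing / total:.0%})")
print("Class balance:", dict(balance))
```

Real reviews go much further (bias, representativeness, provenance), but even these three numbers can kill or confirm a "why don't we use AI for this" idea before serious money is spent.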
3. What level of accuracy must we have, and how do we prove it?
Having the algorithm and the data may give you the capability to build a model; the crucial question, however, is how accurate and safe it is. Determining the accuracy your model can achieve, and the safeguards it needs, is of immense importance. Deploying AI, especially a publicly accessible system, without understanding the inherent risks is inviting trouble.
Measuring the accuracy and safety of AI is a nuanced task that goes beyond standard testing procedures. If your team lacks the necessary statistical and mathematical skills, do not hesitate to seek assistance from an expert.
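One concrete example of what "proving" accuracy involves: a headline figure such as "92% accurate" means little without a margin of error that reflects the size of the test set. A minimal sketch, using the standard Wilson score interval and illustrative numbers (460 correct out of a hypothetical 500-case test set):

```python
# Sketch: quoting model accuracy with a 95% Wilson score interval,
# so the claim carries a statistical margin rather than a bare number.
import math

def wilson_interval(correct: int, total: int, z: float = 1.96):
    """95% Wilson score confidence interval for a proportion (z = 1.96)."""
    p = correct / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return centre - margin, centre + margin

# Illustrative: 460 correct predictions out of a 500-case test set.
low, high = wilson_interval(460, 500)
print(f"Observed 92.0%; 95% CI roughly {low:.1%} to {high:.1%}")
```

If the lower bound of that interval falls below the accuracy your application actually requires, you have not yet proved the model is good enough, however promising the headline number looks. This is the kind of analysis where expert statistical help pays for itself.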