Answering Your Top AI Questions...
Without Using AI
PART ONE

Kai Andrews

Data/AI/Power Platform Practice Leader

Part One of a Two-Part Series: Big Questions with Not-So-Big (But Still Really Good) Answers

As eGroup Enabling Technologies’ Data & AI Practice Director, I get the privilege (seriously!) of meeting with individuals and organizations every day to explore together how to integrate modern data and AI capabilities into their business functions. For each conversation, we want to “meet folks where they are on their journey.” That’s just simple code for saying that some conversations are educational in nature, covering only basic AI capabilities, while other meetings dive deep into specific functionality and project needs. Regardless of the type of conversation, some topics and questions come up over and over again. So I figured—why not compile these questions and answers in a couple of blog posts so that we can all benefit from these insights?

Before we begin, I am a proponent of giving credit where credit is due, and therefore I want to send a shout out to my talented team of architects and engineers. These individuals have contributed most of the learnings that make up the answers below. Our team collaboration highlights that the rapidly evolving data and AI space is too big for one person to embrace alone. It requires a team of passionate researchers and developers sharing knowledge and assisting one another on a regular basis. As such, the answers below are not “my” answers, but rather represent the learnings and perspectives from a broad team of contributors. Now, let’s get to those questions and answers, shall we?

Question 1: What Is Artificial Intelligence (AI)?

Entire books have been written on this topic, so I’m not sure how to use a simple blog post to even begin tackling that subject. So let me refer to another question that I was asked recently to help narrow my answer. A customer challenged me to define AI without using the words ‘artificial’ or ‘intelligence.’ I love a challenge! Simply put, we see AI as “smart automation.” I don’t want to downplay the power of these new technologies and advances, but if we want to break these capabilities down to their basics, they are just automation routines. Mind you, the processing power, models, and adaptability of these new tools are truly revolutionary and allow us to leap way beyond simple process automation.

Systems can now decipher documents and images. They can generate new content and aggregate and distill large amounts of data. They can be easily integrated into many daily activities, freeing up time for more valuable activities (more on that point in a different Q&A pair). Finally, these new technologies are approachable. There are low-code development environments, and even the pro-code tools allow for relatively rapid proof-of-concept development. So, there you have it. AI is indeed the evolution of automation… or dare we say—revolution. 

Question 2: How Does AI Work?

Hey, what’s with all the hard questions? How am I supposed to use a paragraph to address how all the different AI capabilities work? Now don’t laugh, I do have a relatively concise answer here, and like my answer to Question 1 above, I’m going to use a two-word answer, “learning models.” Just like humans, AI needs to be taught and ultimately use those teachings to evolve its knowledge without additional human intervention—the very definition of artificial intelligence. AI is presented with “models” that allow it to learn and evolve. AI that interprets pictures is fed a large library of images (the model) to learn from. Document intelligence services are fed sample documents (the model) and taught how the document content and structure interrelate. Generative AI uses large language “models” that, in turn, allow the AI to respond to our questions and prompts.

For example, Microsoft 365 Copilot builds a model, called a semantic index (actually, it builds two if you are counting), from all of your emails, shared content, texts, and relationships. It is then able to use this model, combined with a large language model, to “read” your prompts and generate answers based on the relationships. The more it learns about you and your associations, the better it will be able to respond. The last part in making AI act effectively and “work” is providing context. Each AI solution is provided with guidance as to its purpose and the desired output. An AI system designed to detect cancer in patient images is given context as to what cancer is and what it does (and does not) look like. Microsoft 365 Copilot understands the professional work environment that it is deciphering. This context keeps the AI aligned to its tasks and should keep it from hallucinating. There you have it… AI works through learning models.
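To make the “learning from examples” idea concrete, here is a deliberately tiny, hypothetical sketch in plain Python (no real AI library, and nothing resembling how Copilot or a large language model actually works under the hood). It simply shows the pattern: a system is “taught” with a handful of labeled samples, and then uses what it learned to handle new input it has never seen:

```python
from collections import Counter

# A tiny "model": a handful of labeled training examples (the teaching material).
# These sample sentences and labels are invented purely for illustration.
training_samples = [
    ("please review the attached invoice for payment", "finance"),
    ("the quarterly budget report is due friday", "finance"),
    ("schedule a meeting with the hiring manager", "hr"),
    ("update the employee onboarding checklist", "hr"),
]

# "Learning" step: count which words appear under each label.
word_counts = {}
for text, label in training_samples:
    counts = word_counts.setdefault(label, Counter())
    counts.update(text.split())

def predict(text):
    """Score new, unseen text against each label's learned word counts."""
    scores = {
        label: sum(counts[word] for word in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(predict("approve the invoice and budget"))    # leans "finance"
print(predict("onboarding meeting with employee"))  # leans "hr"
```

Real learning models are vastly more sophisticated, of course, but the shape is the same: feed in examples, distill patterns, then apply those patterns to new input.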

Question 3: What Is The World’s Largest Artificial Intelligence?

This is a fun one! I can sum up my answer by simply quoting one mythical green Jedi Master: “Size matters not!” Seriously, though, the effectiveness of AI is not necessarily based on the size of the model being used to teach the AI. Sure, feeding an AI more examples allows for improved accuracy, but it is not necessary to employ the “world’s largest” AI to get effective results. For example, we recently designed a document intelligence solution that read through complex PDFs to find and extract certain data elements. We only needed to feed the system three sample documents before we started to see the AI function with an 80%+ success rate. Your mileage will surely differ, but it will come down to your use case/need and not necessarily the size of your training model.

Another proof point that size doesn’t necessarily matter is the fact that AI vendors are publishing more and more small models designed to address specific functions. Instead of having to use one of the largest natural language models (like OpenAI’s), smaller, more efficient, and more affordable models are available for use. Now, you are probably wondering, “Is he going to answer the actual question?” Well, that would be difficult to do because I’m not sure how to define and measure “size.” Are we talking large language models or some other AI tech? I’ll let OpenAI, Meta, Amazon, IBM, and others battle it out and keep innovating, providing us with the “right” models, but maybe not the largest.

Question 4: Is AI Dangerous?

To answer this one, I’m going to employ what many of you see as an annoying consultant trait—answering a question by asking another question.

How do you define dangerous? And from whose perspective?

Yes, AI can be dangerous. It is, after all, a form of automation (see my answer to Question 1 above). Automate the wrong task, or place too much decision-making trust in faulty AI logic, and bad results could follow.

What if an AI model was tasked with identifying outdated information and given the ability to delete said content based on defined retention logic? What if that AI was not trained well enough to understand exceptions, deleting critical messages and documents that put the firm at risk in legal proceedings, for example? That is dangerous in its own right. This is where the concepts embodied in “responsible AI” come into play. Every AI solution needs to be evaluated on an ethical basis and designed with human checks and balances, reviewing output and logic on a regular basis to mitigate the dangers.

What if we redefined “danger” altogether and took the perspective of someone whose job is at risk of being eliminated by AI? In that case, even the most effective AI tool is a threat. Once again, responsible AI practices can help mitigate this situation. Organizations can evaluate job impacts and elect to retrain employees. Not only should AI systems be transparent in their logic and intent, but so should organizational leadership. Publishing AI charters can go a long way in warding off fear and uncertainty posed by “dangerous” or “threatening” AI. I have no intention of downplaying the impacts, good or bad, that AI can have on our future. I simply want to highlight that humans are the orchestrators of our future and bear the responsibility to proactively address these concerns.

Question 5: What Are The Benefits of AI?

We believe that AI’s benefits can be visualized as a hierarchy. If we start at the top of this hierarchical pyramid, AI benefits can be summed up with two words: “efficiency” and “quality.” AI, at its core, is a modern and advanced form of automation (again, see my answer to Question 1 above). Automation should, by its very nature, produce outcomes more quickly due to process steps being removed or optimized. These same automations can improve the quality of the output due to computerized processes being more consistent and predictable. These outcomes depend on the AI solutions being both well designed and checked against responsible AI principles.

Each of these core benefits now expands into sub-benefits. Efficiency, for example, can lead to improved work/life balance, which leads to happier employees, which results in higher retention rates. Another benefit trail leads us from higher-quality customer experiences to an improved marketplace reputation, and ultimately to higher sales. There are many dependencies to realize these cascading benefits. I already mentioned the need for responsible AI and good design. Other requirements include a defined vision, adequate funding, long-term commitment, a willingness to experiment and possibly fail, and most importantly, transparency and communications across the organization. But, all of these dependencies are best tackled in a different answer…

Hop on over to PART TWO of this blog post to find out!

We Can Help!

If you’re unsure where to begin with AI, or need help planning, designing, and/or implementing your vision, visit our Artificial Intelligence page or reach out to info@eGroup-us.com.

Need Assistance Implementing AI?

Contact our team today to schedule a call with one of our experts.