How Ecosystems Can Solve Critical Enterprise AI Problems
Wissam "Will" Yafi
CEO & Founder


While many consumers have been enjoying the benefits of Artificial Intelligence (AI), particularly its generative elements, several risks have lately surfaced with Enterprise AI, among them security, privacy, accuracy, relevancy, and timeliness. This is leading some organizations to rethink AI, and some to even wind down initiatives while ‘awaiting new technology to emerge.’

These negative AI symptoms have an underlying cause that some attribute to the AI “brains” themselves, pointing to inexplicable hallucinations and the like. In this blog, I will argue that these symptoms have root causes that transcend the AI brain. Rather, they are related to the knowledge that the AI brain has to draw upon to provide its output. I will delve into three key root causes: generalized knowledge feeds that go into training AI, integration issues within and outside of the organization, and AI infrastructure that is incompatible with the organizational structure. Together, these factors are leading Enterprise AI deployments to underperform. Let us unpack each of these causes:

General Knowledge Feeds: 

Contrary to popular opinion, more knowledge does not necessarily mean a smarter or more accurate Enterprise AI. The common misconception is that feeding more of the organization’s data into the AI will make it smarter. It won’t, any more than reading books gives a child a bigger brain. The brain of a child, just like that of AI, already comes with amazing thinking abilities the minute it is born; in the case of AI, it has already been trained on billions of parameters. What both the human and AI brains need is the proper context (or knowledge feed) to draw out this intelligence in meaningful and productive ways.

For example, feeding a heart surgeon knowledge about rocketry may be interesting, but it will not make a better surgeon. Feeding them information about the latest surgical techniques is much more useful. Enterprise AI works similarly in many ways. The AI brains out there are already VERY smart and capable of ingesting knowledge. However, more general knowledge will not make them more accurate, more relevant, or timelier. Rather, it is a focused and specialized feed of knowledge that will focus the thinking of the Enterprise AI brain and deliver a higher level of accuracy, relevance, and performance. Therefore, somewhat counterintuitively, curating the knowledge feeds within an Enterprise AI system down to what is essential is likely to improve performance. This curation can occur before or after vectorization, a topic that will be tackled in a later blog.
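To make the idea concrete, here is a minimal Python sketch of pre-vectorization curation. The document fields, topics, and cutoff year are invented for illustration; a real pipeline would curate before handing documents to an embedding model:

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str
    topic: str
    text: str
    last_updated: int  # year of last update

def curate(docs, allowed_topics, min_year):
    """Keep only documents that are on-topic and recent enough
    to be worth embedding into the Enterprise AI's knowledge feed."""
    return [d for d in docs
            if d.topic in allowed_topics and d.last_updated >= min_year]

corpus = [
    Document("journal", "cardiac-surgery", "Latest valve repair techniques", 2023),
    Document("blog", "rocketry", "How staged rockets work", 2022),
    Document("archive", "cardiac-surgery", "1998 bypass protocols", 1998),
]

# Only the current, on-topic document survives curation.
feed = curate(corpus, allowed_topics={"cardiac-surgery"}, min_year=2015)
```

The point of the sketch is that curation is a filter applied before the knowledge ever reaches the model, so the AI reasons only over what is essential.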

Limited Integration with Knowledge/AI Outside the Organization:

Having discussed the importance of the knowledge feed to the performance of Enterprise AI, in many cases organizations will find that said knowledge feeds need to be complemented from outside sources. So what happens when the knowledge an Enterprise AI needs to sift through, or be trained on, resides outside the organization? Let’s call this the upstream scenario. Here is how it could play out: Partner B needs knowledge and learning that happens to reside inside Provider A. On the one hand, all the knowledge sits protected inside Provider A’s IT environment, which Partner B is not privy to, except perhaps through some closed and protected portal. On the other hand, Partner B has its own backend systems and wants to access the knowledge from Provider A in a curated manner and apply its own Enterprise AI to it. A slightly more realistic scenario has Partner B wanting to do the same not only with upstream Provider A, but also with Provider X, Provider Y, and Provider Z, all of whom have different knowledge feeds and possibly even different Enterprise AI systems. How would Partner B integrate with all of them seamlessly? The simple answer is that if each provider has a siloed AI stack, it couldn’t; and the AI’s benefits will always be limited.

As an example, if a hospital’s knowledge is based on dozens of siloed pharmas’ or healthcare suppliers’ knowledge feeds, it would not be able to streamline its Enterprise AI. Similarly, if a local government office needs information from state and federal agencies to complement its own, it would not be able to do so easily either. Nor would a university, which may have knowledge feeds from multiple science or tech providers. In all these examples, the Enterprise AI will be limited to the knowledge inside the organization, which may or may not be complete, relevant, or constantly updated, all of which will inevitably result in the deterioration of the quality of AI results.
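One way to picture the upstream scenario is Partner B merging curated feeds from several providers into a single knowledge base with provenance attached. This is a hypothetical sketch; the provider names and feed contents are invented, and a real integration would involve authenticated APIs rather than in-memory lists:

```python
def aggregate_feeds(provider_feeds):
    """Merge per-provider curated feeds into one knowledge base,
    tagging each item with its provenance and dropping duplicates."""
    knowledge_base, seen = [], set()
    for provider, items in provider_feeds.items():
        for item in items:
            if item not in seen:
                seen.add(item)
                knowledge_base.append({"provider": provider, "content": item})
    return knowledge_base

# Illustrative feeds from two upstream providers; "dosage tables"
# appears in both and is kept only once, credited to its first source.
feeds = {
    "Provider A": ["drug interaction guidelines", "dosage tables"],
    "Provider X": ["dosage tables", "supply-chain advisories"],
}
kb = aggregate_feeds(feeds)
```

Without a shared ecosystem layer doing this kind of merge, each provider’s siloed stack would force Partner B into one-off integrations per provider.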

Applied AI Infrastructure that is Incompatible with the Organizational Structure:

We are increasingly finding that Enterprise AI implementations are almost all monolithic in nature—meaning one knowledge feed and one AI interface. Unfortunately, that’s not how organizations operate. Different divisions within an enterprise rely on different knowledge to do their specific jobs and meet their goals. Sometimes knowledge can be shared amongst them; at other times, issues of privacy, security, intellectual property, and relevancy make it pointless, if not outright dangerous, to flood everyone with all the knowledge. What organizations need is the ability to parse knowledge, and hence AI, to meet the specific needs of their different constituencies.

For example, the line engineers of an oil and gas operation could use AI for maintenance purposes or to re-design a process, whereas human resources (HR) may need it to access and analyze employee records. The sales division might use it to understand pipeline data and perhaps prepare proposals, whereas the finance or auditing departments might want to understand trends or need help preparing confidential reports. Within highly secure environments, such as government, military, or intelligence agencies, the problem becomes even more acute. If an organization dumps all its knowledge into one monolithic AI system, how would it parse the output to match each user’s need and access profile? The short answer is that it couldn’t, at least not easily. Curation, orchestration, and automation become imperative to keep the Enterprise AI secure, private, accurate, relevant, and timely.
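A minimal sketch of what "parsing access" could mean in practice: each document in a shared knowledge store carries an access tag, and a query only sees documents whose tag matches the caller's role. The roles, tags, and store contents below are invented for illustration, not a prescription for any particular product:

```python
# Hypothetical role-to-tag policy; a real deployment would back this
# with the organization's identity and access-management system.
ACCESS_POLICY = {
    "engineering": {"maintenance", "process-design"},
    "hr": {"employee-records"},
    "finance": {"pipeline", "audit"},
}

def query(store, role, keyword):
    """Return only documents the caller's role is authorized to see
    and that match the keyword."""
    allowed = ACCESS_POLICY.get(role, set())
    return [doc["text"] for doc in store
            if doc["tag"] in allowed and keyword in doc["text"]]

store = [
    {"tag": "maintenance", "text": "pump maintenance schedule"},
    {"tag": "employee-records", "text": "employee leave records"},
    {"tag": "audit", "text": "quarterly audit report"},
]
```

Because filtering happens before the AI ever sees the documents, the same shared store can serve engineering, HR, and finance without flooding any one constituency with knowledge it should not access.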

How Can Ecosystem AI Help?

The above critical issues will ultimately be key determining factors in the success of any Enterprise AI deployment. Fortunately, they can all be resolved through an Ecosystem AI approach. Ecosystem AI provides organizations with the ability to orchestrate and govern the flow of knowledge into, out of, and within an organization. Because of its unique nodal architecture, Ecosystem AI provides the ability and flexibility to safely share knowledge between several organizations while maintaining privacy, security, relevancy, accuracy, and timeliness across the entire ecosystem. It provides organizations with the platforms and automation tools to:

  1. Feed focused knowledge that is relevant to training the Enterprise AI
  2. Integrate curated sources of knowledge from outside the organization
  3. Parse access so that only the right audience can prompt/query the pertinent and authorized knowledge

The ultimate success of Enterprise AI will not depend so much on the differential power of the AI brain being used (ChatGPT, Bedrock, Gemini, and Llama2 are all equally magnificent tools) as on the ecosystem within which the knowledge is housed, fed, shared, and trained, and on the ecosystem tools that allow this knowledge to be seamlessly and safely sourced, governed, updated, distributed, and secured.

Please visit https://www.tidwit.com/solutions/tidwit-ecosystem-ai/ to learn more about Ecosystem AI solutions from TIDWIT or contact us at info@tidwit.com
