Generative AI Governance

illumex Omni

illumex Omni was created to streamline your organization’s interaction with Large Language Models (LLMs), such as Llama 2, and LLM services, such as GPT-4. Seamlessly integrated into your organizational chat workspace (such as Slack), Omni enables real-time, reliable data querying with enhanced contextual understanding and applied governance. Using a domain-specific engine that combines LLMs and graphs, Omni interprets your prompts, matches them to your organization’s unique business data semantics, and auto-generates SQL queries. This ensures your questions are not only well understood but also directed to the appropriate data assets, effectively minimizing the risks of generative AI hallucinations and ambiguous outputs.

When you pose a data question, Omni engages immediately. Leveraging illumex’s Generative Semantic Fabric, it maps the question to the corresponding semantic and data objects. The question is then translated into SQL, and all of the above is forwarded to your selected LLM as context. To maintain transparency, Omni interacts with you throughout this process, offering clarifications when your query seems ambiguous and suggesting relevant follow-up questions. All of this follows the augmented governance flows defined in the platform.
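The flow above — match the question to semantic objects, generate SQL over the matched assets, and bundle everything as context for the LLM — can be sketched in a few lines of Python. This is a minimal illustrative sketch only; all function names, data structures, and the toy semantic index are hypothetical and do not reflect illumex’s actual API.

```python
# Hypothetical sketch of the Omni query flow. All names are illustrative.

def map_to_semantics(question: str, semantic_index: dict) -> list:
    """Match question terms to known business-semantic objects."""
    terms = question.lower().split()
    return [obj for term, obj in semantic_index.items() if term in terms]

def build_sql(objects: list) -> str:
    """Auto-generate a SQL query over the matched data assets."""
    if not objects:
        # Mirrors the clarification step: an unmatched question is ambiguous.
        raise ValueError("Ambiguous question: ask the user for clarification")
    table = objects[0]["table"]
    columns = ", ".join(obj["column"] for obj in objects)
    return f"SELECT {columns} FROM {table}"

def build_llm_context(question: str, objects: list, sql: str) -> dict:
    """Bundle the question, semantic matches, and SQL as LLM context."""
    return {"question": question, "semantic_objects": objects, "sql": sql}

# Toy semantic index standing in for the Generative Semantic Fabric.
semantic_index = {
    "revenue": {"table": "finance.sales", "column": "revenue"},
    "region":  {"table": "finance.sales", "column": "region"},
}

question = "total revenue by region"
objects = map_to_semantics(question, semantic_index)
context = build_llm_context(question, objects, build_sql(objects))
print(context["sql"])  # SELECT revenue, region FROM finance.sales
```

The key design point the sketch reflects is that the LLM never guesses at your schema: it only sees a question already grounded in governed semantic objects and a pre-built SQL query.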

In essence, Omni acts as a business-context-aware interpretive layer between you and the LLM, ensuring that the data retrieved is both accurate and relevant to your organizational context. This makes illumex Omni an invaluable asset for organizations aiming to harness the power of generative AI without compromising their specific business language or data governance structures.
