Every quarter, a new model tops the leaderboard. GPT-5 replaces GPT-4o. Claude improves. Gemini catches up. For HR technology leaders evaluating AI readiness, this creates a seductive distraction: the belief that picking the right model is the critical architecture decision.
It isn't. The model you select today will be outperformed within months. What persists, and what determines whether your AI investments produce reliable workforce intelligence or expensive hallucinations, is the knowledge layer architecture sitting beneath the model. Connectors, orchestration, and structured layering are the durable investment. As Andreessen Horowitz documents in their emerging architectures for LLM applications, the orchestration layer between data sources and model inference is where production-grade systems succeed or fail.
The connector catalog problem
ServiceNow, Workday, SAP SuccessFactors: every major platform now markets its AI connector ecosystem. Dozens, sometimes hundreds, of pre-built integrations. The pitch is compelling: connect everything, let the model reason over it.
The problem is that connection is not orchestration. A connector moves data from point A to point B. It does not clean that data, resolve conflicting definitions across systems, extract discrete facts from unstructured sources, or structure knowledge for accurate retrieval. Most enterprise HR environments have five to fifteen systems generating workforce data, each with its own taxonomy, its own update cadence, and its own version of the truth about the same employee, role, or process.
Diagnostic question: When two systems in your HR tech stack define the same workforce concept differently (say, "headcount" or "time to fill"), which system wins, and does your AI architecture know the answer before it retrieves?
Without a deliberate knowledge layer architecture, your AI connector strategy produces volume without fidelity. The model gets more data; it does not get better data.
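The precedence question above can be made concrete in a few lines. The sketch below is illustrative, not any vendor's API: it assumes a hypothetical per-concept precedence list that declares which system of record wins, so conflicts are resolved deterministically before retrieval rather than left for the model to guess.

```python
# Hypothetical sketch: a declared system-of-record precedence per
# workforce concept. System and concept names are illustrative.
PRECEDENCE = {
    "headcount": ["workday", "successfactors", "servicenow"],
    "time_to_fill": ["greenhouse", "workday"],
}

def resolve(concept, values_by_system):
    """Return (winning_system, value), following the precedence order.

    Walks the declared order and takes the first system that actually
    reported a value for this concept.
    """
    for system in PRECEDENCE.get(concept, []):
        if system in values_by_system:
            return system, values_by_system[system]
    raise LookupError(f"No precedence rule covers {concept!r}")

# SuccessFactors wins here because Workday reported no headcount value.
system, value = resolve("headcount", {"successfactors": 1180, "servicenow": 1205})
```

The point is not the four lines of lookup logic; it is that the answer to "which system wins?" exists as an explicit, auditable artifact instead of an accident of whichever connector synced last.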
The five-layer intelligence refinery: from raw data to reliable HR AI answers
At Ikona Analytics, we process qualitative and organizational data through a five-layer intelligence refinery. Each layer performs a distinct transformation, and each has a specific failure mode when skipped.
Layer 1, Raw: Source material enters the system untouched. Interview recordings, document uploads, system exports. No transformation, full provenance.
Layer 2, Transcribed: Audio and unstructured inputs are converted to text with speaker attribution and temporal markers. The failure mode here is subtle: without accurate transcription, downstream extraction inherits errors that compound at every subsequent layer.
Layer 3, Cleaned: Redundancies, filler, and off-topic material are removed. Terminology is normalized against a controlled vocabulary. This is where conflicting definitions get flagged. When cleaning fails or is skipped, the fact-extraction layer inherits contradictions it cannot resolve, producing the downstream hallucination patterns that make executives distrust AI outputs entirely.
Layer 4, Fact-extracted: Discrete, atomic knowledge units are extracted from cleaned text. Each unit carries metadata: source, confidence, domain, and relationship to other units. This layer is the most technically demanding. It requires human-in-the-loop validation for ambiguous or conflicting claims, particularly when two interview subjects describe the same process differently. IBM Research's work on retrieval-augmented generation demonstrates why retrieval quality depends on the structure of the knowledge base, not the sophistication of the generation model.
Layer 5, Anonymized: Source attribution is stripped for governance compliance while provenance metadata (domain, confidence level, extraction date) is preserved. This allows the system to answer "which knowledge layer produced this insight?" without re-identifying individual contributors.
The output is a structured, queryable knowledge store optimized for retrieval relevance and accuracy.
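To make the layering tangible, here is a minimal data-model sketch of a knowledge unit moving through the cleaning and anonymization layers. The field names, layer functions, and controlled vocabulary are assumptions for illustration, not Ikona's actual schema.

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class KnowledgeUnit:
    text: str
    source: Optional[str]  # stripped at the anonymized layer
    domain: str
    confidence: float      # assigned at fact extraction
    layer: str             # provenance: which layer produced this form

def clean(unit: KnowledgeUnit, vocabulary: dict) -> KnowledgeUnit:
    """Layer 3: normalize terminology against a controlled vocabulary."""
    text = unit.text
    for variant, canonical in vocabulary.items():
        text = text.replace(variant, canonical)
    return replace(unit, text=text, layer="cleaned")

def anonymize(unit: KnowledgeUnit) -> KnowledgeUnit:
    """Layer 5: strip source attribution; keep domain and confidence."""
    return replace(unit, source=None, layer="anonymized")

raw = KnowledgeUnit(
    text="TTF for engineering roles averages 62 days",
    source="interview-17", domain="talent-acquisition",
    confidence=0.8, layer="fact-extracted",
)
final = anonymize(clean(raw, {"TTF": "time to fill"}))
# final.source is now None, but domain, confidence, and the layer
# marker survive, so retrieval can still answer "which layer produced
# this insight?" without re-identifying the contributor.
```

Note the design choice the sketch encodes: anonymization removes the `source` field but deliberately preserves `domain` and `confidence`, which is what lets governance compliance and provenance-aware retrieval coexist.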
Where tacit knowledge enters the architecture
Most HR AI readiness conversations focus on structured system data. They miss the largest knowledge gap entirely: the tacit knowledge that lives in the heads of practitioners and has never been documented.
Across our engagements, we consistently see that 40 to 60 percent of the operational knowledge driving HR decisions exists nowhere in any system. Compensation workarounds. Exception processes for specific business units. The real reason a prior transformation stalled. This knowledge is captured through 44 to 64 structured diagnostic interviews per engagement, each building on the findings of the last, and processed through the same five-layer refinery.
Diagnostic question: If your three most senior HR operations leaders left tomorrow, how much of what they know about how your processes actually work would survive in any system?
The refinery is not a replacement for your existing analytics or BI tooling. It operates alongside your HR tech stack as an independent knowledge layer, one that captures what your systems of record were never designed to hold.
A maturity lens for self-assessment

Where does your organization sit today?
Ad hoc connectors: Data moves between systems, but no orchestration layer normalizes, cleans, or structures it for AI retrieval. Model outputs are inconsistent.
Structured pipeline: Connectors feed into a managed data pipeline with transformation rules. Structured data is reliable; unstructured and tacit knowledge remain uncaptured.
Knowledge layer architecture: Structured and unstructured data, including tacit knowledge, flows through a layered refinery. Retrieval is optimized for relevance and accuracy. New questions can be answered from the existing knowledge store without re-collecting data.
Most HR organizations we assess are between the first and second level. The gap between the second and third is where durable AI advantage lives.
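The three levels reduce to two yes/no capability questions, which can be sketched as a toy self-assessment. The boolean criteria are paraphrased from the rubric above; this is not a formal scoring model.

```python
def maturity_level(has_orchestrated_pipeline: bool,
                   captures_tacit_knowledge: bool) -> str:
    """Map two capability questions onto the three maturity levels."""
    if has_orchestrated_pipeline and captures_tacit_knowledge:
        return "knowledge layer architecture"
    if has_orchestrated_pipeline:
        return "structured pipeline"
    return "ad hoc connectors"
```

Answering "no" to the first question places you at level one regardless of how many connectors you have installed, which is the connector-catalog problem restated as logic.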
Curious whether your organization is actually ready for AI? Ikona's AI Readiness Assessment maps your current knowledge layer architecture against the five-layer intelligence refinery model and identifies where connector gaps are creating retrieval risk. We'd welcome the conversation.
References
Andreessen Horowitz. "Emerging Architectures for LLM Applications." https://a16z.com/emerging-architectures-for-llm-applications/
IBM Research. "Retrieval-Augmented Generation (RAG)." https://research.ibm.com/blog/retrieval-augmented-generation-RAG
Harvard Business Review. "AI Strategy: All the Best Models Are Wrong." https://hbr.org/2024/07/ai-strategy-all-the-best-models-are-wrong
Written by
Bennet Voorhees
Bennet Voorhees is a founding partner at Ikona Analytics, bringing deep expertise in workforce intelligence, diagnostic methodology, and HR technology transformation from experience across Fortune 100 organizations.