
The First Step in AI Strategy May Not Be Strategy

  • Lee Akay
  • 6 days ago
  • 4 min read

Updated: 5 days ago


As AI systems rapidly shift from lab experiments to real-world platforms, more organizations are asking: “What’s our strategy for AI?” But too often, that question leads to months of internal discussions, vendor presentations, and stalled pilot projects. The challenge isn’t usually a lack of vision; it’s the widening gap between strategy and execution. And in AI, that gap grows fast.


At the Innovation Discovery Center (IDC), we’ve seen this firsthand. We’ve worked with organizations across the U.S. and China to scale advanced AI platforms in healthcare and enterprise environments. What’s become clear is this: AI deployment and user adoption aren’t failing because the models aren’t impressive enough. They’re failing because the critical infrastructure around them is incomplete or missing. Even the most advanced AI models can falter if they lack the operational scaffolding required to convert potential into performance.


That’s why we launched IDC's AI That Works series with a technical deep dive into system-level constraints. While it may have seemed like an unconventional opening for a dual-track series, it reflects a key finding: Models don’t succeed on their own. Production-ready solutions do.


Execution First: What China’s Getting Right 

Chinese organizations are increasingly adopting an infrastructure-first approach, one that prioritizes deployment speed, technical integration, and scalable adoption. This rapid, scaffold-first mindset accelerates implementation and improves adaptability, resulting in faster time to value and broader user uptake.

The layers that support large language models (retrieval layers, memory components, orchestration logic, security wrappers) are just as important as the model itself. Organizations can have a well-defined strategy and still fall short because the technical foundation cannot support execution at scale.
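To make the scaffolding concrete, here is a minimal sketch of those layers composed around a model call. Everything here is illustrative: `model` is a stub standing in for any LLM API, the keyword retriever stands in for a real vector store, and the redaction list stands in for a real security policy.

```python
from dataclasses import dataclass, field

# Hypothetical stub standing in for any LLM call; the name is illustrative.
def model(prompt: str) -> str:
    return f"ANSWER({prompt})"

@dataclass
class Pipeline:
    docs: list = field(default_factory=list)     # retrieval layer: a toy document store
    history: list = field(default_factory=list)  # memory component: prior turns
    blocked: tuple = ("ssn", "password")         # security wrapper: naive term blocklist

    def retrieve(self, query: str) -> list:
        # Naive keyword match in place of a real vector search.
        return [d for d in self.docs
                if any(w in d.lower() for w in query.lower().split())]

    def ask(self, query: str) -> str:
        # Security wrapper short-circuits before the model is ever called.
        if any(term in query.lower() for term in self.blocked):
            return "REFUSED"
        context = self.retrieve(query)
        # Orchestration logic: assemble context + memory, then call the model.
        prompt = f"context={context} history={self.history} question={query}"
        answer = model(prompt)
        self.history.append((query, answer))     # memory: record the turn
        return answer
```

The point of the sketch is that the model call is one line; everything else is the scaffolding the surrounding paragraphs argue for.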

Bridging the gap between technical possibility and operational reality demands a fundamental shift in mindset, one that favors experimentation, modularity, and production-readiness. The AI era is going to reward adaptability and decisive, rapid movement. The companies thriving in this environment are those willing to iterate forward, treating AI deployment as an evolving product, not a one-time transformation.


Beyond the Model: The Architecture of Scalable AI 

End-to-End Integration 

Successful organizations treat AI as an integrated ecosystem, not a standalone model.

One of our Chinese partners integrated large language models with domain-specific data pipelines, edge computing, and real-time monitoring. This pre-built scaffolding reduced deployment timelines for hospital AI projects by 45% compared to U.S. equivalents.

Popular AI frameworks are engineered to integrate seamlessly into cloud-native ecosystems, minimizing friction and accelerating time to deployment.

Budget priorities reflect this shift:

  • China allocates over 60% of its AI budget to integration layers and orchestration.

  • The U.S. allocates less than 40% for AI integration layers and orchestration, with greater emphasis on foundational model R&D.

Standardization and Interoperability 

China’s New Generation AI Development Plan emphasizes interoperability across tools, data lakes, and infrastructure. This reduces duplication and accelerates scaling.

  • Hospitals using medical AI inherit built-in FHIR compliance and automated patient data retrieval.

  • Shared infrastructure and standardized protocols allow for faster feature rollout and easier cross-vendor collaboration.

Meanwhile, many U.S. firms still struggle with fragmented tools (e.g., AWS vs Azure vs open-source stacks) and overhead, slowing their ability to respond to market opportunities.

Risk Tolerance and Rapid Experimentation 

China’s “fail fast, scale faster” culture contrasts with Western risk aversion.

  • At companies like Douyin, AI features are deployed in 2–4 week cycles using modular architectures that isolate failure risk.

  • Instead of waiting for perfect accuracy, firms release features, observe usage, and refine continuously.

Speed trumps perfection. While most firms delay launches to fine-tune performance, Chinese firms iterate in production to gain market advantage.


Where Projects Stall: “Last-Mile” AI 

There’s no shortage of innovation in U.S. AI research. Technologies like GPT-4 Turbo and Gemini Ultra are globally respected. But the gap between research and operational deployment remains wide.

Here are three recurring challenges we see in U.S. enterprise AI projects:

  • Integration Gaps - Technological Overwhelm: new models, toolkits, and architectures emerge weekly. Sorting hype from reality takes time. AI rarely works “on top” of existing systems; it requires thoughtful, staged integration. A hospital deploying GPT-4 for diagnostics often needs to bolt on retrieval, memory, and compliance layers, adding months of engineering.

  • Talent Gaps - Talent is scarce. Successful AI deployment requires a harmonious blend of organizational readiness and robust technical infrastructure. Many projects stall due to a lack of AI SMEs who can bridge data, tooling, and model logic.

  • Strategic Paralysis - On one hand, the urgency is clear. AI is evolving faster than any prior wave of technology, creating “deploy or fall behind” pressure. On the other, execution readiness is fragile. Business executives juggle operational demands alongside compliance and cannibalization fears, delaying decisions until it’s too late to lead. AI’s potential spans every function, but most companies struggle to prioritize and align use cases with measurable ROI. AI requires budget, executive attention, and a cultural shift: three things that are often locked in short-term cycles.


While Google’s LLM token processing for casual users skyrocketed 50x (from 9.7 trillion in April 2024 to 480 trillion in April 2025), U.S. enterprises are struggling to harness this scale. U.S. companies often focus on the model, while Chinese firms invest in the system around the model and are seeing results.


The Operational Backbone of High-Performance AI 

To close the operational gap, organizations must rethink where they start and what they prioritize:

Start with Infrastructure, Not Models 

Deploy integrated platforms with memory, retrieval, and control layers wired in, not just standalone models.

Adopt Modular Architectures 

Use rapid cycles by isolating components. For example, test an AI discharge planning assistant without overhauling the entire hospital EMR. This reduces complexity while validating utility.
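One way to picture that isolation is a narrow interface the rest of the system depends on, with the AI component trialled behind it. The names here (`DischargePlanner`, the planner classes) are hypothetical; the pattern is the point.

```python
from typing import Protocol

class DischargePlanner(Protocol):
    """Narrow interface: the workflow depends only on this, not on any vendor."""
    def plan(self, patient_summary: str) -> str: ...

class RuleBasedPlanner:
    # Existing deterministic logic keeps running in production.
    def plan(self, patient_summary: str) -> str:
        return "standard discharge checklist"

class LLMPlanner:
    # Experimental AI component, trialled behind the same interface.
    # In a real deployment this would call a model; here it is a stub.
    def plan(self, patient_summary: str) -> str:
        return f"ai-drafted plan for: {patient_summary}"

def discharge_workflow(planner: DischargePlanner, patient_summary: str) -> str:
    # The workflow (and the EMR around it) never changes; only the planner does.
    return planner.plan(patient_summary)
```

Swapping `RuleBasedPlanner` for `LLMPlanner` is a one-line change at the call site, which is what lets a team test the assistant without touching the EMR.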

Partner for Interoperability 

Work with vendors committed to standardization to reduce integration friction over time.

Embrace “Start-Now” AI 

Pilot narrow use cases to build momentum. Scale later. Velocity breeds internal confidence and confidence attracts investment. Start with projects that offer measurable ROI and infrastructure reuse. Balance centralized AI planning with localized execution. Prioritize delivery velocity.

Scaffolding for Impact 

As high-adoption AI platforms have shown, the “last mile” of AI is built on operational scaffolding: the understated, behind-the-scenes architecture that turns demos into deployed systems. Before drafting another roadmap, ask a different question: Are we investing in the system that will make AI work?

 
 
 
