What AI Adoption Reminds Me of Past IT Transformations
Admittedly, I’m still in the early phases of learning about AI, but I’ve spent enough years in IT to recognize familiar patterns. Many of the conversations around AI adoption today sound almost identical to the ones I heard during earlier technology shifts, such as automation and cloud computing.
It’s easy to treat AI as something entirely new, but the real challenge is not the technology itself. It is how organizations expect it to behave. Common assumptions, such as expecting plug-and-play deployment, immediate cost savings, or vendor-led ownership, often create friction and risk. Organizations that approach AI with the same discipline applied to earlier IT transformations are, in my opinion, far more likely to achieve sustainable results.
A good illustration of these assumptions comes from Star Trek IV: The Voyage Home. In one scene, Scotty issues commands to a 20th-century computer by speaking directly into a computer mouse, expecting it to understand natural language and execute tasks flawlessly. The joke captures a common misconception about AI today: the belief that modern systems “understand” commands like a human would and can work perfectly without guidance. Just as in the movie, expecting AI to be fully intuitive and plug-and-play leads to frustration and errors. AI, like the older computer in the scene, still requires structure, clear instructions, and oversight to produce reliable results.
Below are several common observations and assumptions about AI implementation that echo lessons many IT teams have already learned during past technology transformations.
The Assumption That AI Is Plug-and-Play
Summary: AI is not a one-time installation; it requires ongoing tuning, data management, and operational alignment.
In past IT projects, new software systems were often treated as something that could simply be installed and turned on. Anyone who has lived through large enterprise implementations knows how rarely that worked. Data needed cleanup, integrations took longer than expected, and business processes had to change.
AI implementation follows the same pattern, but with less obvious failure signals. AI systems depend heavily on data quality, clearly defined use cases, and ongoing tuning. When those are missing, outputs may still look reasonable, even when they are not reliable.
AI adoption is not a one-time deployment. It is an ongoing operational capability that requires ownership, monitoring, and adjustment as the business evolves.
The Belief That AI Will Quickly Replace People
Summary: AI is best used to augment human work, not to fully replace it.
Automation has long been associated with cost reduction and workforce replacement. Earlier IT initiatives made similar promises, but the reality was usually more nuanced.
In practice, AI adoption tends to shift work rather than eliminate it. AI can automate specific tasks, but its greater value often comes from supporting people by improving consistency, accelerating analysis, and reducing repetitive effort. When organizations focus too heavily on replacement, they often introduce operational risk and resistance to change.
As with previous IT transformations, change management matters. People need to understand how AI systems affect their work and decisions. Without that clarity, adoption slows and trust erodes.
The Idea That Vendors Can Own the Entire AI Solution
Summary: Internal ownership and understanding of AI systems are essential for sustainable adoption.
Relying heavily on vendors or consultants is not new in IT. In many past initiatives, this approach led to long-term dependency and limited internal understanding.
AI makes this issue more visible. Even when organizations buy AI tools rather than build models themselves, internal teams still need to understand data sources, model outputs, limitations, and performance tradeoffs. Without that knowledge, it becomes difficult to govern AI systems or explain outcomes.
This also ties directly to AI governance. Clear internal ownership is essential. Someone must be accountable for decisions, changes, and results.
The Expectation That More Data Automatically Improves AI
Summary: Quality, relevance, and context of data matter more than sheer volume.
Data volume has been a focus of digital transformation efforts for years, but volume alone has never guaranteed insight. AI adoption reinforces this lesson.
AI systems depend on relevant, well-curated, and representative data. Large datasets that are poorly labeled or loosely aligned to the business problem often lead to inconsistent or biased outcomes. Smaller, higher-quality datasets aligned to specific AI use cases are usually more effective and easier to manage.
From an IT perspective, this is familiar ground. Data discipline matters more than data accumulation.
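What data discipline means in practice can be made concrete with a small quality gate run before training or fine-tuning. This is a minimal sketch, not a full data-validation framework; the field names ("record_id", "label") and thresholds are illustrative assumptions, not anything prescribed above.

```python
# Minimal data-quality gate: reject a training extract that fails
# basic completeness, duplication, and label-coverage checks.
# Field names and thresholds are illustrative assumptions.

def quality_report(records, required_fields=("record_id", "label")):
    """Summarize completeness, ID duplication, and label coverage."""
    total = len(records)
    missing = sum(
        1 for r in records
        if any(r.get(f) in (None, "") for f in required_fields)
    )
    ids = [r.get("record_id") for r in records]
    duplicates = total - len(set(ids))
    labels = {r.get("label") for r in records if r.get("label")}
    return {
        "total": total,
        "missing_required": missing,
        "duplicate_ids": duplicates,
        "distinct_labels": len(labels),
    }

def passes_gate(report, max_missing_pct=1.0, min_labels=2):
    """Apply simple acceptance thresholds to the report."""
    return (
        report["duplicate_ids"] == 0
        and report["missing_required"] / max(report["total"], 1) * 100
            <= max_missing_pct
        and report["distinct_labels"] >= min_labels
    )

sample = [
    {"record_id": 1, "label": "approve"},
    {"record_id": 2, "label": "deny"},
    {"record_id": 2, "label": ""},  # duplicate ID, missing label
]
report = quality_report(sample)
print(report, passes_gate(report))
```

A gate like this is deliberately strict and cheap to run: it rejects the extract above because of the duplicate ID and missing label, which is exactly the kind of quiet defect that large, loosely curated datasets tend to hide.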
The Push to Use AI Everywhere
Summary: Focused, high-value AI use cases deliver better results than broad, unfocused adoption.
When a technology gains momentum, organizations often feel pressure to apply it broadly and quickly. This pattern has repeated itself across multiple IT transformations.
AI adoption works better when it starts with targeted, high-value use cases. Focused AI implementations make it easier to define success metrics, manage risk, and build internal confidence. Not every process benefits from artificial intelligence, and recognizing that early prevents unnecessary complexity.
Experience also helps identify where not to use AI, particularly in decisions that require judgment, accountability, or ethical consideration.
The Impression That AI Understands Context
Summary: AI predicts patterns; it does not inherently understand nuance, intent, or consequences.
Because modern AI systems can generate fluent language and structured recommendations, they can appear to understand intent or business context.
In reality, AI systems operate on statistical patterns, not understanding or judgment. They do not inherently recognize priorities, consequences, or ethical boundaries. Human oversight remains essential, especially for decisions with operational, legal, or reputational impact.
This limitation connects directly to risk management and compliance. AI outputs must be reviewable, explainable, and governed, just like other enterprise systems.
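One common pattern for keeping humans in the loop is confidence-based routing: outputs the system is unsure about go to a review queue instead of being auto-applied. The sketch below is a hypothetical illustration; the threshold value and queue labels are assumptions, not a standard.

```python
# Confidence-based routing sketch: low-confidence outputs are
# diverted to human review rather than applied automatically.
# The threshold (0.85) is an illustrative assumption.

REVIEW_THRESHOLD = 0.85

def route(prediction, confidence):
    """Return a (destination, prediction) pair based on confidence."""
    if confidence >= REVIEW_THRESHOLD:
        return ("auto", prediction)
    return ("human_review", prediction)

print(route("approve", 0.95))  # ('auto', 'approve')
print(route("deny", 0.60))     # ('human_review', 'deny')
```

The routing itself is trivial; the governance value comes from the fact that every decision now has a recorded destination, which makes outputs reviewable after the fact.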
The Assumption That AI Governance Will Work Itself Out
Summary: AI requires early, clear governance to ensure accountability and prevent model drift.
In many past IT initiatives, governance was addressed after problems emerged, such as security gaps, compliance issues, or unclear accountability. AI adoption shows similar tendencies.
AI systems require defined ownership, review processes, and decision rights from the start. Without governance, models drift over time as data and business conditions change. These shifts are often subtle and easy to miss.
Strong AI governance does not slow progress. It enables sustainable adoption by making responsibility clear and outcomes traceable.
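Drift of the kind described above can be caught with simple distribution comparisons run on a schedule. As one possible sketch, the Population Stability Index (PSI) bins a reference sample and measures how far a newer sample's bin proportions have moved; the sample sizes, bin count, and thresholds below are illustrative assumptions.

```python
# Drift-monitoring sketch using the Population Stability Index:
# bin the reference sample, then measure how much the current
# sample's bin proportions have shifted.
import math
import random

def population_stability_index(reference, current, bins=10):
    """PSI between two numeric samples; larger means more drift."""
    lo, hi = min(reference), max(reference)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(1 for e in edges if x >= e)] += 1
        # Smooth zero counts so the log term stays defined.
        return [(c + 0.5) / (len(sample) + 0.5 * bins) for c in counts]

    p, q = proportions(reference), proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(42)
baseline = [random.gauss(0, 1) for _ in range(5000)]
stable = [random.gauss(0, 1) for _ in range(5000)]
shifted = [random.gauss(0.8, 1) for _ in range(5000)]

print(round(population_stability_index(baseline, stable), 3))
print(round(population_stability_index(baseline, shifted), 3))
```

A common rule of thumb treats PSI below roughly 0.1 as stable and above roughly 0.25 as significant drift; a scheduled check like this turns the "subtle and easy to miss" shifts above into an explicit alert that someone owns.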
Summary
From an IT standpoint, AI adoption looks less like a disruption and more like a continuation of familiar challenges. Clear goals, high-quality data, internal ownership, and ongoing oversight still determine success. Organizations that apply lessons from past IT transformations are better positioned to adopt AI in a way that is both practical and sustainable.