Designing AI Services for the Energy Sector: Talking to Engineers About Design (Without Losing Them)
Sometimes I forget how uneven the design maturity landscape is across the energy and industrial sectors. I’ve been fortunate to work alongside some of the sharpest design minds at major energy companies. Over time, you get comfortable with your pitch. It gets easier when trust is established. But what happens when you’re speaking to a client who doesn’t see the value in service design at all?
I work closely with engineers of all stripes. This article isn’t just for them—but they’re the lens I see this world through. And honestly, “lost in translation” doesn’t begin to describe the gap that often exists between design and engineering.
As companies rush to modernize, digital tools like AI, automation, and simulation are being deployed to boost efficiency and resilience. Yet many of these efforts fall short—not because the tech is broken, but because it doesn’t mesh with how people actually work. That’s where service design steps in.
But drop the term service design in a room full of engineers, and you might as well be speaking elvish. To many, it sounds soft, vague, or even unnecessary. Engineers build systems—concrete, measurable, and constrained. Service designers? We traffic in journeys, sticky notes, and ambiguous user needs. No wonder these two worlds often talk past each other.
This article is a translation guide. If you’re a service designer working with engineers—or an engineer trying to make sense of design jargon—I’ll break down how service design maps to engineering logic. We’ll connect personas to user requirements, journey maps to process flows, and blueprints to system diagrams—transforming service design from a “nice-to-have” into a strategic tool.
Bridging Two Mindsets
Both disciplines share a common mission: making complex systems usable, reliable, and efficient. But their starting points are different.
Engineers often begin with what’s technically feasible. They work within constraints like voltage ranges, API limits, safety regulations, and hardware tolerances. Their job is to ensure the system performs as expected under known operating conditions. They optimize for functionality, scalability, and resilience.
Service designers, by contrast, begin with what’s humanly desirable. They focus on how people actually interact with those systems—what makes them adopt, trust, understand, or reject them. Their domain includes everything from onboarding flows to error states, from how information is presented to when and how a human intervenes. They optimize not for raw functionality, but for usability, fit, and experience quality across time.
The difference isn't just philosophical—it's systemic.
Where an engineer might ask, “Does the system behave correctly under edge conditions?”, a designer might ask, “Does the user know what to do when an edge case hits?” One ensures the output is mathematically valid; the other ensures it’s actionable and comprehensible.
These aren't opposing goals—they're complementary perspectives on system performance. In fact, engineers are designers in their own right: they design for mechanical stability, data integrity, or process efficiency. What service designers add is the human element—designing for perception, cognition, motivation, and error. It’s not about “soft skills.” It's about accounting for variability in the most unpredictable subsystem of all: the user.
This is especially critical in modern industrial systems, where humans and machines are increasingly intertwined. Whether it’s a technician responding to AI-generated maintenance alerts, or an operator managing a semi-autonomous system, the interface between human and machine is no longer a side concern—it’s the control surface.
Translation Tip:
When talking to engineering audiences, reframe service design as the optimization of the human-machine interface over time. Forget buzzwords like “delight” or “moments that matter.” Instead, anchor your language in system outcomes: reducing task friction, minimizing cognitive load, lowering error rates, and increasing throughput at the user level. Think of it as debugging workflows, not just UIs.
Ultimately, great service design isn’t about making systems prettier. It’s about making them perform better—because a system that works technically but fails to integrate with human behavior is still a broken system.
Personas = User Requirement Specs
What designers call personas, engineers might describe as user requirements, operator profiles, or interaction models. But regardless of the label, the purpose is the same: to simulate real-world usage so that the system performs reliably in practice—not just in theory.
A persona isn’t fluff—it’s a structured representation of a user role, grounded in field research and behavioral patterns. It captures how people actually interact with a system under environmental, technical, and organizational constraints.
Imagine your client wants to develop a machine learning model to assist control room operators by flagging anomalies in sensor data. On paper, the AI may perform with 95% accuracy. But if the operator persona reveals that these users work in 12-hour shifts, manage multiple screens, and must respond to alerts within 30 seconds—then that AI isn’t just an algorithm anymore. It's part of a real-time decision-making workflow, and the way alerts are surfaced, prioritized, and dismissed becomes mission-critical.
A persona might uncover that operators tend to ignore low-priority alerts due to alert fatigue, or that they distrust AI suggestions unless accompanied by clear evidence. That’s not a user preference—it’s a constraint on system trust and adoption. If ignored, the AI won't just be underutilized; it might be actively circumvented.
For engineers, personas offer a way to simulate not only technical interactions but human variables like trust thresholds, time pressure, error tolerance, and response patterns. These become essential design inputs—especially for AI systems, where the success of the technology hinges not just on model performance, but on how it's received and used by people.
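To make the analogy concrete, here is a minimal sketch in Python of how an operator persona could be encoded as a machine-checkable requirement spec. Every field name and threshold below is hypothetical, invented for illustration rather than drawn from any real project.

```python
from dataclasses import dataclass

@dataclass
class OperatorPersona:
    """A persona expressed as operational constraints (all values hypothetical)."""
    shift_length_hours: float = 12.0
    max_alert_response_s: float = 30.0  # operators must act within this window
    concurrent_screens: int = 4         # attention is split across displays
    requires_evidence: bool = True      # distrusts AI suggestions without rationale

@dataclass
class AlertDesign:
    """Proposed behavior of the AI alerting feature (also hypothetical)."""
    time_to_acknowledge_s: float
    shows_supporting_evidence: bool

def fits_operational_envelope(persona: OperatorPersona, design: AlertDesign) -> list[str]:
    """Return the persona constraints the design violates, like failed requirement checks."""
    violations = []
    if design.time_to_acknowledge_s > persona.max_alert_response_s:
        violations.append("Acknowledging an alert takes longer than the response window.")
    if persona.requires_evidence and not design.shows_supporting_evidence:
        violations.append("Alerts omit supporting evidence; operators may distrust them.")
    return violations

if __name__ == "__main__":
    persona = OperatorPersona()
    design = AlertDesign(time_to_acknowledge_s=45.0, shows_supporting_evidence=False)
    for problem in fits_operational_envelope(persona, design):
        print("FAIL:", problem)
```

Framed this way, a persona review looks less like empathy theater and more like a requirements audit: the design either fits the operational envelope or it doesn’t.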
Translation Tip:
Present personas as operational variables within human-machine systems. Instead of emotional goals or abstract motivations, focus on practical constraints:
Say:
"This user profile reflects cognitive load, trust patterns, and workflow interruptions under typical shift conditions. We’re using it to validate whether the AI system fits the operational envelope—not just the accuracy spec."
Journey Maps = Human-Centered Process Flows
Engineers love process flows. Journey maps are the same thing—with people in the loop.
Engineers use process flows to visualize how inputs move through a system—tracking assets, signals, materials, or data as they pass through states, decisions, and constraints. These diagrams are essential for optimizing throughput, identifying failure modes, and ensuring system stability.
Journey maps serve the same function—but for humans.
Where a process flow might chart the movement of a product through a refinery or the logic of an automation script, a journey map traces the movement of a person through a system: a technician submitting a work order, an operator responding to an alert, or a stakeholder reviewing a dashboard. It visualizes not just what happens, but how it feels, why delays occur, and where breakdowns emerge from real-world friction.
In a journey map, you’re tracking:
The user’s goals and tasks
The sequence of actions they take
The tools or interfaces they interact with
Decision points and ambiguity
Pain points, workarounds, and bottlenecks
The value is in surfacing mismatches between the system’s intended flow and the user’s lived experience.
Take this real example:
At a major energy company, a design team mapped the journey of a maintenance request—from offshore rig to onshore approval. The assumption was that delays were caused by digital workflow inefficiencies. Instead, the journey map uncovered that once the digital request was submitted, it was printed out by an assistant and physically walked to a supervisor’s desk for sign-off—a workaround no one had documented. This analog step created a 72-hour bottleneck. No dashboard, ticketing system, or asset tracker would have identified it, because the delay didn’t occur in the system; it occurred around it.
That’s the power of journey mapping: it brings visibility to the informal, undocumented, human-shaped parts of a process that are invisible in standard system diagrams.
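If it helps to see the analogy in code, here is a toy Python sketch of a journey map treated as a process flow. The steps and durations are a hypothetical reconstruction of the story above, not data from the actual project.

```python
from dataclasses import dataclass

@dataclass
class JourneyStep:
    actor: str         # who performs the step (person or system)
    action: str
    duration_h: float  # observed, not assumed, duration
    documented: bool   # does this step appear in the official process?

# Hypothetical reconstruction of the maintenance-request journey described above.
journey = [
    JourneyStep("Offshore technician", "Submit digital maintenance request", 0.5, True),
    JourneyStep("Workflow system", "Route request to onshore office", 0.1, True),
    JourneyStep("Assistant", "Print request and walk it to supervisor", 72.0, False),
    JourneyStep("Supervisor", "Sign off on request", 0.5, True),
]

total = sum(step.duration_h for step in journey)
for step in journey:
    share = step.duration_h / total
    flag = " <-- undocumented bottleneck" if not step.documented and share > 0.5 else ""
    print(f"{step.actor}: {step.action} ({step.duration_h}h, {share:.0%}){flag}")
```

The printout makes the 72-hour analog hop impossible to miss, which is exactly what the mapped journey did for the team.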
Translation Tip:
Frame a journey map as a user-centered process diagram with behavioral variables. Instead of materials or data moving through valves or gateways, you're tracing tasks, handoffs, and decisions through people and interfaces.
Say:
“This is a user-view process map. We're tracking task sequences, decision gates, time delays, and where human behavior deviates from expected system logic. It's the same as a control flow diagram—but the control loops include people.”
This helps engineering teams realize that journey mapping isn’t about storytelling—it’s about surfacing hidden inefficiencies in human-system interaction. And in complex operations, that’s often where the real risks—and opportunities—live.
Service Blueprints = Layered System Diagrams
A service blueprint is a multi-layered system diagram—with human, system, and support layers.
Engineers are deeply familiar with system diagrams—whether they’re drawing out electrical circuits, control logic, API interactions, or infrastructure dependencies. These diagrams help break down complexity, isolate fault domains, and trace cause and effect.
A service blueprint works exactly the same way—but applies that logic to human-centered systems.
At its core, a service blueprint visualizes how a service is delivered, across both the visible and invisible parts of the system. It consists of multiple “swimlanes,” typically including:
Frontstage (User Interaction Layer): What the user directly sees and does—interfaces, touchpoints, decision points.
Backstage (System and Staff Operations): What happens behind the scenes—API calls, support teams, internal workflows, data pulls, approvals.
Support Processes: Background systems and dependencies that enable the service to function—scheduling systems, ticketing platforms, asset databases, notification engines, etc.
Think of it like a layered control system: instead of sensors and actuators, you have users and interfaces. Instead of PLCs or code modules, you have support staff, backend logic, and business rules.
This layered view allows teams to trace how a user-facing experience is actually supported across the full operational stack—people, platforms, and processes.
Consider a case where an operator opens a mobile app expecting to see real-time asset health—but instead sees outdated or missing data. Is the problem in the UI? The API? The data pipeline? Or maybe the operator’s shift wasn’t correctly synced with the asset tracking system?
A service blueprint allows you to “debug” the experience—just like you would trace a broken data flow or control signal in a system diagram. You follow the flow across the layers to find the root cause, which might not be visible in any one part of the system alone.
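To push the debugging analogy, here is a hypothetical Python sketch of tracing that stale-data symptom through blueprint layers the way you’d trace a fault through a system diagram. All layer checks are hard-coded stubs, purely for illustration.

```python
# Each blueprint layer gets a health check, ordered from frontstage to support.
# The results below are fixed stand-ins for what a real investigation would find.

def check_frontstage() -> tuple[bool, str]:
    return True, "UI renders whatever data it receives"

def check_backstage() -> tuple[bool, str]:
    return True, "API responds, but serves cached payloads"

def check_support() -> tuple[bool, str]:
    return False, "Shift roster not synced with asset-tracking system"

BLUEPRINT_LAYERS = [
    ("Frontstage (user interaction)", check_frontstage),
    ("Backstage (system/staff ops)", check_backstage),
    ("Support processes", check_support),
]

def trace_experience_fault() -> None:
    """Walk the layers until a failing check is found, like signal tracing."""
    for name, check in BLUEPRINT_LAYERS:
        ok, detail = check()
        status = "OK " if ok else "FAIL"
        print(f"[{status}] {name}: {detail}")
        if not ok:
            print(f"Root cause candidate: {name}")
            return
    print("No layer-level fault found; inspect cross-layer handoffs.")

trace_experience_fault()
```

The value is the same as in signal tracing: the failing layer is rarely the one where the symptom shows up.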
This is especially useful in complex, multi-touchpoint environments—like field services, control room dashboards, or automated ticketing systems—where human and machine responsibilities are tightly interwoven.
Translation Tip:
When talking with engineers or technical stakeholders, describe the service blueprint as a cross-functional system diagram that ensures experience reliability. It provides visibility into how well the “service stack” performs under real-world conditions.
Say:
“This is like a control diagram for the service experience. It shows dependencies across users, systems, and teams—so we can trace where breakdowns happen. Just like a signal diagram ensures data integrity, this ensures service integrity across the human-machine interface.”
Framed this way, service blueprints become not just design tools—but operational risk management assets. They enable teams to spot breakdowns before they escalate, and to design resilient services that perform under both human and technical constraints.
Prototypes = Simulations
Engineers are deeply familiar with simulation—whether it’s modeling structural load, fluid dynamics, electrical behavior, or system throughput. Simulations let you test assumptions under controlled conditions, predict failure points, and optimize performance before anything goes live.
Service design prototypes serve the exact same function—except the subject being tested is human behavior.
A prototype might simulate a new digital form, ticketing process, or decision support tool. It might take the form of a clickable wireframe, a role-play script, or even a paper mock-up. Regardless of fidelity, the purpose is the same: test how a real user reacts under realistic conditions.
For example, a design team might prototype a mobile workflow for field technicians to log equipment issues. The prototype helps answer questions like:
Do users understand which fields to fill out?
Can they complete it with one hand, wearing gloves?
Do they know what happens next once the form is submitted?
Does the workflow align with actual field conditions and mental models?
These tests reveal more than preference—they expose points of confusion, delay, misuse, or abandonment. They surface behavioral failure modes. And just like engineering simulations, they’re cheap to run and expensive to skip.
In complex environments—especially where humans and machines interact—prototypes are the behavioral equivalent of a digital twin. They allow you to observe how people interpret the system, where they improvise, where they stumble, and where assumptions break down.
Crucially, the goal isn’t perfection—it’s risk reduction. Just like an engineering team wouldn’t deploy new infrastructure without simulating load, designers use prototypes to de-risk human interaction before full implementation.
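For a flavor of what “behavioral simulation” can mean in practice, here is a toy Monte Carlo sketch in Python. Every probability in it is invented; in a real study these numbers would be fitted to observed prototype sessions, not guessed.

```python
import random

def simulate_form_completion(wearing_gloves: bool, trials: int = 10_000) -> float:
    """Estimate the share of technicians who finish the form without an input error.

    The per-field error rates are hypothetical placeholders; a real model would
    be calibrated against observations from prototype testing.
    """
    base_error = 0.02                 # mis-tap rate per field, bare hands (assumed)
    error = base_error * (4 if wearing_gloves else 1)
    fields = 8                        # fields in the hypothetical mobile form
    successes = 0
    for _ in range(trials):
        if all(random.random() > error for _ in range(fields)):
            successes += 1
    return successes / trials

random.seed(42)  # reproducible runs
print(f"Bare hands:  {simulate_form_completion(False):.1%} complete cleanly")
print(f"With gloves: {simulate_form_completion(True):.1%} complete cleanly")
```

Even with made-up numbers, the structure of the argument is familiar to any engineer: vary the operating condition, rerun the simulation, and see where the failure rate becomes unacceptable.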
Translation Tip:
When speaking with engineers, equate design prototyping with system simulation.
Say:
“This is a behavioral simulation. We’re modeling how a person interacts with the system under real conditions—just like you’d simulate power draw, thermal expansion, or failure modes before physical rollout. We’re testing for breakdowns in comprehension, trust, and actionability.”
This shifts the perception of prototyping from something subjective or experimental to a structured method of validation—one that engineers already recognize and respect.
Human Factors = System Constraints
Engineers design around physical constraints. Service designers account for human constraints.
In engineering, variability is expected—and designed for. Systems are built to withstand fluctuating inputs, environmental changes, and partial failures. Whether it’s using buffer tanks to handle pressure spikes, redundant nodes in a control network, or flywheels to manage load shifts, the goal is the same: keep the system stable, even when things go wrong.
Service design follows the same philosophy—except the variability we design for comes from humans.
People make mistakes. They forget passwords, misread instructions, ignore alerts, or create workarounds to save time. They operate under fatigue, multitask during critical operations, or adapt to unspoken norms shaped by culture or habit. These behaviors aren’t anomalies—they’re part of the system. And just like any other variable, they need to be modeled, anticipated, and accommodated.
Rather than designing idealized workflows that assume perfect users, service designers build systems that absorb human inconsistency—without failure. A good design doesn’t break when someone skips a step. It guides them back on track. It doesn’t punish them for uncertainty—it clarifies their next move. It’s fault-tolerant by design.
Take for example an operator responding to alarms during a high-pressure incident. In theory, they should follow the standard procedure. But in practice, they may rely on shortcuts, ignore less urgent alerts, or delay documentation to focus on the task at hand. A system that doesn’t account for those human decisions is one that’s likely to fail at the exact moment it’s needed most.
In industrial settings, this kind of human variability is just as critical as a fluctuating voltage or bandwidth drop. And like those physical conditions, it can be measured, modeled, and designed around.
Human constraints include:
Cognitive load – How much a person can realistically process at once
Attention span – Especially during repetitive or prolonged tasks
Trust thresholds – Particularly with AI and automation
Time pressure – Where speed may override accuracy
Organizational culture – Informal rules, habits, and legacy behaviors
Shift patterns and fatigue – Especially in 24/7 operations
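These constraints can be engineered for. As a minimal sketch, here is what fault-tolerant service design can look like in Python: a skipped step is treated like a dropped packet, recovered rather than fatal. The workflow steps and messages are entirely hypothetical.

```python
REQUIRED_STEPS = ["inspect", "log_reading", "confirm"]  # hypothetical workflow

def handle_submission(completed_steps: list[str]) -> str:
    """Guide the user back on track instead of rejecting the whole submission."""
    missing = [step for step in REQUIRED_STEPS if step not in completed_steps]
    if not missing:
        return "Submitted. All steps recorded."
    # Fault tolerance: preserve partial work, prompt only for what is missing.
    return (f"Saved as draft. Still needed: {', '.join(missing)}. "
            "You can finish these in any order.")

print(handle_submission(["inspect", "confirm"]))  # user skipped log_reading
print(handle_submission(REQUIRED_STEPS))
```

The design choice mirrors engineering redundancy: preserve state, degrade gracefully, and ask only for the missing input.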
Translation Tip:
Frame human behavior as a system variable—one that adds entropy just like environmental or mechanical stress.
Say:
“We’re designing for human variability the same way you’d design for noise in a signal or load variation in a circuit. This isn’t about assuming the user will always follow the process—it’s about making sure the system still performs when they don’t. It’s fault-tolerant service design.”
This helps technical teams see user-centered design not as soft or subjective, but as essential to operational stability—especially in systems where humans are still the last mile of control, response, and recovery.
Making Design Actionable for Engineers
Trust between design and engineering doesn’t come from flashy prototypes or emotional storytelling. It comes from showing that design decisions are grounded in system performance, operational impact, and delivery feasibility. When design behaves like a disciplined contributor to system integrity—not an aesthetic overlay—it earns a seat at the table.
Here’s how to build that trust:
1. Link design decisions to performance metrics.
Engineers live and breathe measurable outcomes—uptime, error rates, MTBF, throughput, cycle times, and safety margins. If a design improvement can’t be traced to a concrete metric, it risks being dismissed as non-essential.
Instead of saying, “this new UI is cleaner,” say, “this redesign reduces the steps required to complete a task by 40%, which decreases operator handoff errors and improves average task time.”
If a service design change improves onboarding, link it to reduced training hours or faster ramp-up time. If a redesign clarifies system status, show how it reduces misinterpretations during shift handovers.
Goal: Translate experience improvements into system outcomes.
2. Map user pain directly to operational cost.
Every inefficient workflow or confusing interface has a hidden cost—whether it's rework, increased support tickets, delays, or compliance failures.
Engineers are used to root-cause analysis. So when presenting design insights, treat user pain points like failure modes:
“This approval flow has a 36% drop-off rate, leading to backlog accumulation.”
“Operators routinely bypass this screen due to poor alert visibility, which causes missed maintenance events.”
“This step adds 90 seconds per task. Across 10,000 tasks per day, that’s 250 labor hours a day, or more than 5,000 hours a month.”
Goal: Make the business case for design by quantifying inefficiency like a performance drag.
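A quick sanity check on that last figure, as a sketch in Python (the task volume and working-day count are assumptions; only the arithmetic is fixed):

```python
seconds_per_task = 90
tasks_per_day = 10_000
working_days_per_month = 22  # assumption; adjust to your operation

hours_per_day = seconds_per_task * tasks_per_day / 3600
hours_per_month = hours_per_day * working_days_per_month
print(f"{hours_per_day:.0f} labor hours/day, {hours_per_month:,.0f} hours/month")
# -> 250 labor hours/day, 5,500 hours/month
```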
3. Treat design artifacts like engineering specs.
Design documentation can sometimes come across as too visual, informal, or open-ended. To engineers, that can signal imprecision.
Instead, elevate journey maps, blueprints, and prototypes to the level of technical artifacts. Include annotations, constraints, and expected system responses. Show failure points, latency risks, system dependencies, and conditional behaviors.
In short, treat these documents like specs—because they are. They're specs for how the user should interact with the system under real-world conditions.
Goal: Bring rigor and traceability to design documentation to match engineering standards.
4. Include engineers early—and frame workshops as simulations.
Engineers have some of the best user insights because they spend time troubleshooting what went wrong. Don’t just “hand over” a journey map—invite them to help build it.
But be careful how you frame it. Many engineers resist “brainstorming” or “ideation sessions” that feel abstract or unstructured. Instead, frame design sessions as simulations or workflow reviews:
“Let’s walk through the current-state process and stress-test it.”
“Where would failure occur under high load or time pressure?”
“Does this handoff point match what actually happens in the field?”
Goal: Position design workshops as structured problem-solving—not creative playtime.
5. Respect build logic and implementation constraints.
Design is only valuable if it can be built. That means aligning with system architecture, backend logic, integration limits, and safety/compliance requirements.
Avoid dropping fully baked solutions late in the process. Instead, bring rough ideas early—sketches, lo-fi flows, or modular options—and collaborate on feasibility.
Ask:
“What would break if we did this?”
“Is this within latency or memory constraints?”
“Can we align this interaction with your control logic?”
When design respects technical reality, engineers become allies—not blockers.
Goal: Treat engineering not as an execution team, but as a design partner.
Bottom Line:
To earn engineering trust, service design must operate with the same discipline, precision, and performance mindset that engineers apply to system architecture. The more design behaves like a system layer—not a cosmetic wrapper—the more it’s embraced as essential infrastructure.
Conclusion: Design Is an Engineering Discipline
At its core, service design isn’t about aesthetics—it’s about how systems behave when people interact with them.
Too often, design is mistaken for surface polish—colors, icons, and layouts. But in high-stakes environments like energy, manufacturing, or infrastructure, the true role of service design is far more fundamental. It’s not about how a system looks—it’s about how it performs when real people try to use it under real conditions.
Service design is the layer where human intent meets system capability. It’s about mapping out how decisions are made, how handoffs occur, where delays happen, and what happens when things go wrong. It's the operating logic for human interaction within technical environments.
In the industrial world, the challenge isn’t whether digital tools work. It’s whether they get used.
A system with cutting-edge AI, perfect API integrations, and flawless technical architecture is still a failure if people avoid it, misunderstand it, or bypass it. Adoption—not availability—is the bottleneck. And adoption is never just a training issue. It’s a design issue.
Whether it’s a technician ignoring predictive maintenance alerts, a control room operator reverting to paper logs, or a field engineer distrusting sensor data, these are not user failures. They are symptoms of design gaps. Systems that don’t account for human behavior, context, and trust will not succeed—no matter how powerful the underlying technology.
This is where service design brings critical value. It ensures that tools are not only technically sound, but behaviorally viable. That they match user workflows, support mental models, and account for cognitive and cultural constraints. In short: that they fit into the real world.
When designers and engineers speak a shared systems language, design stops being a distraction—and becomes a differentiator.
Design doesn’t need to compete with engineering—it needs to integrate with it. When service design is framed in systems thinking—when it maps user journeys like control flows, treats behavior like a variable, and builds resilience into human-machine interactions—it gains credibility, traction, and impact.
This shared language transforms the role of design from cosmetic to strategic. It becomes a tool for risk reduction, performance improvement, and sustainable innovation.
The future belongs to teams that can co-design tools that work in the real world.
As complexity rises—through AI, automation, and digital twins—the weakest point in any system will be the interface between human and machine. That’s not just a UX problem. It’s a system design problem. And solving it will require teams that bring both technical depth and human insight to the table.
The most successful organizations will be those where designers and engineers collaborate from the start. Where problems are explored holistically, constraints are surfaced early, and ideas are pressure-tested across both technical feasibility and human usability.
That collaboration starts with:
Mutual respect for each other’s discipline
System fluency across human and technical variables
One shared goal: building resilient, usable, and intelligent systems
Build it better. Together.
Because when tools fit the people who use them, performance isn’t just possible—it’s inevitable.