
From Sales Brochure to Superpower: The 25-Year Evolution of the Configurator

  • Writer: Richard Wright
  • 1 day ago
  • 7 min read

Richard Wright | Wright Thinking


There's a feature on almost every car brand's website that most people use without thinking twice about it. You pick a model, choose a colour, add some options, and watch the image update in front of you. It feels effortless. Intuitive. Almost obvious.



It is none of those things. Behind that experience sits a remarkable body of technology, creative thinking, and commercial ingenuity that has been evolving for over two decades.


I know, because I was there at the beginning — and I've been watching, and shaping, what it has become ever since.


How a Sales Brochure Became Something Else Entirely


It was the early 2000s. I was running projects at Burrows, and we were deep in the world of automotive CGI. We had built comprehensive digital twins of cars for a number of clients — complete, photorealistic three-dimensional models with every component, every finish, every variant meticulously created. It was forensic work, and it produced an asset of extraordinary richness.


One day, the logic of what we had assembled became impossible to ignore.


We had the digital twin of a car in every colour and every trim configuration. We had the data that told us which combination of components was available in which market — which colours went with which wheel options, which interior choices were permitted with which exterior finishes.


We had the existing business of producing sales brochures, which meant we were already versioning images by market and specification. We were, in other words, already doing all the constituent parts of a configurator. We just hadn't connected them yet.
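The "data that told us which combination of components was available in which market" is, at heart, a compatibility lookup. A minimal sketch of that idea, with invented market codes, colours, and option names purely for illustration:

```python
# Illustrative build-rule data: which wheels and interiors are permitted
# with which exterior colour, per market. Real build-rule data is far richer.
RULES = {
    "UK": {
        "Crimson Red": {"wheels": {"18-inch Alloy", "19-inch Sport"},
                        "interiors": {"Black Leather"}},
        "Arctic White": {"wheels": {"17-inch Steel", "18-inch Alloy"},
                         "interiors": {"Black Leather", "Grey Cloth"}},
    },
}

def is_valid_build(market, colour, wheel, interior):
    """Return True if the chosen combination is offered in that market."""
    colour_rules = RULES.get(market, {}).get(colour)
    if colour_rules is None:
        return False
    return wheel in colour_rules["wheels"] and interior in colour_rules["interiors"]

print(is_valid_build("UK", "Crimson Red", "19-inch Sport", "Black Leather"))  # True
print(is_valid_build("UK", "Crimson Red", "17-inch Steel", "Black Leather"))  # False
```

Connect a check like this to the right image assets and you have, in embryo, a configurator.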


When we did, our first configurator was born.


By today's standards it was simple. But it was real, it worked, and it pointed somewhere important: if you have a complete digital twin and the right data, you can give someone a meaningful experience of a product that doesn't yet physically exist in front of them.


That idea has turned out to be one of the most commercially significant in the history of CGI.


Beyond the Car: Every Configurable Product


Automotive remains the largest and most sophisticated market for configurator technology — and for good reason. A modern car can be specified in tens of thousands of combinations. Managing that complexity visually, and making the customer feel confident and excited about their choices, is a genuine commercial challenge.


But the logic applies wherever a product is meaningfully configurable. I've seen it work across an enormous range of categories.

Kitchens and bathrooms, where customers need to see how their choices of cabinet, worktop, and tap will actually look together in their space. Luxury private jets, where buyers are spending tens of millions and want to see every finish, every fabric, every detail of an interior that will be built to their specification. Watches and jewellery, where the combination of case material, dial colour, strap, and stone can run into the thousands.


White goods, phones, computers, furniture — almost any product with meaningful optionality benefits from a configurator, because the alternative — asking a customer to imagine it — is a commercially inferior experience.



The underlying principle is constant: replace imagination with certainty. When a customer can see exactly what they're buying, in a way that is accurate, beautiful, and responsive to their choices, they buy with more confidence.


And confidence, in high-consideration purchases, is the difference between a sale and an abandoned basket.


The Technology Evolves — Slowly, Then Quickly


The earliest configurators were, by modern standards, extremely basic. A handful of static 2D images — three or four camera angles on the product, switching between them as the user made selections. The images were pre-rendered, the transitions were abrupt, and the experience was closer to a clever image gallery than anything you'd call interactive. But it was a start. It was demonstrably better than a static brochure, and clients could see the potential even if the technology was still finding its feet.


The next step was an improvement in scope rather than fundamental approach. Instead of three or four static angles, you might have eighteen or thirty-six — enough frames to simulate a slow camera orbit around the product. The user could rotate the car, or the kitchen, or the watch, and the experience felt more spatial, more real. It was still pre-rendered, still a series of static images stitched together, but the illusion was more convincing and the commercial value was clearly there.
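The "spin" trick above amounts to mapping a drag angle onto the nearest of N pre-rendered frames. A sketch of that mapping, with the frame count and file naming invented for illustration:

```python
# Simulating a camera orbit with pre-rendered stills: pick the frame
# closest to the requested view angle. 36 frames = one per 10 degrees.
FRAME_COUNT = 36

def frame_for_angle(angle_degrees):
    """Return the filename of the pre-rendered frame nearest this angle."""
    step = 360 / FRAME_COUNT
    index = round(angle_degrees % 360 / step) % FRAME_COUNT
    return f"car_red_sport_{index:02d}.jpg"

print(frame_for_angle(0))    # car_red_sport_00.jpg
print(frame_for_angle(123))  # car_red_sport_12.jpg
```

Every selectable option combination needed its own full set of frames, which is why the approach scaled in storage and render time rather than in interactivity.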


The step change — the moment when configurators became something qualitatively different — came with real-time rendering.


Real-Time Changes Everything (Almost)


When game engine technology, and Unreal Engine in particular, matured to the point where it could deliver photorealistic output in real time, the configurator experience was transformed.


No longer a series of pre-rendered frames serving the illusion of interactivity — the product was now genuinely alive. The camera could move freely. Lighting could shift. Every combination of options could be explored from any angle, instantly, without waiting for a render to complete.


The UX improvement was dramatic. For the first time, a customer could genuinely inhabit the experience of a product they were configuring.




They could walk around a car, lean into a kitchen, examine the clasp of a watch, all in real time, all in response to their choices.




There was, however, a significant complication: cost.


Real-time rendering at this quality level doesn't happen on the customer's laptop. It happens on high-powered servers in the cloud, with the output streamed to the user's screen. That means a per-user streaming cost that, at scale, is substantial. For a brand running a configurator that serves hundreds of thousands of users, the infrastructure bill is a serious commercial consideration — and for some, a barrier that changes the calculus of whether the investment is justified.
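A back-of-envelope calculation shows why that per-user cost bites at scale. Every figure here is an assumption for illustration — GPU pricing, session length, and volumes vary enormously in practice:

```python
# Rough streaming-cost sketch. All numbers are illustrative assumptions,
# not real cloud pricing or real traffic figures.
gpu_hour_cost = 1.50              # assumed cost of one cloud GPU per hour
sessions_per_month = 200_000      # assumed configurator sessions
avg_session_minutes = 8           # assumed average session length
sessions_per_gpu = 1              # pixel streaming often dedicates a GPU per user

gpu_hours = sessions_per_month * avg_session_minutes / 60 / sessions_per_gpu
monthly_cost = gpu_hours * gpu_hour_cost
print(f"~{gpu_hours:,.0f} GPU-hours, ~${monthly_cost:,.0f}/month")
```

Even with generous assumptions, the bill scales linearly with traffic — which is exactly why a marketing success can become an infrastructure problem.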


This cost challenge has driven significant innovation in how streaming experiences are delivered and managed. The technology has improved, the economics have shifted, and smart platforms have emerged to handle the complexity of serving real-time 3D at global scale reliably.


But the tension between experience quality and infrastructure cost remains, and any honest account of where configurators are today must acknowledge it.


The Second Revolution: From Consumer Tool to Creative Engine


For most of their history, configurators were designed with one user in mind: the end consumer. Someone choosing a car, specifying a kitchen, personalising a watch. The tool existed to help that person make their decision, visualise their choices, and arrive at a purchase with confidence.


That is changing — and the change is, I think, the most interesting development in this space since real-time arrived.


The same digital twin that powers a consumer configurator can be turned to face inward — towards the brand's own marketing and creative teams — and used as something rather different: a precision tool for generating personalised visual content at scale.


Think about the problem a global automotive brand faces when launching a new model. They need imagery. Not one set of imagery — hundreds of sets. Different colours for different markets. Different specifications for different customer segments. Different hero angles for different channels.

A social post for one audience, a dealer brochure for another, a performance marketing asset for a third. In the traditional world, each of these required either a physical shoot or a bespoke CGI render. Both are expensive. Both take time. And neither scales elegantly when the number of required combinations runs into the thousands.


A configurator-derived content tool solves this. The digital twin — photorealistic, accurate to the specification, built in Unreal Engine — becomes the source of unlimited visual output. Brand managers work with the hero angles they know and trust.


A marketing executive can position the product from any angle, at any time of day, in any context. The output is a render, not a streamed interactive experience, which makes it dramatically more cost-efficient than a consumer-facing real-time tool.


And because every image comes from the same digital twin, consistency is guaranteed across every variant, every market, every channel.


The Next Frontier: AI Environments and Infinite Context


The final step in this evolution is the one that excites me most.


In the configurator model I've just described, the product is a digital twin — completely accurate, completely controlled. But the environment it sits in — the road, the room, the context — has traditionally been either a physical location shot on camera or a CGI environment built by hand. Both have costs and constraints.


What's emerging now is a different approach: the product from the digital twin, combined with environments that are generated by AI. The product remains 100% accurate — the same photorealistic model, the same verified geometry, the same controlled lighting response.



But the world it inhabits can be created from a prompt, infinitely varied, infinitely bespoke, and produced in a fraction of the time of either a location shoot or a hand-built CGI scene.


A car on a mountain road at dusk. The same car on a city street in the rain. A perfume bottle in multiple scenarios. A kitchen in a Scandi apartment. In a Georgian townhouse. The same watch on aged leather, on brushed concrete, on silk. Each image the product of a digital twin and an AI environment — accurate and original in equal measure.


This is where the technology is going. It compresses the cost and time of producing vast volumes of contextualised, personalised, channel-specific content to a point that would have seemed implausible five years ago.


And it does so without compromising the visual quality of the product itself — which, as I've written before, is the floor that cannot move.


What Twenty-Five Years Teaches You


When I look back at that first configurator we built at Burrows — born from the accidental convergence of a digital twin, a data feed, and a brochure business — what strikes me is not how primitive it was. It's how clearly it pointed to everything that has followed.


The logic was always the same: if you can make a product real in the digital world, you can use that reality to do things that the physical world alone cannot support. You can show it to a customer before it exists. You can personalise it for a thousand different audiences simultaneously.


You can generate content at a scale and speed that no traditional production model can match.


The tools have changed beyond recognition. The logic hasn't moved an inch.


By Richard Wright


Richard Wright is a studio leader in digital twin, CGI, and real-time immersive digital experience, with over two decades of work across automotive, aerospace, consumer electronics, FMCG, property development and F&B. He drives strategy and delivery across digital twins, CGI content creation, real-time visualisation and immersive experiences. Connect with him on LinkedIn.
 
 
 
