Generative adversarial networks are AI darlings right now – and they have the potential to transform contact centres.
The generative adversarial network (GAN) is a total bad-ass in the world of machine learning. If you’ve ever seen those neat doodle toys that turn sketches into, say, sorta-photorealistic cats, you’ve seen a GAN at work.
But the GAN could upend so many fields that listing them would be almost pointless. The playfulness of its most public-facing applications belies an exceptionally versatile method for solving some of deep learning’s most tenacious problems.
Before we get into how they could benefit contact centres, let’s take a brief (and extremely simplified) look under the hood.
To understand how a GAN functions, you need to know that a deep-learning algorithm – like a pet – is only as good as its training. Which is to say, it’ll produce utter nonsense if you don’t tell it what nonsense looks like. So if you want a neural net to be able to tell you when a picture has a cat in it, you need to show it a lot of cat photos, then set it loose and tell it when it screws up. Loosely speaking, at least.
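Here’s a toy sketch of that feedback loop – our own made-up stand-in, with a simple perceptron and six hand-picked 2-D points doing the job of a neural net and its cat photos:

```python
# Toy illustration of supervised training: show the model labelled
# examples, and "tell it when it screws up" by nudging its weights.
# (A perceptron on made-up 2-D points stands in for a neural net on cat photos.)

# Hand-made training set: label +1 if the point sits above the line x + y = 0.
data = [((2.0, 1.0), 1), ((1.0, 2.0), 1), ((0.5, 1.5), 1),
        ((-1.0, -2.0), -1), ((-2.0, -1.0), -1), ((-1.5, -0.5), -1)]

w = [0.0, 0.0]  # weights start out knowing nothing
b = 0.0

for epoch in range(20):
    for (x0, x1), label in data:
        prediction = 1 if w[0] * x0 + w[1] * x1 + b > 0 else -1
        if prediction != label:          # the model screwed up...
            w[0] += label * x0           # ...so we correct it
            w[1] += label * x1
            b += label

correct = sum(1 for (x0, x1), label in data
              if (1 if w[0] * x0 + w[1] * x1 + b > 0 else -1) == label)
print(f"{correct}/{len(data)} training points classified correctly")
```

The pain point the article goes on to describe lives in that `if prediction != label` line: somebody – traditionally a human, via labelled data – has to supply the verdict on every mistake.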
It’s a real pain in the gumbo to manually train a deep-learning algorithm, as you might imagine. It also involves the tricksy issue of having to deal with the human element. So back in 2014 one Ian Goodfellow and a bunch of other extremely clever people at the University of Montreal figured out how to get a machine to train itself.
At its core, it’s really simple: you pair up two little computer brains and basically turn them into siblings who hate one another. With image training, for example, one side (the generator) invents fake images, while the other side (the discriminator) does its best to detect those fake images.
Here’s the trick: every time one sibling scores a hit, the other gets better at its own job. It’s basically like having a supercritical teacher looking over your shoulder while you work, only somehow in a way that isn’t completely off-putting.
With time – and, crucially, not as much need for oversight – the inventor becomes really good at making stuff up, and the detector becomes really good at spotting pork pies.
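The sibling rivalry can be sketched in miniature. This is an assumption-laden 1-D stand-in of our own devising, not a production GAN: real data are numbers clustered around 3.0, the generator forges numbers, the discriminator tries to tell real from forged, and each one’s mistakes sharpen the other.

```python
import math
import random

random.seed(0)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# Generator: g(z) = w*z + b, fed with noise z. Starts forging around 0.
w, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(u*x + v), its guess that x is real.
u, v = 0.1, 0.0

lr = 0.02
for step in range(3000):
    real = 3.0 + 0.5 * random.gauss(0, 1)    # one genuine sample
    z = random.gauss(0, 1)
    fake = w * z + b                          # one forgery

    # Discriminator update: get better at saying "real" to genuine
    # samples and "fake" to forgeries (gradient descent on log-loss).
    d_real, d_fake = sigmoid(u * real + v), sigmoid(u * fake + v)
    u += lr * ((1 - d_real) * real - d_fake * fake)
    v += lr * ((1 - d_real) - d_fake)

    # Generator update: nudge w and b so the forgery fools the
    # discriminator (non-saturating loss: maximise log D(fake)).
    d_fake = sigmoid(u * fake + v)
    grad = (1 - d_fake) * u
    w += lr * grad * z
    b += lr * grad

fakes = [w * random.gauss(0, 1) + b for _ in range(500)]
print("forged samples now centred near", sum(fakes) / len(fakes))
```

Notice that no human ever labels a forgery: the discriminator’s verdict is the training signal, which is exactly the self-supervision trick Goodfellow’s paper introduced.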
Now images get the lion’s share of the attention pie with GANs, but the little beasties can be turned with exceptional effect on any number of things. They can be used to make music, produce apparently meaningful sentences, invent recipes, play video games – the works.
So what does any of this mean for contact centres? We’re so glad you asked. The answer is: so, so, so much.
Imagine a system that co-listens to a call and identifies potentially fraudulent behaviour, or one that watches a video-chat stream for spikes in temper or markers of readiness to buy. Generative adversarial networks have the capacity to augment human attention tremendously, offering both cognitive aid to overburdened teams and training wheels to brand new recruits.
If any of this sounds far-fetched, consider that GANs have shown promise in identifying markers of autism from a child’s speech.
But wait – that’s not all. There is also tremendous promise in the field of generative compression. Which is to say, it turns out that getting an algorithm to recreate an image (for example) can be far more space-efficient than storing a heavily compressed version of the image itself. And the results are better. One research paper shows an image compressed by a factor of 150 – and it’s far sharper than, say, a JPEG at 30-fold compression.
This is a big deal. Transmitting voice and video across long distances is still prone to horrendous glitches. But with generative compression, instead of compressing a voice we could simply send the rules for generating it on the other side.
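The idea can be made concrete with a deliberately crude analogy of our own (not the actual GAN-based technique): if both ends of the wire share the same generator, the sender only needs to transmit the recipe – here, a seed and a length – and the receiver regenerates the full signal.

```python
# Toy illustration of "send the rules, not the data": both ends share
# the same generator, so the sender transmits only a tiny recipe and
# the receiver rebuilds the signal bit-for-bit. (A crude stand-in for
# generative compression, not the real thing.)
import random

def generate_signal(seed, length):
    """The shared 'generator' both sender and receiver possess."""
    rng = random.Random(seed)
    return [rng.gauss(0, 1) for _ in range(length)]

# Sender side: a 10,000-sample signal...
original = generate_signal(seed=42, length=10_000)
# ...but all that crosses the wire is this:
payload = (42, 10_000)

# Receiver side: regenerate the signal from the recipe.
reconstructed = generate_signal(*payload)

print("identical:", reconstructed == original,
      "| sent", len(payload), "numbers instead of", len(original))
```

Real generative compression sends a learned latent code rather than a literal seed, and reconstruction is approximate rather than exact – but the economics are the same: the recipe is vastly smaller than the dish.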
Perhaps the best bit? Because next-gen GANs can stabilise themselves with far less supervision, it’s more than feasible that you could train your system on your own data, tailoring it to your circumstances far more closely than any out-the-box system could hope to achieve. Talk about cutting down time to full productivity.
We’re mad excited, to tell the truth.