Intelligent Investor

AI: enhancing human interaction

Zia Chishti is the CEO of the US company Afiniti — an artificial intelligence business Zia started in 2006, designed to enhance the performance of call centres by matching customers with call centre operators. Alan Kohler spoke to Zia to find out more about the company and to hear his thoughts on the world of artificial intelligence.
By · 16 Oct 2018

Zia Chishti is the CEO of the US company Afiniti — Zia's third billion dollar start up.

Afiniti is an artificial intelligence business Zia started in 2006, designed to enhance the performance of call centres by matching customers with call centre operators. 

The AI inside Afiniti determines the best pairing of customers with call centre operators, and it generally improves the performance of the call centres by 5% to 7%.

They operate in Australia and have hired former MP Wyatt Roy (who I spoke to in May) as their Australian Managing Director.

Here's Zia Chishti, the CEO and founder of Afiniti to explain in more detail how their algorithm works and his thoughts on the world of artificial intelligence.


Zia, you started your business, Afiniti, I think it was called something else then but you started that in 2006.  It’s interesting that the artificial intelligence you’re going with is something that enhances human interaction rather than replaces it.  Was that intentional, is that because you think that that’s what artificial intelligence should do, to enhance the interaction of human beings rather than replace them?

Yeah, it’s a great question because it raises a bunch of philosophical issues and technical issues.  Let me try to dissect some of them but I’ll fit ourselves into that slicing and dicing.  The first insight is that there really are next to no AI applications today that actually reduce the need for humans.  It’s a very common trope to believe that there are these thousands of AI applications that are taking away jobs as we speak, that’s really not the case.  There are a few such cases maybe but the impact of them is really quite small.

I thought we were getting chatbots and even voice calls can be replacing human beings, I thought that’s going on now.

Yeah, you might want to believe that given the hype that you hear but have you ever actually interacted with a human sounding but ultimately machine created voice?  Not really, right?

I haven’t, no.

If you’ve done the voice IVR that’s one of the most annoying things you’ll do where it’s “thank you for calling Telstra”, I just made that up but whatever the company name is, “would you like sales or service?” and you say “sales” and then it says “did you say tomato?” and you go “no, sales” and it’s like “you want to go to Florida?”.  The actual quality of live human voice recognition is terrible at best and there’s no danger of human replacement any time soon.  There is a very strong argument that the whole concept of an IVR is bad and that in fact it increases the amount of human labour rather than reduces it.  That’s sort of a philosophical position though, it wouldn’t be broadly held.  Anyway, there’s no fear about robot machines taking over human customer service jobs in the near run, what will happen and what is happening en masse is the general migration of those jobs to the web.  Historically if you called up Westpac and said “hey, what was my balance?” they’d say “thank you, Alan, it’s $500” or whatever.  Now you just say well why do I need to call Westpac, I’ve got an app on my phone, my handy Westpac app, I just press a button and all the information is right there.

This isn’t AI taking away jobs this is a simple migration of low-end jobs to the web.  I would not put AI in the crosshairs for that, that’s just the internet is pretty darn good for doing simple things and it’s easier and less of a hassle to just press a button than get put in a queue waiting for an agent with some number that you have to fumble for.  That’s point one, is that this whole notion of jobs getting taken away by AI is really more fiction than fact and if you’re on the inside of the field you don’t really lose any sleep over that changing any time soon.  Now on this chatbot idea there’s about 100 different companies doing chatbots now, every company is.  IBM has a chatbot.  This is now commonplace as a product that they’re trying to sell to companies but when was the last time you interacted with a chatbot?  I think the actual percentage of enterprise customer care interactions that are being handled by chatbots is some number significantly less than 1% and that’s because chatbots aren’t very good.

In highly constrained environments they work some of the time but this is not something that’s going to take away…

Zia, you sound like newspaper publishers that I used to work for 15 years ago who were saying then this is never going to happen, the internet is never going to work for newspapers and for classified advertising because it’s not good enough.  Surely we’re just at the beginning of this, aren’t we?

With one possible exception, that older guys like me have seen exactly this movie before.  You may recall that in the mid to late Seventies and early to mid-Eighties there was a massive interest in AI.  You had chatbots back then except they were called Eliza, I don’t know if you remember that or not.  You had Prolog, which was a whole language that they created for AI, and Lisp and Scheme were going to be taking over the world as mechanisms for mimicking human analytical capabilities, and neural networks, actually believe it or not, were invented in the Sixties and started getting traction in the Seventies.  There was a massive boom back then with this exact same sentiment, that AI is going to turn the world around and we won’t need humans.  In fact, actually, Alan if I may ask and be so bold what year were you born in?

I was born in 1952, I’m 66.

You’re in my camp, so you’ve seen all this.  Did you see Blade Runner back in 1984?

I sure did.

Remember, you had Harrison Ford chasing that pretty replicant, Rachel, and then they had a flying car and that was predicted for 30 years forward all the way to 2014.

I know.  Science fiction shows us that things always take longer than you think, particularly with those sort of predictions but eventually stuff does actually happen.  I should say also I have interacted with voice recognition and my experience of it is not what you’ve said.  I don’t want to be some sort of big promoter or advocate of AI but my interactions with voice recognition on the phone with telecommunications companies has generally been okay, you say whatever it is and they kind of get you to where you need to go.  All these stores now, they’re selling these Google Home things and Alexa.  I haven’t bought one yet but you seem to be able to walk into your house and tell the thing to play Joni Mitchell at you or whatever.

Sure.  Here is the difference.  The difference is that hardware has gotten much better.  If you look back to the AI algorithms that guys like me were running in the late Eighties and Nineties the fundamental AI algorithms haven’t changed.  We were running neural networks back then, we are running neural networks right now they’ve just been rebranded, now they’re called deep learning.  In the end it’s just the same thing, it’s backprop multi-layer neural network models.  The confusing bit is that hardware has grown in performance at Moore’s Law rates, approximately doubling every two years.  What’s happened in the intervening 40 years or 30 years is that the processing power available to you at your fingertips is many orders of magnitude higher than where it used to be.  If you were trying to translate voice into text in real time good luck with that 30 years ago but you could do the same translation more or less just delayed by a factor of 100,000 back then too.
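
As a rough back-of-the-envelope check on that compounding (my own arithmetic, not Zia’s figures), doubling performance every two years adds up very quickly over a few decades:

```python
# Rough Moore's Law compounding: doubling every two years for n years
# gives a speed-up of 2 ** (n / 2). Illustrative arithmetic only.
for years in (20, 30, 34, 40):
    speedup = 2 ** (years / 2)
    print(f"{years} years -> roughly {speedup:,.0f}x faster")
# 30 years -> ~32,768x; 34 years -> ~131,072x, which is the ballpark of the
# "factor of 100,000" delay Zia describes for real-time speech a generation ago.
```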

No real change in algorithms but a remarkable change in hardware performance, that is what gives the illusion of progress.  When you talk into your Alexa thing and you say play Joni Mitchell and it does you go wow, this is cool.  The answer is that in the Eighties you could have done the same thing it just would have taken a long time for it to figure out what play Joni Mitchell meant or what you wanted from it.  Now let’s parse that a little bit further.  The fundamental breakthrough that everybody has been waiting for in AI, or breakthroughs I should say in plural, are around semantics and qualia.  What do those two words mean?  Semantics is meaning itself, right, so when you say play Joni Mitchell actually Alexa has no idea what the word play means, or Joni Mitchell, or any of that.  It just recognises those words, does not imbue those words with any meaning, does a table look up with those words and says okay, the mechanical action that I want to do as a result of this table look up is play the following song.

Computers have no idea what the term play means, they don’t know what song means, they just know that they need to do some pre-programmed action if those words come up in some calculated correspondence in the speech detection that they’ve just conducted.  The next big step in AI is semantics where machines understand what words and sentences and paragraphs and thoughts actually mean.  Sadly there is nobody in the field of AI that has any depth of knowledge who would say that we’re a step closer to semantic understanding than where we were 25 or 30 years ago.  The algorithms haven’t really advanced in that regard and that’s the next big step if you actually want machines to take over jobs.  The other aspect of this, and now we’re talking somewhere closer to the intersection of philosophy and religion, is the subject of qualia.  It’s kind of a really cool word but what it means is the human perception of things.  You know what a word’s meaning is if you say the word sunlight, you know what sunlight is, but the perception of sunlight, the impact of sunlight on your consciousness and then that involves understanding what consciousness itself is.  Absolutely zero idea in the industry, nobody has any clue what this thing called consciousness is and how consciousness perceives actions around it, smell, sound, light, whatever.

Until those two steps are made what you basically have is algorithmic systems employing technologies from roughly 30 years ago at vastly greater speeds so you get really quite nice things like “hey, Siri, tell me the time” or “please, book me a room”.  The advance in the underlying field is not that great.
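
To make the “table look up” point concrete, here is a minimal sketch of the kind of keyword-to-action mapping Zia is describing. It is a toy illustration, not how Alexa or Siri is actually implemented: recognised words trigger pre-programmed actions, and nothing in the program understands what “play” means.

```python
# Toy keyword-to-action table: the "assistant" matches recognised words to
# pre-programmed actions without attaching any meaning to them.
ACTIONS = {
    ("play",): lambda args: f"Playing {' '.join(args)}",
    ("what", "time"): lambda args: "It is 10:42",
}

def handle(utterance: str) -> str:
    words = utterance.lower().split()
    for keywords, action in ACTIONS.items():
        if all(k in words for k in keywords):
            # Whatever is not a trigger word is passed along as an argument.
            args = [w for w in words if w not in keywords]
            return action(args)
    return "Sorry, I didn't catch that."

print(handle("Play Joni Mitchell"))   # Playing joni mitchell
print(handle("What time is it"))      # It is 10:42
```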

It’s interesting because people like Elon Musk, even Stephen Hawking who’s now passed away, and Ray Kurzweil and so on, they have been warning and really kind of issuing dire warnings about AI and what it might do.  Do you think they’re just wrong?

With due respect to Elon Musk, when you stop smoking weed you might have a different view, and it’s a much more interesting news story to say the robots are coming than to say that the robots aren’t coming.  If you want your moment of fame then make up a scary story and put it far enough out in the future (by the way, all of these stories are always 20 to 30 years out in the future) that you can’t actually positively verify it.  You’ll sell a lot of books and you’ll sell a lot more if you’re already famous.  It’s just there is this whole industry of future mongers and we’ve seen this now for 30 years running in which the future is always 20 to 30 years away and in this dystopic future robots will take over because we’ve allowed AI to grow in an unbridled way.  There is just absolutely no credence to that whatsoever.

Do you think we’ll have autonomous cars?

Sure, you’ll get autonomous cars but that doesn’t create a sentient semantic processing system.  It’s just a set of algorithms.  What are autonomous cars?  You have to do visual recognition of fields, you’ve just got to figure out what a car is, what a person is, what a road is, what a pylon is, what a street crossing looks like, red, green and yellow.  There’s a visual processing element to it and then there is a set of algorithms coupled to a mapping system as to what to do with it, right.  You’ve got to go from point A to point B, first figure out where you are, what it all looks like and then take some algorithmically defined actions as a result.  I would not conflate autonomous cars with the movie version of AI by any stretch, no correlation there, and not even really with the field of AI as we currently understand it within the industry.

AI as near as anybody can tell at this point in time is just pattern recognition.  You have a lot of data, AI helps you to find a consistent pattern or signal within the data.  You have a medical image, run it through an AI system and find cancer.  You have a bunch of records of seismic activity or renewed seismic activity and you run it through an AI, a set of pattern recognition engines, and you find oil or other hydrocarbons.  It is the detection of patterns within data; it is hard for humans to do and computers can do it a lot faster, but it is mimicking human capacities for pattern recognition.
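
As a toy illustration of “finding a consistent pattern or signal within the data” (my own example; the readings and the threshold are invented and have nothing to do with any real medical or seismic system), the simplest possible version looks like this:

```python
# Toy pattern recognition: learn a threshold that separates two classes of
# measurements, then label new readings. No understanding, just a regularity.
normal = [0.8, 1.1, 0.9, 1.0, 1.2]       # e.g. intensities of healthy tissue
suspicious = [2.9, 3.4, 3.1, 2.7, 3.3]   # e.g. intensities of flagged regions

def mean(xs):
    return sum(xs) / len(xs)

# The "model" is just the midpoint between the two class averages.
threshold = (mean(normal) + mean(suspicious)) / 2   # ~2.04

def classify(value: float) -> str:
    return "suspicious" if value > threshold else "normal"

print(classify(0.95))   # normal
print(classify(3.0))    # suspicious
```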

Your insight with Afiniti was to do pattern recognition to match customers with call centre operatives, which was, if I may say, a brilliant idea.  When you sat down and wrote that first code in 2006, how many lines of code did you write?  How has it developed?  I’m just interested in understanding the evolution of your software over those past 12 years.

Sure.  In 2006 I wrote the first draft on my dining room table, it’s probably fewer than 5,000 lines of code, that’s it, and it took me a couple of weeks to get it working.  Back then the patterns that we were scoping out were remarkably rudimentary, I mean it was did you call before or not and if you called before what was the probability that you were going to act in a certain way, and if you didn’t what is the probability that you were going to act in a certain way.  It was, I would say, a Lego version of what would come later but it was enough to prove a point, it was enough of a demo to say hey look, even trivial information that you can layer on into a contact centre will give you better performance.  From there to get actually commercial products that work was four years or five years of work, it really wasn’t until 2009, ‘10 or dare I say it ‘11 that we had an enterprise-ready product that would give a persistent 4%, 5% or 6% gain.
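
Zia doesn’t share the original code, but the “Lego version” he describes (did the customer call before, and what does that imply about how they will behave?) amounts to little more than a probability look-up. A sketch along those lines, with all numbers invented for illustration:

```python
# Toy version of the early signal: estimate the chance of a good outcome
# from one rudimentary fact about the caller. The rates are made up.
OUTCOME_RATES = {
    True:  0.12,   # P(sale) if the customer has called before
    False: 0.05,   # P(sale) if this is a first-time caller
}

def expected_outcome(has_called_before: bool) -> float:
    return OUTCOME_RATES[has_called_before]

print(expected_outcome(True))    # 0.12
print(expected_outcome(False))   # 0.05
```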

If you look at the code today it’s probably 30 million lines of code and so it’s in a completely different place from where we were.  I think a major source of the evolution and success of the company has been the availability of data, not the algorithms themselves.  Data as a result of internet use and other connected devices has been growing at somewhere in the order of 200% to 300% a year.  Roughly 80% of all the world’s data was created in the last two years so that is the pattern that you’re looking at in terms of the exponential rise in the availability of data, and so you get more and more power behind the algorithms because you can filter and use more stuff.  Now, of course filtering and using more stuff…

Yeah, do you mean that you’re bringing in data from Facebook and other sources as opposed to simply whether or not the person has called before and their kind of interactions with a particular client of yours?

First let me suggest that we don’t get any data from Facebook.

Okay, that’s fine, from external sources.

The general point is correct, your general point is correct, which is external data, Acxiom, Experian... all of these are US based data sources and globally based data sources that pile in psycho-demographic information that’s useful.  Yes, tap external data sources but also heavily internal data sources.  With a big client of ours, Verizon, we had to get into every single scrap of information that we could get our hands on internally to make a positive difference.  These markets are quite competitive now and to get 5% is a huge lift.  Massive data availability, internal and external, pulling that in, using it in a creative way, finding patterns within that data, behavioural patterns within that data and then finding pairings of behaviour, that’s the rub of it.
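
The core mechanic, routing each caller to the agent most likely to produce a good outcome, can be sketched in a few lines. This is an illustrative toy, not Afiniti’s algorithm; in practice the scores would come from models trained on the internal and external behavioural data Zia describes, and the matching would weigh up the whole queue rather than one caller at a time.

```python
# Toy customer-agent pairing: given model-estimated success probabilities for
# each (caller, agent) combination, send the caller to the best free agent.
# All scores are invented for illustration.
scores = {
    ("caller_A", "agent_1"): 0.21, ("caller_A", "agent_2"): 0.34,
    ("caller_B", "agent_1"): 0.42, ("caller_B", "agent_2"): 0.18,
}

def best_agent(caller, free_agents):
    return max(free_agents, key=lambda agent: scores[(caller, agent)])

print(best_agent("caller_A", ["agent_1", "agent_2"]))   # agent_2
print(best_agent("caller_B", ["agent_1", "agent_2"]))   # agent_1
```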

You’re doing it in real time, right?  When a customer calls in nobody notices the fact that they have been paired with the correct or a particular call centre person.

Less than a tenth of a second.

Yeah, that’s incredible.

That’s about the time cycle.  It’s a good question because that’s exactly a case in point for why this didn’t happen 30 years ago.  What is now less than a tenth of a second 30 years ago might have taken 20 minutes so if you called in to Telstra and we had the misfortune of deploying our technology 30 years back that would have added 20 minutes to the time it took you to reach a rep so it’s just not practical.  That’s the heart of it, it was circa 10 to 15 years ago that we were able to compress the time to do this into something that was practically useful.

Just to circle back to where we kind of started the conversation does your business depend on the fact that AI is not going to replace the human beings in the call centre?

To a degree, yes, but it’s important to define what we do a little bit more broadly, right.  From our point of view whether it’s an AI or a human the cold truth is it doesn’t much matter.  Let’s say you ask the question what is the weather?  There are many credible responses to that.  It could be the weather is 100 degrees, or it could be the weather is 38 degrees, exactly the same answer in Celsius, right?  Approximately the same answer.  Or, it could be “it’s great outside”, or it could be “time to go for a run” or it could be fill in the blank, there’s many credible responses to how is the weather if you ask that of a human being.  How you select the human being in trying to curate the appropriate response is what we do.  That exact analogy applies to machine systems.  If you asked a computer what is the weather it also has to go through a selection process to determine the behavioural context for the response.  To the extent that we’re just curating behaviour it applies in both cases. 

In the very long run we really don’t care what the underlying behavioural engine is, is it human, is it a machine, it doesn’t much matter.  That’s not the contingency on which our business rests.  Having said that, today, because there are almost no robots answering anything of use and we don’t see it happening certainly in five or probably ten years, our business is human dependent, and being human dependent it is a useful feature, as is commonplace with AI generally, that we optimise the performance of humans.  If you’re a human sitting in a call centre and our technology is sitting behind you then the probability of success of that human goes up, therefore the economic utility of the human goes up, the human is more competitive with other channels a la the internet and so the use for humans goes up, so actually we increase the utility of labour in the environments in which we serve, that’s the positive.

The more neutral position for us, I mean that’s just a happy externality, the more neutral position for us is that in 20 years if, and that’s a big if, you have robots that demonstrate human behavioural capacity then we’ll just pick which robot.  That’s the insight.

Just one final question, Zia.  Looking at your website and your teams, the executive team and your board of directors, I was struck by the fact that they’re all men, there’s no women.  That’s the sort of thing you’d expect in Australia, not America.

You’ve put your finger on a very sore subject.  First of all you’re right, I think in the board of directors and the first tier of management in the company there are no women which is kind of an embarrassment, really.

Yeah, I would have thought so.

Having said that, in the advisory board of the company there are several, that’s helpful, and the board and advisory board are of similar stripes, built on success, so we are changing that legacy.  Here is the problem: the ratio of men to women in computer engineering and AI subjects is, I’m sure this is just a made-up statistic on my part but from where I sit, something like 10 to 1.  Then the ratio of people at the top of that field is sort of infinity to zero.  What we’re dealing with is an incredibly skewed pool around the kind of talent that we need.  We’re doing our damndest to push against that underlying statistical reality but that’s just the way it is when you get to the stuff that we do.  There is a very good counter to that, which is that surely boards are not of the particular composition that you describe, and surely in your management team the CFO doesn’t need to have engineering knowledge.  But dealing with them separately, our CFO is actually a guy called Phil Davis and his undergraduate degree was in math from Brown and his graduate degree was in Computer Science from MIT with a focus in AI.  Let me tell you that his job as the CFO is vastly enhanced by the fact that he has that training behind him.

That’s sort of the management team case and then if you look at the board I think it’s a very fair call.  I think our board could certainly have more women, I don’t say that because that’s politically correct or because that’s just the times but rather because having a balance of genders makes for better discussion and output.  You have variance in inputs, you have variance in processing of those inputs and through discussion, and particularly debate in which sides don’t agree, you always come to a better ultimate conclusion, so I think we’re remiss in that category and we need to do better.

That was Zia Chishti who is the CEO and founder of Afiniti.
