[Interview] Predictive Analytics with Eric Siegel

Estimated Reading Time: 25 minutes
April 12, 2016

Eric Siegel is the founder of Predictive Analytics World and one of the top thought leaders in this rapidly growing field. A little fuzzy on the concept of predictive analytics? The title of Eric’s book makes it clearer: Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die. Michael Loban caught up with Eric to chat about the book, the conference, and emerging trends in the predictive analytics universe.

Key Learnings:

1. Cross-vertical application is a growing and highly useful area within the larger field of predictive analytics.

2. In addition to the 10+ annual Predictive Analytics World conferences he’s been busy with, Eric has also developed Predictive Analytics World events for specific industries.

3. You don’t need a PhD to understand Eric’s book. Although it’s packed with enough knowledge to satisfy a technical professional, it’s also accessible to a layperson.

4. Big data is a clumsy adolescent who, despite great promise, still needs time to grow up and gain confidence and maturity.

5. Predictive analysis will never replace human analysis, but it will relieve humans of some of the menial tasks associated with data crunching.

Interview:

ML: Obviously, you’ve been working in this space now for many years. What are you most excited about when it comes to analytics?

ES: Well, I’m excited about the growth of predictive analytics, which is continuing at full steam. One of the main growth areas is cross-vertical application. It’s such a widely applicable technology.

In any event, Predictive Analytics World, along with the trends of the industry, has been launching new vertical-focused events. So in addition to our primary original event, Predictive Analytics World for Business, which takes place five times per year, including twice in Europe, we also have annual industry events and we’re launching new ones every year. So far, we have Predictive Analytics World for Workforce, for Manufacturing, for Healthcare, for Financial Services, and for Government.

ML: Why do you think predictive analytics is so widely applicable? Why so much interest now?

ES: Because there are so many different kinds of consumer behavior that can be predicted, and so many different ways prediction can be useful, for marketing and beyond: fraud detection, financial risk, government applications, healthcare (both clinical treatments and hospital administration), manufacturing, and workforce, of course, for recruiting and retaining employees. It’s such a widely applicable concept: learning from data to predict, and finding the different mass-scale operations that could be rendered more effective by following the guidance of all those millions of individual predictions.

ML: Thank you for that. I’m very familiar with e-metrics; in fact, I actually interviewed Jim Sterne and I’ve been to that conference a number of times. So let’s talk a little bit about your book. What motivated you to write Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, and why do you think now is the right time for this?

ES: Thanks for bringing up the book. The first edition came out three years ago and we just updated it last month. I’d like to start out by pointing out that the book’s subtitle is actually an informal definition of predictive analytics. The website for the book is thepredictionbook.com.

Most books in predictive analytics are very much written for the technical practitioner. This book is quite different in that it’s meant for anybody. It’s fully accessible: it’s used as a textbook in over 35 universities, but it isn’t written like a textbook. It’s written more in the entertaining, anecdotally driven pop-science mode of a book like Freakonomics. I wanted to write a book that would reach any reader: lay readers, science enthusiasts, and anyone with even a non-vocational interest in science.

On the other hand, as a former academic, I made it a conceptually complete introduction, and in fact the last three chapters get into advanced topics that are new even to established, expert practitioners. But the main goal of the book is to spread the word and explain all aspects of the industry: the value, the way it plays out, building the predictive model, what it looks like under the hood, the core rocket science, what the data needs to look like… all that stuff in a way that’s relevant, keeps the reader’s interest, and is fully accessible to any reader.

ML:  What do you feel are the dangers of predictive analytics and how do you feel it impacts or can impact our daily lives?

ES: Well, I’ll start with the positives before I go to the dangers. From a corporate standpoint, the value proposition and improvement in return on investment are clear. From the consumer standpoint, it also has a positive net effect. Our experience in daily life and in society is dictated by how we’re treated and served by organizations: how they decide whether to contact us, whether to retain us, whether to approve us for a loan application or credit card, how we’re investigated, incarcerated, set up on a date, or medicated with a healthcare treatment. All these decisions are driven by predictive models these days.

So the millions of decisions in all the outward-facing actions that establish the treatment of individual consumers are now driven by the outcomes of predictive models, the individual predictive scores. So it helps us; it’s the antidote to information overload. Google search results are dictated by predictive models, as is the order of the newsfeed on Facebook, the order of Airbnb properties we see when we do a search, and which products are recommended: books on Amazon, movies on Netflix, music on Spotify and Pandora. That’s what I mean by information overload. More and more, we’re becoming dependent on predictive models: spam filters, the ability to get less junk mail because direct marketing is targeted by way of prediction, all these things.

As far as where there are dangers, there are two sorts. One is ethical: civil liberties may be put at risk, as with privacy invasion and these kinds of things. The other is technical: there could be a bug in the deployment of the predictive model, and it hurts the business.

Which of those two dangers were you referring to? They’re like apples and oranges. As far as civil liberty concerns, I actually devoted all of Chapter 2 of the book to that topic, because prediction is power, and as Spider-Man’s uncle said, “with great power comes great responsibility.” The fact is, certain things that can be predicted about us as consumers are very sensitive. An organization with this data and acute analytical skills can ascertain, seemingly out of thin air, sensitive things from otherwise-benign data: whether we’re pregnant (in the case of Target, the retailer); whether we’re going to commit a crime again, as law enforcement agencies predict in order to inform, for example, investigation activities, judicial sentencing, and parole decisions, so literally how long you stay in prison can be informed by the output of a predictive model of crime behavior; and whether you’re going to quit your job.

ML: Understood. What about something as simple as the Facebook feed, which you mentioned? There are many websites that list or feature different stories based on my browsing history. Let’s say they know from my past browsing that I tend to like news about the Democratic Party, or news that is more about the Republican Party, and that’s what they predict and that’s what they show to me. So the information I’m presented with, potentially, is not everything I’d be interested in. It almost puts me in a circle that says, “This person is interested in this,” and so the information I see, my worldview, is almost dictated by a model of what I was interested in, not what’s actually taking place.

ES: Well that’s a good question. Are you referring to the targeting of what advertisements you’re seeing?

ML: No, not necessarily advertisements, more in terms of content that’s being shared. You mentioned the Facebook algorithm. When I look at my feed, I’m connected to let’s say 1,000 people, and the majority of them don’t even make it to my feed for various reasons when Facebook thinks that information is not really essential to me. Or when I look at the news website and I realize that the majority of the news that I’m presented with is really just based on what I was interested in in the past, and quite often they take out information that maybe will be very interesting to me, but because in the past I didn’t pay attention to that, that’s not something they consider important to share with me. So my worldview, if you will, is almost being shaped by what an algorithm thinks I’m interested in.

ES: Yes, there is a danger, in that sense, of the algorithm being bad. The first and shortest answer to that overall dilemma is that when predictive targeting is working well, you don’t really notice. The moment it sticks out is when you get pigeonholed. People say about Netflix, “I rented a couple of movies and now I only see recommendations for related movies, but that’s not actually the category of movie I’m interested in.” And Google Ads does this all the time. The fact is, with number crunching, what you’re getting is overall better performance than non-data-driven methods that are more of a hack, more manual via segmentation, or perhaps just random. In the case of targeting paid ads or sponsored listings, of course, what’s being targeted may still appear very clumsy (you’ll see ads for something you already purchased), but what’s being optimized is whether they can get you to buy more and click more, and they generally wouldn’t be doing it if it weren’t increasing their overall bottom line. So I’d say that big data is still an adolescent in a sense: very clumsy and leaving a lot of room for improvement.

Facebook is an interesting example because they’re not pursuing a metric related to your satisfaction or some more ambitious metric. Instead, what Facebook is trying to optimize, in general, we believe, is simply how much you engage with their product, how much time you spend. They pay attention to all kinds of things, like how long you linger on a certain post or whether you clicked that you liked it, and they try to optimize that by running a lot of experiments to that end. Here’s an analogy: all of the news channels are getting a lot of eyeballs on their advertisers’ ads by showing a lot more about Donald Trump. That doesn’t necessarily mean that all the viewers are satisfied with seeing a lot of information about Donald Trump.

ML: So how can this learning account for things like context and circumstances, which often play into this and quite often cause, if you will, dissatisfactions that consumers have?

ES: In general, a predictive model, or machine learning if you want to call it that, can only act on the data you give it. It makes a prediction about whether somebody will click, buy, lie, or die, commit a crime, etc., based on attributes of the individual: the demographic and behavioral history of the individual, or summaries thereof. That’s all it’s got. It uses the training data, the historical data used to generate and optimize the model in the first place, and that same kind of data is all the model will get as input when you then deploy it and make use of it for individual predictions. So if you want more general context, factors of the economy, the weather, seasonality, there’s no qualitative difference in the core predictive modeling method; it’s simply a matter of expanding the data set.
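Eric’s point that adding context is “simply a matter of expanding the data set” can be sketched in a few lines. Everything here is hypothetical (the field names, the contextual table, and the values), but it shows the mechanics: contextual factors simply become extra columns on each customer row.

```python
from datetime import date

# Per-customer rows: the individual attributes a model normally sees.
customers = [
    {"id": 1, "age": 34, "visits_last_month": 5, "signup": date(2015, 6, 1)},
    {"id": 2, "age": 52, "visits_last_month": 0, "signup": date(2014, 2, 15)},
]

# A separate, invented table of contextual factors, keyed by (year, month).
context_by_month = {
    (2016, 4): {"unemployment_rate": 5.0, "is_holiday_season": 0},
    (2015, 12): {"unemployment_rate": 5.1, "is_holiday_season": 1},
}

def add_context(row, year, month):
    """Widen a customer row with the contextual columns for a given month."""
    widened = dict(row)  # copy so the original row is untouched
    widened.update(context_by_month[(year, month)])
    return widened

# The "expanded" data set: same rows, more columns for the model to learn from.
training_rows = [add_context(c, 2016, 4) for c in customers]
```

The modeling method itself doesn’t change; the contextual columns are just additional candidate predictors.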

ML: Interesting. So, basically, we only have issues with context until we eventually take it into account, until machine learning is exposed to that information?

ES: That’s right. You need to pull in that data source some way.

ML: Obviously, many organizations and businesses love the idea of predictive analytics. It takes a lot of menial work off their shoulders. But in this world of predictive analysis, what do you feel would be the role of an analyst? Analysts right now are quite often tasked with looking at data, doing analysis, making recommendations and building hypotheses, for example: “If this were to happen, these are the types of results we can anticipate…” Then they decide, based on the business value this can generate, what actions the organization should take, and they present that information to the stakeholders. When this type of work is being done by a machine, what should an analyst do?

ES: So, you mean a lot of the work an analyst does will be replaced because predictive analytics/machine learning allows ways to automate a lot of their tasks, such as generating a hypothesis…

ML: Correct. I’m not saying it will replace (human analysts), but I think a lot of menial work that companies are doing will certainly be removed by machine learning and predictive analytics.

ES: I agree that predictive analytics is a more efficient way to test a lot of hypotheses and that’s sort of the whole point. You throw in all the different columns of data, all the different predictor variables, factors and characteristics, and see which ones are most effective and in what combination.
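As a toy illustration of “throw in the columns and see which ones are most effective,” here is a sketch that screens two invented binary predictors by comparing the response rate among flagged customers to the overall rate, a simple lift calculation. All rows and field names are made up:

```python
# Hypothetical training rows: predictor columns alongside the known outcome.
rows = [
    {"opened_email": 1, "tenure_gt_1yr": 0, "responded": 1},
    {"opened_email": 1, "tenure_gt_1yr": 1, "responded": 1},
    {"opened_email": 0, "tenure_gt_1yr": 1, "responded": 0},
    {"opened_email": 0, "tenure_gt_1yr": 0, "responded": 0},
    {"opened_email": 1, "tenure_gt_1yr": 0, "responded": 1},
    {"opened_email": 0, "tenure_gt_1yr": 1, "responded": 0},
]

def response_rate(rows, predictor):
    """Response rate among rows where the predictor flag is set."""
    flagged = [r for r in rows if r[predictor] == 1]
    return sum(r["responded"] for r in flagged) / len(flagged)

overall = sum(r["responded"] for r in rows) / len(rows)  # 3 of 6 responded

# Lift > 1 means the flag concentrates responders; lift < 1 means it doesn't.
for predictor in ("opened_email", "tenure_gt_1yr"):
    lift = response_rate(rows, predictor) / overall
    print(predictor, round(lift, 2))  # opened_email 2.0, then tenure_gt_1yr 0.67
```

Real modeling methods evaluate combinations of variables rather than one flag at a time, but the single-variable lift is the same idea in miniature.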

Automating that side, the manual trial and error, is significant and a huge improvement, but it pertains to a relatively small arena of what humans are required to do when dealing with data. There’s never going to be a lack of need for human expertise. In fact, if anything, humans are even more necessary in order to harness these more advanced capabilities of predictive analytics.

For example, if you’re looking at using predictive analytics for the first time, one of the first things you’ll discover is that data preparation is the biggest part of the hands-on task. The core predictive modeling, the number crunching that learns from data and creates the model, is sort of the “rocket science.” That’s the fun part, the advanced scientific part. But most of your time will be spent just getting the data into the right form and format. That step is normally considered about 80% of the hands-on time, and it’s not a process that can, in any way, be fully automated, because it depends on so many specific, specialized factors of your organization. You must be prepared to work with the data you have.

In general, you have to start with “What’s my business goal, and what specific prediction is going to help?” For example, the two main marketing applications are (1) response modeling, predicting who’s going to respond when contacted, for targeted marketing; and (2) churn modeling, predicting who’s going to leave, in order to target retention offers. The second may be the harder of the two; it depends on how you measure that.

But if you decide, “This is important. We want to retain more customers. It’s cheaper than acquiring new ones and targeting these expensive retention offers effectively by predicting who’s at risk of defection…” Well, that’s not enough. You have to define the prediction goal a lot more specifically; for example: “I want to predict, among all current, tenured customers who have been around at least three months, which ones will decrease their spend by at least 85% in the next four months,” or something like that. Maybe that’s just the beginning of it.
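Eric’s example prediction goal can be written down as a labeling rule. This is a sketch of that one sentence and nothing more; the thresholds come straight from his example, while the function name and signature are invented:

```python
def label_churn_example(tenure_months, spend_before, spend_after):
    """Label one historical case against the stated goal: among customers
    tenured at least 3 months, did spend drop by at least 85% over the
    following 4-month window?"""
    if tenure_months < 3:
        return None  # out of scope: not a tenured customer
    if spend_before == 0:
        return None  # a percentage decrease is undefined
    drop = (spend_before - spend_after) / spend_before
    return drop >= 0.85

print(label_churn_example(tenure_months=12, spend_before=200.0, spend_after=20.0))   # True (90% drop)
print(label_churn_example(tenure_months=12, spend_before=200.0, spend_after=150.0))  # False (25% drop)
print(label_churn_example(tenure_months=1, spend_before=200.0, spend_after=0.0))     # None (not tenured)
```

Note how much of the rule is business judgment, the tenure cutoff, the drop threshold, the window length, rather than anything a machine could decide on its own.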

You need a well-defined prediction goal, and that prediction goal is defined not just by the data; it’s also defined by the business. It depends on your business procedures, the context, what the marketing people want to do, what’s going to serve your business, your philosophy in marketing, and what data are available. Those are just the first things you need to decide before you start to prep the data.

The data preparation involves a lot more than that, because ultimately what you’re trying to do is predict, which means time matters. In your training data, the learning data set you put together for the predictive model, you need a sense of time. You’re putting together a bunch of historical examples where you know the outcome, and you’re going to learn from those cases analytically. The outcome may be yes or no: they did or did not defect. That yes or no is something you found out later (it’s also in the past, but you found it out later), and it has to be juxtaposed in the training data set alongside the data you knew longer ago, at the time you would have liked to have made the prediction.

So the concept of time and rolling up that data into that training data form and format, where usually you’ve got one row per customer, depends on your human understanding of the meaning of what you’re trying to predict and what the requirements are on that training data set. To get it, you have to understand what your data looks like today and where it came from, its history and how it was accrued, the meaning of it and the context of your business.
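The juxtaposition Eric describes, predictors from what you knew at prediction time alongside an outcome you only found out later, can be sketched with a cutoff date. The transactions, the cutoff, and the 15% spend threshold standing in for “defected” are all invented for illustration:

```python
from datetime import date

# Raw transaction history, one event per row.
transactions = [
    {"customer": "a", "when": date(2015, 3, 10), "amount": 120.0},
    {"customer": "a", "when": date(2015, 9, 2),  "amount": 10.0},
    {"customer": "b", "when": date(2015, 4, 5),  "amount": 80.0},
    {"customer": "b", "when": date(2015, 8, 20), "amount": 95.0},
]

CUTOFF = date(2015, 6, 1)  # the moment we pretend we're making the prediction

def build_row(customer):
    """One row per customer: predictors from before the cutoff, the outcome
    from the later window, juxtaposed side by side."""
    before = sum(t["amount"] for t in transactions
                 if t["customer"] == customer and t["when"] < CUTOFF)
    after = sum(t["amount"] for t in transactions
                if t["customer"] == customer and t["when"] >= CUTOFF)
    return {
        "customer": customer,
        "spend_before_cutoff": before,       # what we knew at prediction time
        "defected": after < 0.15 * before,   # found out later, from the outcome window
    }

training_set = [build_row(c) for c in ("a", "b")]
```

Choosing the cutoff, the outcome window, and what counts as defection are exactly the human judgments Eric says can’t be automated away.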

All these things are very much human activities. The need for human experts is definitely not going anywhere.

ML: Good. So people have job security.

ES: Yeah, absolutely.

ML: So, there is a section of your book that covers something I am very interested in. A challenge with analytics today is that the insights are there, but companies are often not equipped to act on that information, or they are not agile enough to respond fast enough. Predictive analytics makes this even more pressing, because it has the potential to tell us what we should be doing, or what we can expect over the coming months. What do organizations need to change to be able to leverage this information in time?

ES: That’s a great question, and it’s essential to the difference between a successful predictive analytics project and a failed one that doesn’t achieve value, because predictions are only useful if you act on them. In fact, that issue is often considered the primary pitfall, which can be a surprise because many might expect a more technical pitfall; but no, it’s really an organizational process pitfall.

Typically, the data does yield results analytically; however, the organization needs to prepare to implement them in the deployment phase. That means actually acting on the predictions, which means a change to existing organizational processes. It’s no longer “business as usual.” So, for example, if you’re doing marketing a certain way, you need to now be targeting that marketing in a way that considers or integrates the output of a predictive model (that is, a predictive score) to make those decisions.

So it’s important to have a plan of action in place for the technical integration before green-lighting a predictive analytics project.

ML: Let me then ask a follow-up question: In organizations you’ve studied or worked with, where do you feel they are using predictive analytics in the right way? What do they have in common? Do these organizations have certain traits that others can learn from?

ES: I would say it’s more about executing, what they do rather than who they are. One of the key executions is everything I just mentioned about having an organizational plan rather than just a technical plan. This is very much an organizational process. I like to call predictive analytics “the Information Age’s latest evolutionary step.” This isn’t just the warehousing of data, where a lot of the focus is on the infrastructure to maintain and manage data. Instead, this is science that uses the content of data to learn, and also to fundamentally change all of the primary activities of an organization, all the outward-facing actions… its raison d’être (French for “reason for being”).

So the vision is that we’re going to optimize, based on data and math, all these outward-facing treatments: how people are routed in customer service, which transaction is audited for fraud, which law enforcement suspect is put on this list or that list. For all of these primary operations, the organization needs to have buy-in. Typically, and this is one of Tom Davenport’s original messages from years ago, this requires executive buy-in, from the top down.

It’s not only top-down, though. We’re working with an organization of people, so there are a lot of contributions from the bottom up, where individual practitioners see the potential and help streamline and communicate what’s possible from the bottom up in the organization as well.

ML: Clear. So taking it from a slightly different angle, we’ve been talking a lot about businesses and organizations. At what point do you think predictive analytics will become mainstream in our personal lives? For example, being able to see “This is how much I need to exercise before the end of the week to lose so many pounds,” or “How many calories do I need to consume in the following months to accomplish certain results?” So there are certain tools that kind of do this right now, but this is not yet mainstream. How far are they from it?

ES: That’s a great question. There are a lot of layers to it. First of all, I think it is mainstream, in that all of our lives are touched many times a day by predictive models along the lines of the things I’ve mentioned: product recommendations, targeted mail, spam filtering, Facebook feeds, Google search results. These things very much affect us every day. For example, whether you’re approved for a loan or credit card: your FICO score comes from a predictive model.

The main thing is that we’re not necessarily aware of it. So I would say, since we’re increasingly dependent on it and it’s the antidote for information overload, the main thing is that we’re going to become more aware of it as mass consumers.

As far as the healthcare examples you give, those are definitely killer apps of predictive analytics in the healthcare realm. I wouldn’t want to guess how long they will take. There’s a lot of interesting analysis in the law enforcement realm, where there are privacy concerns, not necessarily about the output of the model, but about the input: what data is available, and what kind of predictive model gets access to it. I do think that those killer apps you mentioned are on the horizon.

ML: The conference, Predictive Analytics World: What topics do you see, right now, in high demand?

ES: Financial services. The other dimension is which application the organization will use it for, e.g., fraud detection or marketing, and that question cuts across all the different markets. Marketing is pretty well-entrenched; I would say that most large organizations use predictive modeling to target marketing one way or another. Lots of small and medium businesses do too; it has more to do with the size of your list than the size of your company.

I would say that all large financial institutions use predictive analytics for fraud detection, so it’s very much mainstream and, I would say, growing very quickly into HR and into manufacturing.

ML: When it comes to conferences, I always enjoy meeting the different vendors. You can often tell quite a bit from who is sponsoring an event. Do you see any common trends across your conferences in who the sponsors are? Who continuously sponsors, and who are the newest players now trying to sponsor your conference?

ES: Well, that’s a great question. It’s a real mix between the established, longer-term ones and the new players. As an aside, I’ll also make the comment that the number of vendors in the predictive analytics space is increasing so quickly I would say that a couple times a month I see a new one that I’ve never even heard of. One crosses my desk either as a sponsor of our events or otherwise.

Some vendors specialize in particular verticals; others specialize in specific capabilities of their software solutions. There’s a trend now toward doing a lot with the cloud, but that’s not the majority of the software solutions.

The software solutions also differ in their data pre-processing, and in whether they integrate with R, which is the leading software solution (that’s “R” as in Roger). They also differ in the more advanced approaches and techniques they may embody.

At our events, advanced analytical methods are covered by the non-sponsors. Of course, the majority of the conference program is non-sponsored, vendor-neutral content. That includes advanced methods such as ensemble models, one of the hot areas in modeling methods.
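An ensemble model combines several component models and aggregates their predictions. Here is a toy majority-vote sketch; the rule-of-thumb “models,” the field names, and the thresholds are all invented, standing in for real trained models:

```python
# Three deliberately simple "models", each a rule of thumb that is
# only somewhat accurate on its own.
def model_recency(row):
    return row["days_since_visit"] < 30

def model_spend(row):
    return row["monthly_spend"] > 50

def model_engagement(row):
    return row["emails_opened"] >= 2

MODELS = [model_recency, model_spend, model_engagement]

def ensemble_predict(row):
    """Majority vote across the component models."""
    votes = sum(m(row) for m in MODELS)  # booleans sum as 0/1
    return votes >= 2

customer = {"days_since_visit": 12, "monthly_spend": 40, "emails_opened": 3}
print(ensemble_predict(customer))  # True: two of the three models vote yes
```

In practice the components are themselves learned models (for example, the many decision trees of a random forest) and votes may be weighted, but the aggregation idea is the same.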

ML: I think you’ve already mentioned how the conference will change or continue to evolve. Do you have any other plans or thoughts on how this predictive analytics industry will evolve over the next few years?

ES: That’s a great question. I would say that these advanced methods are only just starting to take hold and continuing to grow very quickly. I’d say that one of the most interesting aspects in the space is what’s happening with the vendors as far as them rolling up and acquiring and merging together and the kinds of capabilities within the analytics field that are turning out to be most important.

A great deal of focus is really on the data and the data prep topic, even though that’s not fully automatic. There’s all sorts of data preprocessing that can help. In general, a good rule of thumb is that the data matters a lot more than the core analytical technique. The core math is important but the biggest win is from improving the source, the size and the quality of the data.

ML: Very interesting. Last question: Where do you personally go to find the answers to some of your questions? Obviously, you’re one of the best-known thought leaders in this field, but when you’re not quite sure of something, who are some of the people whose blogs or books that you read and would highly recommend?

ES: I always start with the individuals we prioritize as speakers at the event. The headliners at the event are often the best sources, including John Elder, Dean Abbott, and Usama Fayyad, and then various experts in uplift modeling, which is a cutting-edge approach, including Dan Corder and Kenny Larson.

There are a lot of really deep thinkers and advanced experts that we see at the event. Most of the names that I just listed are also authors.

Resources:

You can connect with Eric on LinkedIn and Twitter

To learn more about Predictive Analytics World Conference

Author

  • Michael Loban is the CMO of InfoTrust, a Cincinnati-based digital analytics consulting and technology company that helps businesses analyze and improve their marketing efforts. He’s also an adjunct professor at both Xavier University and University of Cincinnati on the subjects of digital marketing and analytics. When he's not educating others on the power of data, he's likely running a marathon or traveling. He's been to more countries than you have -- trust us.

Last Updated: July 19, 2023
