Rice Analytics

Automated Reduced Error Predictive Analytics

"I highly recommend RELR. The features of RELR we find most useful are: • the user support provided by Rice Analytics •  automated interaction detection • automated multicollinearity management • the availability of FULLBEST [Implicit RELR] vs PARSED [Explicit RELR] solutions • the reduced error objective function."

David Napior, Senior Vice President, GfK MRI
 
"When I was CFO at evolve24 [now evolve24-Maritz Research], we worked with Dan and used RELR for predictive analytics concerning mutual fund flows in relation to media variables. These models were produced very rapidly and with small samples of data and we found it to be effective."

Philip Abbenhaus, Director, Asian Equity Research Institute

 
 
Case Studies

Some of the unique aspects of what is now called Reduced Error Logistic Regression (RELR) have been employed with real business data in research and consulting applications for over ten years.  However, we have continually refined and improved the method in important ways, so it is now a generalized machine learning method with an extremely sophisticated and stable feature selection procedure, called Explicit RELR in the Calculus of Thought book to emphasize its connection to explicit neural learning in cognitive theory.  The cases below are just a sampling of case studies from the most recent years.  Many of these case studies have been presented by us or our users at conferences.


New Case Studies on SkyRELR finished in late 2015:  We present new research showing how well predictions from both Implicit and Explicit RELR models replicate, and how deeper aspects of the Explicit RELR model give putative causal insight into financial behavior during the 2008-2010 financial crisis.  Here are the links:

http://www.skyrelr.com/interpreting-deep-explicit-relr/ 

http://www.skyrelr.com/replicating-predictions-using-public-uc-irvine-dataset/

 

Many Case Studies in Calculus of Thought Book:  In addition to the case studies listed below, numerous new case studies are presented in the book Calculus of Thought by Daniel M. Rice, published in November 2013 by Academic Press. These include entirely new application areas such as Bayesian online learning across time and causal reasoning.

 

Using RELR for Educational Achievement Research:  This was a comparison of RELR, Random Forests, Logistic Regression, LASSO, LARS, Stepwise Regression, and Bayesian Networks.  This completely independent study was conducted by Thomas Ball.  Other comparisons to RELR that have been publicly available or listed in these case studies have largely concerned wide datasets.  This comparison is unique in that it concerns fairly tall datasets, where there are many more observations than variables.  In this case, there were two datasets, with either 46 or 1750 variables, and each had about 13,000 training observations.  The major finding was that RELR's parsimonious variable selection algorithm, called Parsed RELR, outperformed the other algorithms in classification accuracy by 2-4%, although RELR's advantage in these tall datasets was not as dramatic as reported with some wide datasets.  While the practical significance of a 2-4% improvement may not be as dramatic as with wider data, an argument can still be made that this improvement could have advantages.  A sub-comparison that shortened these tall datasets just for RELR showed that RELR's accuracy with a training sample of 3,300 was almost identical to its accuracy with a sample of 13,000, and RELR's stability was also reasonable at the smaller sample size, although not perfect. These findings add support to the notion that RELR can generate accurate models that are also parsimonious and interpretable with relatively small training sample sizes. The new evidence is that RELR may outperform other algorithms even at relatively larger training sample sizes and in relatively tall datasets. Here is a link to our web page where this study can be found: RELRTALLDATAComparison.

 

Using RELR for Syndicated Media Research and Customized Analytics:  In 2010, one of the GfK companies decided to use RELR for the first time to ascertain whether it could solve a basic problem that they had: their models were unstable across independent samples and therefore not interpretable, as their variable selection was likely to generate entirely different models with independent samples of data.  After a month of testing RELR, they came back to us and told us that they were very impressed with the stability, interpretability, and parsimony of RELR's Parsed variable selection compared to all other methods that they had used, including Stepwise and Penalized Logistic Regression.  They also told us that they really liked the fact that it was completely automated, as this was a major labor cost savings for them.  On that basis, they decided to move immediately to long-term licensing of RELR for their advanced analytics.  After six months of using RELR, they called us and told us that they were using it all the time, that they were extremely pleased, and that they wished to order expanded licensing of MyRELR.  They commented on its ability to detect interaction effects among many variables and therefore to produce much more accurate models than otherwise possible.  They also commented on how impressed they were with RELR's ability to give very stable and accurate models with very small sample sizes that were a fraction of what they had previously used with all other methods. Smaller samples were also a major cost savings for them, as this meant that less money would be spent on survey data collection. These positive reports from GfK are similar to all reports that we now hear from users of RELR.

 

Using RELR for Credit Scoring:  A number of different banks and financial firms have applied RELR to credit scoring applications.  The most impressive result to date is a user report that RELR lifted the KS statistic from roughly 40 to 65 compared to other methods.  This was possible because RELR was able to screen roughly 80,000 candidate variables and run accurately with a small sample of about 3,000 observations.  The other methods either could not handle that number of variables and/or could not get an accurate solution with that number of variables at any sample size. This user commented that this kind of lift in performance was definitely an extraordinary result in their modeling practice.  Other noteworthy comments from these credit scoring applications include that RELR's variable selection seems to select meaningful variables, that RELR's variables were definitely more statistically significant compared to stepwise logistic regression, and that RELR's automated capabilities were an advantage.   One of these credit scoring RELR users was from Premier BankCard. These results were presented at the SDSUG conference sponsored by SAS in April 2009.
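The KS statistic cited above measures the maximum separation between the cumulative score distributions of "good" and "bad" accounts, often reported on a 0-100 scale. A minimal sketch of that computation, assuming a standard two-sample Kolmogorov-Smirnov definition (the function name, scores, and labels are illustrative, not from the case study):

```python
# Sketch of the Kolmogorov-Smirnov (KS) statistic as used in credit
# scoring: the maximum gap between the cumulative score distributions
# of good and bad accounts, scaled to 0-100. Data are hypothetical.

def ks_statistic(scores, labels):
    """labels: 1 = bad (default), 0 = good. Returns KS on a 0-100 scale."""
    pairs = sorted(zip(scores, labels))          # walk scores low to high
    n_bad = sum(labels)
    n_good = len(labels) - n_bad
    cum_bad = cum_good = 0
    best = 0.0
    for _, label in pairs:
        if label == 1:
            cum_bad += 1
        else:
            cum_good += 1
        # gap between the two empirical cumulative distributions
        best = max(best, abs(cum_bad / n_bad - cum_good / n_good))
    return 100.0 * best

scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1,   1]
print(ks_statistic(scores, labels))  # 75.0 for this toy data
```

A score model that separated goods and bads perfectly would reach a KS of 100; random scores would hover near 0.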

 

Average Squared Error of Reduced Error Logistic Regression and Other Comparisons to Standard Methods:  Validation sample average squared error in Reduced Error Logistic Regression was compared to Penalized Logistic Regression (PLR) and five other standard methods in models developed from a Pew 2004 U.S. Presidential Election Weekend Poll dataset.  RELR had significantly lower validation sample average squared error than all other methods.   In addition, RELR's model stability was compared to PLR's.   The reliability of the regression coefficients between models built from two independent samples of roughly 1,000 observations was .95 with RELR, whereas it was only .26 with PLR.  This extremely high reliability with RELR suggests that the regression coefficients are not simply an artifact of the sample chosen, but are instead very reliable.  This study also employed Parsed RELR, a sophisticated parameter reduction method, and showed that the identical 9 variables with similar regression coefficients were selected in two independent samples, with Parsed RELR models that had accuracy similar to the Full RELR models.  These Parsed RELR models had a large amount of face validity, as there is substantial agreement on the most important variables in recent U.S. presidential election outcomes.  Some of these findings were presented at the 2008 Joint Statistical Meetings in Denver, Colorado on August 6, 2008.  A copy of this paper can be downloaded from the Papers and Presentations page of this website; its full reference is: Rice, D.M. (2008). Generalized Reduced Error Logistic Regression Machine - Section on Statistical Computing: JSM Proceedings 2008, pp. 3855-3862.
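The coefficient reliability figures above (.95 vs. .26) are correlations between the coefficient vectors fit on two independent samples. A minimal sketch of that split-sample stability check, assuming Pearson correlation is the reliability measure; the coefficient vectors below are hypothetical, not the study's actual values:

```python
# Sketch of a split-sample stability check: fit the same model on two
# independent samples, then correlate the two coefficient vectors.
# A correlation near 1 indicates stable, replicable coefficients.
# The coefficients below are hypothetical illustrations.
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

coef_sample1 = [0.8, -0.5, 1.2, 0.1, -0.9]   # coefficients from sample 1
coef_sample2 = [0.7, -0.6, 1.1, 0.2, -1.0]   # same model, sample 2
print(round(pearson(coef_sample1, coef_sample2), 3))
```

An unstable method would yield near-zero or even negative correlation here, meaning the "important" variables change from sample to sample.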

 

Reduction of Sample Size in a Customer Satisfaction Survey:  A typical marketing research problem is to determine the relative importance of a large set of customer satisfaction attributes that drive overall customer satisfaction. Reduced Error Logistic Regression was employed to this end with a survey of 1,000 online respondents who rated 23 different, highly correlated attributes of their financial advisor, such as trustworthiness, proactive financial planning, and useful advice. In addition, these respondents rated their overall satisfaction with their financial advisor.   RELR was able to build a reliable and valid model that predicted overall satisfaction based upon these attributes using only a sub-sample of 100 respondents. The reliability and validity of this model could be empirically verified with independent samples of 100 taken from the original 1,000. The original sample size of 1,000 was employed because this is about the number of observations required for this many variables with standard regression-based modeling. RELR reduced this cost by 90%. These results were presented at the 2006 Psychometric Society Conference in Montreal, Canada.

 

Linkage of Survey Measures to Spending Behavior in Las Vegas Shoppers:  A typical marketing research problem is to link measures of customer satisfaction to business outcomes related to loyalty and spending. RELR was employed to this end with a loyalty and spending survey funded by Shop America and Fashion Outlets. The respondents were tourists in Las Vegas who took a shopping tour at the Fashion Outlets-Las Vegas shopping center; 290 people participated. The surveys were administered during the return bus trip from the shopping center back to the Las Vegas strip. Respondents were asked about 49 relatively correlated attributes related to their satisfaction, whether they spent as much money as planned, and whether they would recommend this tour to a friend. RELR was able to build a reliable and valid predictive model that uncovered attributes related to the importance of the bus driver and how he or she promotes the shopping center. In addition, the time that the shoppers were allowed to spend at the mall turned out to be very important. Based upon the well-known 10:1 rule, which says that for every 10 target-category responses you can include one variable in logistic regression, a standard logistic regression model would have required at least 10 times the number of respondents that this survey required for a stable predictive model with linear variables, and even more with nonlinear variables. As in previous work, the signs of the regression coefficients all pointed in the predicted direction given by the pairwise correlations to the dependent variable, so again this was clear evidence that we had avoided multicollinearity. These results were presented at the 10th Annual Shop America Conference in Las Vegas in 2007.
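The 10:1 events-per-variable rule of thumb cited above translates directly into a minimum sample-size requirement. A quick sketch of that arithmetic (the helper function name is ours; the variable count comes from the survey above, but the rest is illustrative):

```python
# Worked sketch of the 10:1 events-per-variable rule of thumb cited
# above: for every 10 target-category responses, standard logistic
# regression can support roughly one candidate variable.

def min_target_responses(n_variables, events_per_variable=10):
    """Minimum target-category responses the rule of thumb requires."""
    return n_variables * events_per_variable

n_vars = 49                           # attributes in the survey above
print(min_target_responses(n_vars))   # 490 target-category responses
```

With 49 candidate linear variables the rule implies at least 490 target-category responses before a standard logistic regression is even considered stable, well beyond the 290 total respondents this survey collected; adding nonlinear terms raises the requirement further.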