Friday, July 30, 2010

Review: (Orme 2009): Which Conjoint Method Should I Use?

Bryan K. Orme, Sawtooth Software, Inc.
Copyright 2009

Objective
- provides greater depth in understanding issues involved with choosing a conjoint approach

Useful Information
- Ratings-Based Systems
* respondents are asked to rate (or rank) a series of concept cards describing product concepts using multiple attributes
* creator, Paul Green, suggested about six attributes and 12 to 30 cards to avoid simplification strategies, with more attributes requiring more cards
* early version referred to as "card-sort conjoint"
* Adaptive Conjoint Analysis
^ Sawtooth developed software to adapt cards to previous answers
^ possible to study a dozen to two-dozen attributes while keeping respondent engaged 
^ had varying sections of interview with one or few attributes presented at a time
^ led through systematic investigation over all attributes
^ resulted in full set of preference scores for levels of interest (part-worth utilities)
^ require computer administration
^ main-effects model
^ "all else equal," without inclusion of attribute interactions
^ understates importance of price, understatement increased with number of attributes
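The main-effects model described above can be sketched in a few lines: a concept's total utility is just the sum of the part-worth utilities of its attribute levels, with no interaction terms ("all else equal"). The attributes, levels, and utility values below are hypothetical, purely for illustration.

```python
# Hypothetical part-worth utilities for three attributes (not from the paper).
part_worths = {
    "brand": {"BrandA": 0.8, "BrandB": 0.2, "BrandC": -1.0},
    "size":  {"Small": -0.5, "Medium": 0.1, "Large": 0.4},
    "price": {"$10": 1.2, "$15": 0.0, "$20": -1.2},
}

def concept_utility(concept):
    """Main-effects model: sum part-worths across attributes, no interactions."""
    return sum(part_worths[attr][level] for attr, level in concept.items())

concept = {"brand": "BrandA", "size": "Large", "price": "$15"}
print(round(concept_utility(concept), 2))  # 0.8 + 0.4 + 0.0 -> 1.2
```

Because utilities simply add, an attribute's "importance" is driven by the spread between its best and worst levels, which is one reason the model can misstate price importance when many attributes are included.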
What were the key methods or approaches?
- Choice-Based Conjoint (CBC)
* closely mimic purchase process in competitive contexts
* instead of rating or ranking, asked to choose from a set of products
* some studies show products, about a dozen, on a screen as if on store shelves
* recommend researchers show more rather than fewer product concepts per choice task
* choices contain less information than ratings per unit of respondent effort
^ do not learn degree of preference among products
^ do not learn relative preference among rejected alternatives
* Sawtooth can include up to 10 attributes, with 15 levels each (advanced design: 30 attributes with 254 levels)
* traditionally analyzed at aggregate level, but now individual level can be assessed using latent class and hierarchical Bayes (HB) estimation methods (majority of Sawtooth users use HB for final market simulation models)
^ Aggregate Choice Analysis
+ argued that it permits estimation of subtle interaction effects
+ market is not homogeneous, consumers have unique preferences
+ suffers from Independence from Irrelevant Alternatives (IIA) assumption (red bus/blue bus problem: very similar products in competitive scenarios receive too much net share), fail when there are differential cross-effects between brands
^ Latent Class Analysis
+ simultaneously detects homogeneous respondent segments and calculates segment-level part-worths
+ if market is truly segmented can reveal much about market structure and improve predictability over aggregate choice models
+ subtle interactions can be modeled
^ HB (Hierarchical Bayes Estimation)
+ "borrows" information from each respondent to improve accuracy and stability of each individual's part-worth
+ consistently shown to reduce the IIA problem and improve the predictive validity of individual-level models and market simulation share results
+ can employ main effects or models that include interaction terms
+ main effects models with HB are sufficient to model choice
+ HB outperforms aggregate logit for predicting shares for holdout choices and actual market shares even when there was very little heterogeneity in data
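The IIA problem noted above is easy to demonstrate with an aggregate multinomial logit share rule and hypothetical utilities: adding a near-duplicate alternative ("blue bus") draws share proportionally from all alternatives, so the two similar products receive too much combined share.

```python
import math

def logit_shares(utilities):
    """Multinomial logit: share_i = exp(u_i) / sum_j exp(u_j)."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Car vs. red bus, equally preferred: shares split 50/50.
print(logit_shares([1.0, 1.0]))       # [0.5, 0.5]

# Add a blue bus identical to the red bus. Intuitively the buses should
# split the bus share (car 50%, each bus 25%), but aggregate logit gives
# each alternative one third, inflating the buses' combined share to ~67%.
print(logit_shares([1.0, 1.0, 1.0]))
```

Individual-level estimation (latent class, HB) mitigates this because respondents who prefer buses are modeled separately from those who prefer cars, so near-duplicates compete mostly with each other in the simulation.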
- Partial-Profile CBC
* used to increase number of attributes
* each choice question includes subset of total number of attributes which are randomly rotated into tasks so that each respondent considers all attributes and levels
* data are spread quite thin because each task has many attribute omissions
* require larger sample sizes to stabilize 
* individual-level estimation under HB does not always produce stable individual-level part-worths
* if main goal is to achieve accurate market simulations (and large enough samples are used), individual-level stability can be sacrificed
* subject to similar price bias as ACA (though not as pronounced)
* respondents can ignore omitted attributes and base choice solely on partial information presented in each task
* researchers and academics prefer full-profile conjoint techniques that display all attributes within each choice task to avoid bias of final part-worth utilities
- Adaptive CBC (ACBC)
* respondents first identify ideal product using configurator
* software builds a couple dozen similar product concepts for respondent to indicate which they would consider
* considered products are taken to a choice tournament to identify overall best concept where the choice tasks look like standard CBC tasks
* respondents find more engaging and realistic, but takes longer than CBC
* sample size is smaller than standard CBC because more information is captured from each individual
* more information at the individual level leads to better segmentation
* validity is slightly better than CBC
* captures the percent of respondents who find each attribute level to be a "must have" or "unacceptable"
* not as useful for packaged goods with only a few attributes
* appropriate for problems involving more complex products and services with five attributes or more
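The ACBC flow described above can be sketched roughly as configure, generate near neighbors, screen, then run a choice tournament. Everything below is a stand-in: the attribute fields and scoring function are hypothetical (real ACBC uses the respondent's own answers at each step), and with deterministic preferences the tournament collapses to picking the highest-utility considered concept.

```python
import random

random.seed(0)

def score(concept):
    # Hypothetical stand-in for the respondent's preferences.
    return -1.0 * concept["price_rank"] + 0.5 * concept["quality"]

# Step 1: respondent configures an ideal product (hypothetical fields).
ideal = {"price_rank": 1, "quality": 10}

# Step 2: build a couple dozen similar concepts near the ideal.
neighbors = [{"price_rank": random.randint(1, 5),
              "quality": random.randint(1, 10)} for _ in range(24)]

# Step 3: screening -- keep the ideal plus any neighbor the respondent
# would consider.
considered = [ideal] + [c for c in neighbors if score(c) > 0]

# Step 4: choice tournament -- CBC-style choice sets reduce to picking
# the highest-scoring concept when preferences are deterministic.
winner = max(considered, key=score)
print(winner)
```

The extra steps are why ACBC takes longer than standard CBC per respondent, but each step also yields more individual-level information (e.g. the screening answers behind the "must have"/"unacceptable" percentages).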
- Sample size
* if dealing with relatively small sample sizes (especially less than 100), be cautious about using CBC unless respondents answer more than the usual number of choice tasks
* ACBC and ratings-based approaches are better suited to small samples
* If interview must be done on paper and small sample size is norm, consider CVA
