CVA: A Full-Profile Conjoint Analysis System From Sawtooth Software
Technical Paper, Version 3
Sawtooth Software, Inc., 2002
Sequim, Washington USA (360) 681-2300
http://www.sawtoothsoftware.com
Useful Information
- Conjoint analysis is useful for learning how potential buyers of a product or service value its various aspects or features
- Goal is to determine which product changes would have the most beneficial effect on share of preference, or which would maximize the likelihood that buyers choose specific products
- Pairwise presentation can be harder for the respondent because it requires understanding two concepts rather than one, but it lets the respondent make finer distinctions and contribute more information than single-concept (card-sort) presentation
- Ordinary Least Squares (OLS) regression is appropriate for ratings-based data and monotone regression for rankings-based data (a hierarchical Bayes (HB) module is also available)
- relative part-worths are similar whether estimated from single concept or paired comparison conjoint questionnaire formats
- pairwise is suggested for most applications; however, it only captures relative differences in a respondent's preferences for attribute levels and never measures absolute level of interest in product concepts
- use single-concept presentation to run "purchase likelihood" simulations
- Single Concept Presentation ("card-sort")
* shown one product at a time and asked to rate likelihood of purchasing
* can rate (OLS) or rank (monotone regression)
- To ensure CVA questionnaire is appropriate:
* keep number of attributes small
* pretest questionnaire
* confirm that the resulting utilities are reasonable
- Pretest:
* take questionnaire yourself and see if utilities mirror own values
* have others answer questionnaire to gauge difficulty
* have sample of relevant respondents answer questionnaire and analyze their data, look for "nonsense" results like higher utilities for higher prices
- recommended number of tasks provides 3 times as many observations as parameters to be estimated (# of parameters = total # of levels - # of attributes + 1)
* asking recommended number helps ensure enough information to calculate stable estimates for each respondent
* use of HB may slightly reduce number of questions you decide to ask as it estimates part worths based on information from current respondent plus information from other respondents in dataset
- Survey generation preparation:
* specified attributes, assuming an a priori ordering for levels within attributes: Wild | Farmed | None, Local | US | Foreign, Price Level 1 | Price Level 2 | Price Level 3
* chosen questionnaire format: CVA ranking
* how many tasks to ask: # of parameters = 12 - 4 + 1 = 9, so 9 x 3 = 27 tasks (fewer if you use HB; see the sketch after this list)
* discard "obvious" tasks
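* a minimal Python sketch (not from the paper) of the parameter and task-count arithmetic above; the level counts used are only illustrative:
    # parameters = total levels - number of attributes + 1; recommended tasks = 3 x parameters
    def recommended_tasks(levels_per_attribute):
        total_levels = sum(levels_per_attribute)
        n_attributes = len(levels_per_attribute)
        n_parameters = total_levels - n_attributes + 1
        return n_parameters, 3 * n_parameters

    # e.g. 4 attributes with 3 levels each: 12 - 4 + 1 = 9 parameters, 27 tasks
    print(recommended_tasks([3, 3, 3, 3]))  # (9, 27)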
- Survey Generation
* Orthogonality and CVA Design
^ orthogonal = zero correlation between pairs of attributes
^ attributes must vary independently of each other to allow efficient estimation of utilities
^ level balance: each level within an attribute shown equal number of times
^ a design that is both orthogonal and level-balanced is optimally efficient
^ if the design is not orthogonal, Sawtooth CVA allows you to test the design, accounting for the impact of prohibitions or of asking fewer than the recommended number of questions
+ done using CVA designer which maximizes D-efficiency which accounts for frequency of level occurrences and left/right balance for pairwise design
_ repeats the design procedure 10 times and chooses the best final solution (for designs with many attributes you can override this to search 100 or more repetitions)
_ measures goodness of the design relative to a hypothetical orthogonal design (a minimal sketch of the D-efficiency calculation follows at the end of this section)
_ even so, the final efficiency may still be satisfactory
_ D-efficiency should only be considered after an adequate number of conjoint questions relative to the number of parameters has been ensured
^ CVA Attribute Prohibitions (probably won't need)
+ cautioned against using since they usually have a significant impact on efficiency
+ should only be used to eliminate combinations that respondents recognize as impossible or absurd
^ every respondent receives same set of questions
+ can add a few user-specified holdout cards
+ CVA will conservatively discard the most "obvious" conjoint tasks
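* a minimal Python sketch (not CVA's actual code) of the relative D-efficiency measure referenced above, assuming X is an effects- or dummy-coded design matrix (rows = conjoint tasks/concepts, columns = model parameters):
    import numpy as np

    def d_efficiency(X):
        # Kuhfeld/Tobias/Garratt-style measure: 100 / (N * det(inv(X'X))**(1/p)).
        # The absolute value depends on the coding used, so it is mainly useful
        # for comparing candidate designs that are coded the same way.
        n, p = X.shape
        xtx = X.T @ X
        return 100.0 / (n * np.linalg.det(np.linalg.inv(xtx)) ** (1.0 / p))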
- Part Worth Utility Estimation
* Ordinary Least Squares
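^ a minimal Python sketch (not from the paper) of OLS part-worth estimation from single-concept ratings, assuming a dummy-coded design with an intercept; the numbers are illustrative:
    import numpy as np

    # columns: [intercept, attribute 1 = level 2, attribute 2 = level 2]
    X = np.array([[1, 0, 0],
                  [1, 0, 1],
                  [1, 1, 0],
                  [1, 1, 1],
                  [1, 0, 1],
                  [1, 1, 0]], dtype=float)
    y = np.array([3.0, 5.0, 6.0, 8.0, 4.5, 6.5])  # one respondent's ratings

    betas, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS part-worth estimates
    print(betas)  # [intercept, part-worth attr 1 lvl 2, part-worth attr 2 lvl 2]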
* Monotone (nonmetric) Regression
^ iterative, finding successive solutions for utility values that fit data increasingly well
^ initial solution developed randomly or using information in experimental design
^ Measures of Fit
+ Kendall's Tau
_ measure "how close" utilities are to rank orders of preference
_ consider all possible pair of concepts, ask for each pair whether member with more favorable rank has higher utility
_ expresses amount of agreement between preference and estimated utilities
_ obtained by subtracting the number of "wrong" pairs from the number of "right" pairs and dividing by the total number of pairs
_ tau of 1 indicates perfect agreement in a rank-order sense, tau of 0 indicates complete lack of correspondence, tau of -1 indicates a perfect reverse relationship
_ convenient way to express amount of agreement between set of rank orders and other numbers such as utilities
_ not useful as the basis for an optimization algorithm (it is not a smooth function of the utilities, unlike theta below)
+ Theta
_ continuous function of utility values
_ obtained from the squared utility differences between pairs of concepts
_ sum the squared utility differences for pairs that are in the "wrong order", divide by the total of all squared utility differences, and take the square root of the quotient
_ the proportion of information in the utility differences that is incorrect given the data; best possible value = 0, worst = 1 (a sketch of both fit measures follows the iteration steps below)
_ Computation
+ initial solution obtained using random numbers or information in experimental design
+ with each iteration, a direction is found that is most likely to yield an improvement; a "line search" is made in that direction and the best point on that line is determined
+ iteration steps:
1) obtain the current value of theta and the direction in which the solution should be modified to decrease theta most rapidly
2) take a step in that direction and recompute theta; if theta is larger than before, try a smaller step, continuing with smaller steps until a smaller value of theta is found
3) continue taking steps of the same size until a value of theta is found that is larger than the previous one
4) fit a quadratic curve to the last three values of theta to estimate the position of the minimal value of theta along this line
5) evaluate theta at the estimated optimal position; end the iteration with that solution if it is the best value of theta found so far, otherwise with the best solution found
6) adjust the step size based on the number of successful steps in this iteration (goal: one successful and one unsuccessful step per iteration)
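^ a minimal Python sketch (not CVA's actual code) of both fit measures, given observed preference ranks (1 = best) and estimated total utilities per concept:
    from itertools import combinations

    def kendalls_tau(ranks, utilities):
        # (right pairs - wrong pairs) / total pairs; a pair is "right" when the
        # concept with the better (lower) rank also has the higher utility
        n = len(ranks)
        right = wrong = 0
        for i, j in combinations(range(n), 2):
            better, worse = (i, j) if ranks[i] < ranks[j] else (j, i)
            if utilities[better] > utilities[worse]:
                right += 1
            elif utilities[better] < utilities[worse]:
                wrong += 1
        return (right - wrong) / (n * (n - 1) / 2)

    def theta(ranks, utilities):
        # sqrt(sum of squared utility differences for wrongly ordered pairs
        #      / sum of all squared utility differences); 0 = best, 1 = worst
        wrong_ss = total_ss = 0.0
        for i, j in combinations(range(len(ranks)), 2):
            better, worse = (i, j) if ranks[i] < ranks[j] else (j, i)
            d2 = (utilities[better] - utilities[worse]) ** 2
            total_ss += d2
            if utilities[better] < utilities[worse]:
                wrong_ss += d2
        return (wrong_ss / total_ss) ** 0.5

    ranks = [1, 3, 2, 4]             # observed preference ranks (illustrative)
    utils = [2.1, 0.4, 1.0, -0.5]    # estimated total utilities (illustrative)
    print(kendalls_tau(ranks, utils), theta(ranks, utils))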
* Scaling of CVA Utilities (Monotone, not OLS regression)
^ values for each attribute have a mean of zero, and their sum of squares across all attributes is unity
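^ a minimal Python sketch (not CVA's actual code) of that scaling convention, given raw part-worths grouped by attribute:
    import numpy as np

    def scale_cva(partworths_by_attribute):
        # center each attribute's part-worths to mean zero, then rescale so the
        # sum of squares across ALL attributes equals one
        centered = [np.asarray(p, float) - np.mean(p) for p in partworths_by_attribute]
        total_ss = sum(float(np.sum(c ** 2)) for c in centered)
        return [c / np.sqrt(total_ss) for c in centered]

    print(scale_cva([[0.8, 0.1, -0.3], [1.2, 0.2]]))  # illustrative raw values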
- Possible Outputs (from Sawtooth CVA, but possibly from NLOGIT):
* First Choice: each respondent is allocated to the product that has the highest overall utility
* Share of Preference: each respondent's "share of preference" is estimated for each product; the simulator sums utilities for each product, then takes antilogs to obtain relative probabilities; shares of preference are averaged across all respondents, and those averages summarize the preferences of the respondents being analyzed (see the sketch below)
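^ a minimal Python sketch (not the simulator's actual code) of the share-of-preference rule for one respondent; "scale" is the exponent that can later be tuned against holdout choices:
    import numpy as np

    def share_of_preference(product_utilities, scale=1.0):
        expu = np.exp(scale * np.asarray(product_utilities, float))  # antilogs
        return expu / expu.sum()  # relative probabilities for this respondent

    print(share_of_preference([1.2, 0.4, -0.3]))  # shares are then averaged across respondents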
* Share of Preference with Correction for Product Similarity:
^ a disadvantage of "share of preference" is that if an identical product is entered twice, the model may give it as much as twice the original share of preference (the independence from irrelevant alternatives, IIA, problem)
^ examines similarity of each pair of products and deflates shares of preference for products in proportion to similarities to others to ensure share of two identical but otherwise unique products together will equal what either product alone would get
^ Randomized First Choice may be better solution
* Purchase Likelihood:
^ does not assume competitive products (e.g., a new product; measures absolute level of interest rather than share of preference)
^ inverse logit transform provides estimates of purchase likelihood, as expressed by respondent in calibration section of questionnaire
^ appropriate if single-concept questionnaires are used and respondents rate cards on probability of purchase scale
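^ a minimal Python sketch of the inverse logit transform described above:
    import math

    def purchase_likelihood(total_utility):
        # maps a concept's total utility onto a 0-1 purchase probability
        return 1.0 / (1.0 + math.exp(-total_utility))

    print(purchase_likelihood(0.85))  # ~0.70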
* Randomized First Choice
^ combines elements of First Choice and Share of Preference model
^ based on First Choice
^ significantly reduces IIA difficulties
+ rather than using utilities as point estimates, RFC adds some degree of random error (attribute-level and/or product-level variation) to each part-worth and/or product utility and computes shares of preference in the same manner as the First Choice method (see the sketch below)
^ each respondent sampled many times to stabilize share estimates
^ results in a correction for product similarity due to correlated sums of variance among products defined on many of the same attributes
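^ a minimal Python sketch (not Sawtooth's implementation) of the RFC idea for one respondent: perturb the part-worths, apply the first-choice rule, and repeat over many draws; the error magnitude here is an arbitrary assumption (the real model also allows product-level error and tunes these magnitudes):
    import numpy as np

    rng = np.random.default_rng(0)

    def rfc_shares(partworths, product_designs, attr_error=0.1, n_draws=5000):
        # product_designs: 0/1 matrix (products x part-worths) marking which
        # levels each simulated product contains
        partworths = np.asarray(partworths, float)
        X = np.asarray(product_designs, float)
        wins = np.zeros(X.shape[0])
        for _ in range(n_draws):
            perturbed = partworths + rng.normal(0.0, attr_error, size=partworths.shape)
            utilities = X @ perturbed        # total utility of each product this draw
            wins[np.argmax(utilities)] += 1  # first-choice rule
        return wins / n_draws
    # because similar products share the same perturbed part-worths, their random
    # utilities are correlated, which produces the correction for product similarity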
- Sawtooth CVA Market Simulator allows:
* "base case" evaluation
* evaluation of the effects of changing one attribute at a time
* interpolation between attribute levels
* analysis of subsets of respondents, with average utilities as well as shares of preference reported along with standard errors
* weighting of respondents
- Potential advantages of CBC (CBC/HB)
* presents more "realistic" tasks than rating or ranking
* "None" option better addresses questions relating to volume (vs. just share)
* estimation can be done for groups rather than individual respondents; information is available to measure interactions as well as main effects
* resulting utilities scaled based on choice data
* vs. CVA, which uses ranks or ratings: choice-derived utilities give shares of preference with appropriate choice-probability scaling (rank/rating utilities do not automatically lead to market simulations with appropriate choice-probability scaling)
* disadvantage: choices are an inefficient way to collect information, since the respondent must process several concepts to provide a single answer; CBC requires larger sample sizes than ACA and CVA to achieve equal precision of estimates
- CVA can achieve approximate choice-probability scaling by adding holdout choice tasks within the CVA survey (the Share of Preference or Randomized First Choice exponent can be tuned so that simulated shares best fit holdout choice probabilities; see the sketch below)
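* a minimal Python sketch (an assumed mechanic, not Sawtooth's code) of tuning the share-of-preference exponent so simulated shares best fit observed holdout choice shares:
    import numpy as np

    def simulated_shares(utilities, scale):
        # utilities: respondents x products matrix of total utilities
        expu = np.exp(scale * np.asarray(utilities, float))
        shares = expu / expu.sum(axis=1, keepdims=True)
        return shares.mean(axis=0)  # average shares across respondents

    def tune_scale(utilities, holdout_shares, grid=np.linspace(0.1, 5.0, 50)):
        # pick the exponent whose simulated shares are closest (least squares)
        # to the observed holdout choice shares
        holdout = np.asarray(holdout_shares, float)
        errors = [np.sum((simulated_shares(utilities, s) - holdout) ** 2) for s in grid]
        return float(grid[int(np.argmin(errors))])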
Useful Resources
Kuhfeld, Tobias, and Garratt (1994), "Efficient Experimental Design with Marketing Research Applications," Journal of Marketing Research, 31 (November), 545-557.
Johnson, Richard M. (1975), "A Simple Method of Pairwise Monotone Regression," Psychometrika, 163-168.
Green, Paul E. and V. Srinivasan (1990), "Conjoint Analysis in Marketing: New Developments with Implications for Research and Practice," Journal of Marketing.