Monday, August 23, 2010

Review: (Hudgins, 1980) Per Capita Annual Utilization and Consumption of Fish and Shellfish in Hawaii, 1970-77

Linda L. Hudgins (Honolulu Laboratory, Southwest Fisheries Center, National Marine Fisheries Service, NOAA, Honolulu, HI 96812)

Marine Fisheries Review, 1980, p. 16-20

Objectives
- Attempt to quantify actual fish and shellfish consumption in Hawaii from 1970 to 1977

Methods
- Order of computation:
1) total supply
2) adjust to edible weight
3) divide by population
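
A minimal sketch of these three steps in code; the edible-weight factor, supply figures, and population components below are illustrative assumptions, not Hudgins' values:

```python
# Minimal sketch of the three-step per capita computation.
# All numbers are illustrative placeholders, not Hudgins' data;
# the paper applies product-specific round-weight-to-edible conversions.

EDIBLE_FRACTION = 0.45  # assumed single conversion factor for this sketch

def per_capita_kg(local_catch, foreign_imports, interstate, exports, population):
    """1) total supply, 2) adjust to edible weight, 3) divide by population."""
    total_supply = local_catch + foreign_imports + interstate - exports  # kg
    edible_weight = total_supply * EDIBLE_FRACTION
    return edible_weight / population

# De facto population: civilian residents + military (plus ~1.15 dependents
# per member, per the paper) + average number of visitors present.
population = 700_000 + 45_000 * (1 + 1.15) + 70_000  # illustrative
print(round(per_capita_kg(5.0e6, 6.0e6, 2.0e6, 0.5e6, population), 2), "kg/person")
```
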
- attempted to calculate per capita consumption using the methodology from "Fisheries of the United States" (Bell, 1978)
- 3 major sources of commercial fish and fishery products for human consumption in State of HI:
1) local catch (round (live) weight as reported by vessels to State of Hawaii, Division of Fish and Game)
2) imports from foreign countries (net product weight as recorded at port by U.S. Customs officials and published by the U.S. Bureau of Census)
3) interstate shipments from the mainland United States (net product weight as recorded at port by the U.S. Army Corps of Engineers)
- Per capita utilization determined using total supply of fishery products without adjustment for beginning or ending stocks, foreign exports or defense purchases.
- Hawaii consumption is adjusted for foreign exports of 1) fish and shellfish and 2) shipments of canned tuna and fresh and frozen fish to the mainland US.
- Cured fish for consumption includes dried, salted, smoked or kippered fish, whether canned or uncanned
- Raw inputs for local production are counted under fresh, frozen or chilled
- Foreign import value is unadjusted and reflects customs values, which generally represent value in the foreign country
- Supply of canned fish for consumption includes fish of all preparations in airtight containers (mostly cans)
- 3 variable components of population in State of Hawaii outside of civilian population (can account for 20% of actual population at a given time):
1) Military = those serving in the armed forces who reside in HI or are stationed aboard a ship homeported in HI, including dependents
* dependents are approximated at 1.15 per military member
* since 1971 the military population has stabilized at around 6% of those actually present in the State
2) Visitors
* estimated by Hawaii Visitors Bureau
* during 1978, visitors were estimated at around 9% of the total population, based on the annual average number of visitors present
3) Foreign immigrants
* approximately 8% of the civilian resident population in 1970 was foreign born, from the PRC, Taiwan, Japan, Korea and the Philippines (all of which have higher per capita fish consumption rates than the US, according to Bell, 1978: 74)
- imports broken into fresh, frozen or chilled

Hypothesis/Importance of Research
- Speculated that per capita consumption of fishery products in the State of Hawaii is considerably higher than the U.S. average; no studies had been done to prove this
* In 1977 U.S. per capita consumption of edible (meat-weight) fish and shellfish was 5.82 kg (12.8 lb)
- provides valuable input to research and policy planning
- has implications in particular for:
* HI fishing industry
* aquaculture development program in HI
* State of HI Fisheries Development Plan
* Regional Fishery Management Plans

Results and Conclusions
- 1977 per capita consumption rate is 77% higher than U.S. average
- 1977 foreign imports are approx. 54% of total supply of fish and shellfish in Hawaii
- Top 5 countries of origin for quantity and customs value are: New Hebrides, Philippines, Taiwan, Japan and Panama
- Shellfish and anchovies are major products in cans or airtight containers
- in 1972 the major sardine-exporting countries were Brazil, the UK and Denmark; in 1973 Denmark was the only exporter to the Honolulu customs district, which explains the lower figure for 1973
- Tuna, fish fillets and shellfish compose over 90% of total fresh and frozen imports to Hawaii for 1976 and 1977
- Salmon, anchovies, sardines (not in oil), tuna, bonito and yellowtail, clams, and shrimp made up 60% of total canned import quantity in 1976 and 78% in 1977
- US per capita total consumption followed a slight upward trend from 1973 to 1977, varying between 5.45 and 5.91 kg
- HI per capita consumption followed no clear trend between 1970 and 1977: a decline from 1972 to 1974, followed by a steady upward trend
- Per capita consumption of fresh and frozen fishery products in Hawaii has ranged from 87% (1976) to 206% (1971) above the national average
- Per capita consumption of canned fishery products in Hawaii since 1973 has been below U.S. average
- Per capita consumption of cured fishery products in HI has been above the US average, and since the figure does not include local HI production, the true rate is even higher
- 1972-1974 decline occurred in fresh and frozen category, possible factors:
* change in tastes due to public concern over high mercury content in large pelagic fishes
* observed reduction in quantity of local supply, possibly the result of a decline in demand due to changing tastes
* in 1972 the visitor population grew much faster than the civilian resident population and may exhibit consumption rates higher or lower than the resident population's
- the apparent decline in the fresh category may also partly reflect unreported recreational catch (recreational landings are not reported)

Questions about experimental design, statistical analyses or analytical approaches
- How does U.S. Bureau of Census import data differ from USDA-FAS data? Same?
- Are Local (State of HI, DLNR- round weight), Foreign (U.S. Bureau of the Census- net product weight) and Interstate (U.S. Army Corps of Engineers- net product weight) data sources comparable?
* How accurate is Army Corps of Engineers interstate shipment total? This is only questioned since the foreign data for 2008 does not match USDA-FAS import numbers.

Assumptions
- Cured fishery products do not include local production
- Foreign import values exclude U.S. import duties, freight, insurance and other charges incurred in bringing merchandise to the US, and do not reflect actual transaction value
- Recreational fishing catch is not included

Opinion
According to Web of Science no one has cited this paper. However, it did meet its stated objective of attempting to quantify actual fish and shellfish consumption in Hawaii between the years 1970-77. I need to further investigate how the data was used by the HI fishing industry, the aquaculture development program in HI, and the State of HI for fisheries development plans or regional fishery management plans.

This data can be useful for understanding current consumption of fish and seafood, especially at the species level. Using these data, management of the most-consumed species can be considered for fisheries and aquaculture ventures. Local suppliers can also consider import substitution, and marketing for under-consumed or more sustainable species can be improved.

Further Research
- specific determinants of consumption are not addressed in this paper

Useful References
Bell, T. (editor). 1978. Fisheries of the United States, 1977. U.S. Dep. Commer., NOAA, Natl. Mar. Fish. Serv., Curr. Fish. Stat. 7500, 112 p.

Wednesday, August 11, 2010

Technical Paper: (Sawtooth 2002) Conjoint Value Analysis (CVA), Version 3.0



CVA: A Full-Profile Conjoint Analysis System From Sawtooth Software
Technical Paper, Version 3
Sawtooth Software, Inc., 2002

Sequim, Washington USA (360) 681-2300
http://www.sawtoothsoftware.com

Useful Information
- Conjoint analysis useful for learning how potential buyers of a product or service value various aspects or features
- Goal is to determine which product changes would have the most beneficial effect on share of preference, or which would maximize the likelihood that buyers choose specific products
- Pairwise presentation can be harder for the respondent because it requires understanding two concepts rather than one, but it lets the respondent make finer distinctions and contribute more information than single-concept (card-sort) presentation
- Ordinary Least Squares (OLS) regression is appropriate for ratings-based data and monotone regression for rankings-based data (a hierarchical Bayes (HB) module is also available)
- relative part-worths are similar whether estimated from single concept or paired comparison conjoint questionnaire formats
- pairwise presentation is suggested for most applications; however, it captures only relative differences in a respondent's preferences for attribute levels and never measures the absolute level of interest in product concepts
- use single concept to run "purchase likelihood" simulations
- Single Concept Presentation ("card-sort")
* shown one product at a time and asked to rate likelihood of purchasing
* can rate (OLS) or rank (monotone regression)
- To ensure CVA questionnaire is appropriate:
* keep number of attributes small
* pretest questionnaire
* confirm that the resulting utilities are reasonable
- Pretest:
* take questionnaire yourself and see if utilities mirror own values
* have others answer questionnaire to gauge difficulty
* have sample of relevant respondents answer questionnaire and analyze their data, look for "nonsense" results like higher utilities for higher prices
- recommended number of tasks provides 3 times the number of observations as the number of parameters to be estimated (# of parameters = total # levels - # of attributes +1)
* asking recommended number helps ensure enough information to calculate stable estimates for each respondent
* use of HB may slightly reduce number of questions you decide to ask as it estimates part worths based on information from current respondent plus information from other respondents in dataset
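
A small sketch of this rule of thumb; the four-attribute example matches the 12 - 4 + 1 = 9 computation in the survey-prep notes below:

```python
def recommended_tasks(levels_per_attribute, multiplier=3):
    """Rule of thumb from the CVA paper: parameters = total levels
    - number of attributes + 1; ask `multiplier` times that many tasks."""
    n_parameters = sum(levels_per_attribute) - len(levels_per_attribute) + 1
    return n_parameters, multiplier * n_parameters

# Four 3-level attributes: 12 - 4 + 1 = 9 parameters -> 27 tasks (fewer with HB)
print(recommended_tasks([3, 3, 3, 3]))  # (9, 27)
```
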
- Survey generation preparation:
* specified attributes by assuming an a priori ordering for levels within attributes: Wild | Farmed | None, Local | US | Foreign, Price Level 1 | Price Level 2 | Price Level 3
* chosen questionnaire format: CVA ranking
* how many tasks to ask: 12 - 4 + 1 = 9 parameters, 9 x 3 = 27 tasks (fewer if you use HB)
* discard "obvious" tasks
- Survey Generation
* Orthogonality and CVA Design
^ orthogonal = zero correlation between pairs of attributes
^ attributes must vary independently of each other to allow efficient estimation of utilities
^ level balance: each level within an attribute shown equal number of times
^ optimally efficient
^ if the design is not orthogonal, Sawtooth CVA lets you test the design, accounting for the impact of prohibitions or of asking fewer than the recommended number of questions
+ done using the CVA designer, which maximizes D-efficiency, accounting for frequency of level occurrences and left/right balance for pairwise designs (see the sketch after this list)
_ repeats these steps 10 times and chooses the best final solution (for many attributes you can override this to pick the best of 100 or more tries)
_ measures goodness of design relative to hypothetical orthogonal design
_ final efficiency may still be satisfactory
_ only after adequate number of conjoint questions relative to number of parameters should D-efficiency be considered
^ CVA Attribute Prohibitions (probably won't need)
+ cautioned against using since they usually have a significant impact on efficiency
+ should only be used to eliminate combinations that respondents recognize as impossible or absurd
^ every respondent receives same set of questions
^ can add few user-specified holdout cards
+ CVA will conservatively discard the most "obvious" conjoint tasks
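
The notes do not reproduce Sawtooth's exact computation; below is a minimal sketch using the standard relative D-efficiency definition (as in Kuhfeld, Tobias, and Garratt 1994) for a +/-1 coded design, where an orthogonal, balanced design scores 100:

```python
import numpy as np

def d_efficiency(X):
    """Relative D-efficiency (in %) of a coded design matrix X
    (n rows, p columns, including the intercept). With +/-1 orthogonal
    coding, an orthogonal balanced design scores 100."""
    n, p = X.shape
    return 100.0 * np.linalg.det(X.T @ X) ** (1.0 / p) / n

# Full factorial for three 2-level attributes, +/-1 coded, plus intercept:
levels = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
X = np.hstack([np.ones((8, 1)), levels])
print(round(d_efficiency(X), 1))       # 100.0 -- orthogonal and balanced

# Dropping tasks (or prohibiting combinations) lowers efficiency:
print(round(d_efficiency(X[:6]), 1))   # 87.7
```

Dropping tasks or adding prohibitions makes X'X less diagonal, which is exactly how those design choices show up as lower efficiency.
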
- Part Worth Utility Estimation
* Ordinary Least Squares
* Monotone (nonmetric) Regression
^ iterative, finding successive solutions for utility values that fit data increasingly well
^ initial solution developed randomly or using information in experimental design
^ Measures of Fit
+ Kendall's Tau
_ measure "how close" utilities are to rank orders of preference
_ consider all possible pair of concepts, ask for each pair whether member with more favorable rank has higher utility
_ expresses amount of agreement between preference and estimated utilities
_ obtained by subtracting number of "wrong" pairs from number of "right pairs" and dividing by total number of pairs
_ tau of 1 indicates perfect agreement in a rank order sense, tau of 0 indicate complete lack of correspondence, tau of -1 indicates perfect reverse relationship
_ convenient way to express amount of agreement between set of rank orders and other numbers such as utilities
_ not useful to base optimization algorithm
+ Theta
_ continuous function of utility values
_ obtained from the squared utility differences for all pairs of concepts
_ theta = sqrt(sum of squared utility differences that are in the "wrong order" / total of all squared utility differences) (both tau and theta are sketched in code after this list)
_ the percentage of information in the utility differences that is incorrect given the data; best possible value = 0, worst = 1
_ Computation
+ initial solution obtained using random numbers or information in experimental design
+ with each iteration, a direction is found which most likely yields an improvement; a "line search" is made in that direction and the best point on that line is determined
+ iteration steps:
1) obtain the current value and the direction in which the solution should be modified to decrease theta most rapidly
2) take a step in that direction and recompute theta; if theta is larger than before, try a smaller step, continuing with smaller steps until a smaller value of theta is found
3) continue taking steps of the same size until a value of theta is found that is larger than the previous one
4) fit a quadratic curve to the last three values of theta to estimate the position of the minimum value of theta along this line
5) evaluate theta at the estimated optimal position; end the iteration with that solution or the best solution found so far
6) adjust the step size based on the number of successful steps in this iteration (goal: one successful and one unsuccessful step)
* Scaling of CVA Utilities (Monotone, not OLS regression)
^ values for each attribute have a mean of zero, and their sum of squares across all attributes is unity
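
A minimal sketch of both fit measures, assuming ranks where 1 = most preferred and concept utilities summed from estimated part-worths; the data are illustrative:

```python
from itertools import combinations

def fit_measures(ranks, utilities):
    """Kendall's tau and theta for rank-order conjoint data.
    ranks[i] is the respondent's rank of concept i (1 = most preferred);
    utilities[i] is the estimated total utility of concept i."""
    right = wrong = 0
    wrong_sq = total_sq = 0.0
    for i, j in combinations(range(len(ranks)), 2):
        du = utilities[i] - utilities[j]
        # the member of the pair with the better (lower) rank should
        # have the higher utility
        agree = (ranks[i] < ranks[j]) == (du > 0)
        right += agree
        wrong += not agree
        total_sq += du * du
        if not agree:
            wrong_sq += du * du
    tau = (right - wrong) / (right + wrong)
    theta = (wrong_sq / total_sq) ** 0.5
    return tau, theta

# 4 concepts ranked 1..4 with estimated utilities that misorder one pair:
print(fit_measures([1, 2, 3, 4], [2.0, 1.5, 1.6, 0.2]))  # tau ~0.67, theta ~0.04
```

Because tau is a step function of the utilities while theta is continuous, theta is the quantity the monotone regression iterations actually minimize.
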
- Possible Outputs (from Sawtooth CVA, but possibly from NLOGIT):
* First Choice: each respondent is assigned to the product with the highest overall utility
* Share of Preference: each respondent's "share of preference" is estimated for each product; the simulator sums the utilities for each product, then takes antilogs to obtain relative probabilities; shares of preference are averaged across all respondents, and those averages summarize the preferences of the respondents being analyzed (see the sketch after this list)
* Share of Preference with Correction for Product Similarity:
^ the disadvantage of "share of preference" is that if an identical product is entered twice, the model may give it as much as twice its original share of preference (the independence-of-irrelevant-alternatives, IIA, problem)
^ examines the similarity of each pair of products and deflates shares of preference in proportion to similarity to others, ensuring the combined share of two identical but otherwise unique products equals what either product alone would get
^ Randomized First Choice may be better solution
* Purchase Likelihood:
^ does not assume competitive products (ex. new product, absolute level of interest vs. share of preference)
^ inverse logit transform provides estimates of purchase likelihood, as expressed by respondent in calibration section of questionnaire
^ appropriate if single-concept questionnaires are used and respondents rate cards on probability of purchase scale
* Randomized First Choice
^ combines elements of First Choice and Share of Preference model
^ based on First Choice
^ significantly reduces IIA difficulties
+ rather than using the utilities as point estimates, RFC adds some degree of unique random error (variance) to each part-worth (and/or product utility) and computes shares of preference in the same manner as the First Choice method
^ each respondent sampled many times to stabilize share estimates
^ results in a correction for product similarity due to correlated sums of variance among products defined on many of the same attributes
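
A minimal sketch of three of these simulation rules. For brevity the RFC error here is added only at the product-utility level (Gumbel noise) rather than at the part-worth level as Sawtooth does, so this simplified version will not show the similarity correction; all utilities are illustrative:

```python
import numpy as np

def share_of_preference(utilities):
    """Logit rule: antilog (exponentiate) each product's summed utility,
    then normalize to shares."""
    e = np.exp(np.asarray(utilities))
    return e / e.sum()

def purchase_likelihood(utility):
    """Inverse logit transform of a single concept's utility."""
    return 1.0 / (1.0 + np.exp(-utility))

def randomized_first_choice(utilities, n_draws=10_000, noise=1.0, seed=0):
    """RFC: add random error to the utilities, then tally first choices
    across many draws per respondent."""
    rng = np.random.default_rng(seed)
    u = np.asarray(utilities, dtype=float)
    draws = u + rng.gumbel(scale=noise, size=(n_draws, u.size))
    return np.bincount(draws.argmax(axis=1), minlength=u.size) / n_draws

u = [1.2, 0.4, -0.3]  # illustrative summed product utilities
print(share_of_preference(u).round(2), randomized_first_choice(u).round(2))
```

In real RFC the similarity correction comes from the part-worth-level error: products sharing attribute levels receive correlated error sums, which is what deflates near-duplicates.
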
- Sawtooth CVA Market Simulator allows:
* "base case" evaluation
* evaluation of one-attribute-at-a-time effects
* interpolation between attribute levels
* analysis of subset of respondents with average utilities as well as shares of preference including standard errors
* weighting of respondents
- Potential advantages of CBC (CBC/HB)
* presents more "realistic" tasks than rating or ranking
* "None" option better addresses questions relating to volume (vs. just share)
* can be done for groups rather than individual respondents; information is available to measure interactions as well as main effects
* resulting utilities scaled based on choice data
* unlike CVA's rank- or rating-based utilities, which do not automatically lead to market simulations with appropriate choice-probability scaling, choice-developed utilities give appropriately scaled shares of preference
* disadvantage: statistically inefficient, since the respondent must process several concepts to provide a single answer; requires a larger sample size than ACA or CVA to achieve equal precision of estimates
- CVA can achieve approximate choice-probability scaling by adding holdout choice tasks within the CVA survey (Share of Preference or Randomized First Choice can then be tuned to best fit the holdout choice probabilities)

Useful Resources
Kuhfeld, Tobias, and Garratt (1994), "Efficient Experimental Design with Marketing Research Applications," Journal of Marketing Research, 31 (November), 545-557.

Johnson, Richard M. (1975), "A Simple Method of Pairwise Monotone Regression," Psychometrika, 40, 163-168.

Green, Paul E. and V. Srinivasan (1990), "Conjoint Analysis in Marketing: New Developments with Implications for Research and Practice," Journal of Marketing, 54 (October), 3-19.

Review: (Elrod et al. 1992) An Empirical Comparison of Ratings-Based and Choice-Based Conjoint Models

Terry Elrod, Jordan J. Louviere, Krishnakumar S. Davey

Journal of Marketing Research, Vol. 29, No. 3 (Aug., 1992), pp. 368-377
Published by: American Marketing Association
Stable URL: http://www.jstor.org/stable/3172746

Objectives
- compare ratings-based and choice-based approaches to conjoint analysis in terms of their ability to predict choice shares from holdout choice sets
* versus Louviere and Gaeth (1988), who compared ratings-based to choice-based approaches, but compared only coefficients and not predictive ability
* Hagerty (1986) suggests the best utilization of each method likely requires different model specifications
* Bateson, Reibstein and Boulding (1987) used only OLS for analysis, but at the individual level
* Leigh, MacKay, and Summers (1984) used TRICON to infer a partial rank ordering for analysis by MONANOVA (discarding information available in the original choice data)
- introduce and evaluate new choice-based conjoint model specification
* inclusion of "generic cross-effects" allows and tests for departures from Independence of Irrelevant Alternatives (IIA)

Methods/Approach
- 3 models fit to individual-level ratings of full profiles vs. four multinomial logit models fit to choice shares for sets of full profiles
* individual-level models are fit to ratings of full profiles and choice simulators are used to predict shares for sets of alternatives (the most frequently applied method) (Wittink and Cattin 1989)
* aggregate multinomial logit model using choice data (Louviere 1988a,b; Louviere and Woodworth 1983): respondents choose one alternative from each of several sets, an aggregate logit model is fit to the choice shares by maximum likelihood, and choice shares can then be predicted directly by the aggregate model (sketched below)
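
For concreteness, here is a minimal sketch of that direct share-prediction step; the attribute coding and coefficient values are hypothetical placeholders, not the authors' estimates:

```python
import numpy as np

def mnl_shares(X, beta):
    """Aggregate MNL: predicted choice shares for one choice set.
    X: (alternatives x attributes) design matrix for the set;
    beta: pooled coefficients estimated from choice data by maximum likelihood."""
    v = X @ beta              # systematic utility of each alternative
    e = np.exp(v - v.max())   # subtract max for numerical stability
    return e / e.sum()

# Hypothetical holdout set of 3 apartments coded (rent/$100, miles, very_safe):
X = np.array([[3.5, 0.5, 1.0],
              [3.0, 1.5, 1.0],
              [2.7, 2.5, 0.0]])
beta = np.array([-0.8, -0.4, 0.9])   # illustrative, not estimated
print(mnl_shares(X, beta).round(2))  # e.g. [0.43 0.43 0.15]
```
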
- task involved student evaluations of rental apartments
* attributes chosen by examining the university's off-campus housing apartment listings
* fit a normal distribution to listed rent levels, then selected design levels to the nearest $10 at the first, third and fifth sextiles of the distribution (the medians of the lower, middle and upper thirds)
* levels: one bedroom | two bedrooms ("All apartments have one bedroom large enough to accommodate 2 twin-sized beds; two-bedroom apartments have a 2nd large room as a study or extra lounge area"), 0.5 miles | 1.5 miles | 2.5 miles (by the shortest road route for driving or cycling), very safe | fairly safe
* reduced task artificiality and response bias caused by repetition of attribute levels by using a random sampling procedure that let rent and distance from the university vary slightly about a design value for each level (Louviere 1988b)
^ rent could be the design level, $20 more, or $20 less, each with 1/3 probability
^ distance could be the design level, 0.2 mile more, or 0.2 mile less, each with 1/3 probability
^ exposed respondents to three times the number of values for the continuous variables, better representing the variability found in real-world alternatives
^ thought to also help sustain respondent involvement in the task
^ controlled for the effect on ratings and choices of attributes not included in the study by telling respondents the apartments were very similar to their current residence in every respect not included in the profile description
- 3 tasks performed by respondents:
1) rating
2) calibration choice
3) holdout choice
- 7 models compared (their specifications, goodness of fit, how each predicts choices for arbitrary choice sets, and how their ability to predict holdout choices is assessed)

Results/Conclusions
- both predict holdout shares well with neither ratings-based nor choice-based dominant, though some predict better than others
- the new aggregate model captures departures from independence of irrelevant alternatives (IIA)

Opinion
This paper has been cited 162 times according to Google Scholar. I am still a little confused about how holdout shares are predicted, especially with ratings. I assume it is a "ringer" choice, which is why I'm not sure how it would be used in a rating situation. Sawtooth (http://www.sawtoothsoftware.com/download/techpap/inclhold.pdf) recommends not including a "None" option. If the "None" option is what is meant, the JIMAR study did use "I will not choose either A or B."

In regard to the seafood preference study, this paper has been useful only in explaining the disadvantages of choice-based methods, which is rare to find.

Useful Information
- advantages of choice over traditional ranking/rating for conjoint analysis:
* choice is usually the behavior of ultimate interest, so choice-based methods presumably have an advantage in predicting choice behavior
* designed to study effect of choice set composition on choice, such as departures from independence of irrelevant alternatives (IIA)
* allow direct prediction of choice shares avoiding conjoint simulators which require questionable assumptions to translate predicted ratings into choices
- disadvantages of choice over traditional ranking/rating for conjoint analysis:
* choice data are more amenable to maximum likelihood multinomial logit (MNL) analysis, yet MNL estimates are biased
* for small samples there is a finite probability that the estimates will be infinite (these estimation problems disappear at the aggregate level)
* interpretation of aggregate models is more difficult because they confound the choice process operating at the individual level with heterogeneity in that process across individuals
- ratings data allow model estimation by ordinary least squares (OLS) which yields unbiased estimates of parameters
* individual-level estimation is therefore possible which allows arbitrary heterogeneity in coefficients across respondents
* individual-level estimates are unstable, so the observed variability in estimates will overstate the variability in the true coefficients
* problem of predicting choices from ratings data

Useful References
Hagerty, Michael R. (1986), "Cost of Simplifying Preference Models," Marketing Science, 5 (Fall), 298-319.

Louviere, Jordan J. and Gary J. Gaeth (1988), "A Comparison of Rating and Choice Responses in Conjoint tasks," in Proceedings of the Sawtooth Software Conference. Ketchum, ID: Sawtooth Software, 59-73.

Louviere, Jordan J. (1988) "Conjoint Analysis Modeling of Stated Preferences: A Review of Theory, Methods, Recent Developments and External Validity," Journal of Transport Economics and Policy, 22 (January), 93-119.

Green, Paul E. and V. Srinivasan (1990), "Conjoint Analysis in Marketing: New Developments with Implications for Research and Practice," Journal of Marketing, 54 (October), 3-19.

Bateson, John E.G., David Reibstein, and William Boulding (1987), "Conjoint Analysis Reliability and Validity: A Framework for Future Research," in Review of Marketing, Michael J. Houston, ed. Chicago: American Marketing Association, 451-81.

Reibstein, David, John E.G. Bateson, and William Boulding (1988), "Conjoint Analysis Reliability: Empirical Findings," Marketing Science, 7 (Summer), 271-86.

Leigh, Thomas W., David B. MacKay, and John O. Summers (1984), "Reliability and Validity of Conjoint Analysis and Self-Explicated Weights: A Comparison," Journal of Marketing Research, 21 (November), 456-62.

Green, Paul E., Kristiaan Helsen, and Bruce Shandler (1988), "Conjoint Internal Validity Under Alternative Profile Presentations," Journal of Consumer Research, 15 (December), 392-7.