
SSI Web v3.5.1
Section i: CiW
Section ii: ACA/Web
Section iii: CBC/Web
Sawtooth Software, Inc.
Sequim, WA
Copyright Sawtooth Software 2003
Table of Contents
Section i: CiW
i.1 Getting Started
i.1.1 Getting Started with Web Interviewing
i.1.2 What’s New in V3.5?
i.1.3 Software Requirements: Users, Servers and Respondents
i.2 Creating a Study/Administrative Settings
i.2.1 Creating/Opening a Study
i.2.2 New SSI Web Study
i.2.3 Settings
i.2.4 Advanced Settings
i.2.5 Setting Passwords for On-line Administrative Access
i.3 Global Settings
i.3.1 Global Settings (Format Settings Tab)
i.3.2 Global Settings (Global Font Colors Tab)
i.3.3 Global Settings (Headers and Footers Tab)
i.3.4 Displaying a Progress Bar (Progress Bar Tab)
i.3.5 Global Settings (Simple Browser Tab)
i.3.6 Respondent Answer Verification Error Messages
i.4 Passwords
i.4.1 Creating Passwords
i.4.2 Generating a Series of Passwords Automatically
i.4.3 Copying/Pasting Passwords to/from Other Applications
i.4.4 Generating/Using More than 2,048 Passwords
i.4.5 Import Passwords
i.4.6 A Selection of Prime Numbers
i.5 Write Questionnaire/Question Types
i.5.1 Write Questionnaire
i.5.2 Add Question
i.5.3 Rename Question
i.5.4 Reordering Questions
i.5.5 Composing the Password Screen
i.5.6 Composing Select Questions
i.5.7 Error Text Field
i.5.8 Composing Numeric Questions
i.5.9 Composing Open-end Questions
i.5.10 Composing Text/HTML Filler Questions
i.5.11 Composing Grid Questions (Introduction)
i.5.12 Grid Questions (General Tab)
i.5.13 Grid Questions (Headers and Footers Tab)
i.5.14 Grid Questions (Rows/Columns Tab)
i.5.15 Edit Grid Row/Column Label and Question Settings
i.5.16 Add Grid Row/Column Label
i.5.17 Composing Constant Sum Questions
i.5.18 Constant Sum Advanced Dialog
i.5.19 Composing Ranking Questions
i.5.20 Composing Free Format Questions
i.5.21 SELECT (Combo Box) (Free Format Question Type)
i.5.22 TEXTAREA (Free Format Question Type)
i.5.23 TEXT (Text Box) (Free Format Question)
i.5.24 TEXT (Numeric) (Free Format Question Type)
i.5.25 CHECKBOX (Free Format Question Type)
i.5.26 RADIO (Free Format Question Type)
i.5.27 Hidden (Free Format Variable)
i.5.28 JavaScript Question
i.5.29 Using HTML Elements to Enhance Your Surveys
i.6 Graphics
i.6.1 Including Graphics in Your Questionnaire
i.6.2 Graphics Management
i.7 Questionnaire Flow
i.7.1 Layout Dialog
i.7.2 Randomize Questions
i.7.3 Page Skip Logic
i.7.4 Displaying Responses Given to Previous Questions
i.8 Testing Locally (Local Web)
i.8.1 Testing/Running the Survey Locally
i.8.2 Local Web Settings
i.9 Respondents Connecting to Survey
i.9.1 How Respondents Access Your Survey
i.9.2 “One-Click” Respondent Access to Survey
i.9.3 Linking to Sawtooth Software’s Web Survey from another Survey Program
i.9.4 What Happens at the End of the Survey?
i.10 Uploading Study to Server
i.10.1 Uploading Your Survey to Your Internet Server
i.10.2 Deleting Practice Data Prior to Taking the Study “Live”
i.11 Data Management Module
i.11.1 Logging onto the Online Data Management Module
i.11.2 Study Summary
i.11.3 Passwords Table
i.11.4 Downloading Passwords Report
i.11.5 Accumulate Qualified/Complete Surveys
i.11.6 Accumulate Disqualified and Incomplete Surveys (Read/Modify Access Only)
i.11.7 Delete Disqualified and Incomplete Surveys (Read/Modify Access Only)
i.11.8 View Survey Data
i.11.9 Download Accumulated Surveys (Read-Modify Access Only)
i.11.10 Tabulating Data On-Line
i.11.11 Reset Web Survey (Read/Modify Access Only)
i.12 Exporting Data
i.12.1 Export All Data (Export File Type)
i.12.2 Export All Data (Respondent Filter Tab)
i.12.3 Export All Data (Data to Export)
i.12.4 Exporting Open-End Data
i.12.5 Moving Data from SSI Web to SMRT for Analysis
i.13 Technical Notes and Troubleshooting Aids
i.13.1 How Data Are Stored
i.13.2 Files Created by SSI Web
i.13.3 Error Messages (from Perl scripts)
i.13.4 Trouble-Shooting Server-Side Problems
i.13.5 Web Server Setup for Microsoft IIS Servers
i.13.6 Networked Environments Where Users Cannot Access the Program Files Folder
i.14 HTML Aids
i.14.1 HTML Cheat-Sheet
i.14.2 HTML Editor (Pencil Icon)
i.15 System Variables
i.15.1 System Variables
i.16 CAPI Interviewing
i.16.1 CAPI Interviewing with “SSI Web CAPI”
i.16.2 Accumulate CAPI Data Files
i.16.3 Deleting Sensitive Data from CAPI Interviewing PCs
Section ii: ACA/Web
ii.1 Getting Started
ii.1.1 Getting Started with ACA/Web
ii.1.2 What Is Adaptive Conjoint Analysis?
ii.1.3 Basic Overview of ACA/Web
ii.2 ACA Questionnaire Flow
ii.2.1 Sections and Flow of an ACA Questionnaire
ii.3 Control Parameters
ii.3.1 Setting ACA Control Parameters
ii.4 Is ACA the Appropriate Technique?
ii.4.1 Is ACA the Appropriate Technique?
ii.5 Attributes and Levels
ii.5.1 Specify Attributes and Levels
ii.5.2 Attribute Text
ii.5.3 Level Text
ii.5.4 Specify Prohibited Pairs
ii.6 ACA Questions
ii.6.1 ACA Rating Questions (Question Text Tab)
ii.6.2 ACA Rating Questions (Question Format Tab)
ii.6.3 ACA Rating Anchor Text
ii.6.4 ACA Importance Questions (Question Text Tab)
ii.6.5 ACA Importance Questions (Question Format Tab)
ii.6.6 ACA Importance Anchor Text
ii.6.7 ACA Pairs Questions (Question Text Tab)
ii.6.8 ACA Pairs Questions (Question Format Tab)
ii.6.9 ACA Pairs Anchor Text
ii.6.10 ACA Calibration Questions (Question Text Tab)
ii.6.11 ACA Calibration Questions (Question Format)
ii.6.12 ACA Calibration Settings
ii.7 Utility Estimation
ii.7.1 Calculate ACA Utilities
ii.7.2 Details of ACA/Web Utility Estimation
ii.7.3 How Utility Estimation in ACA/Web Differs from Previous Versions of ACA
ii.8 Data Management
ii.8.1 How ACA/Web Data are Stored
ii.8.2 Prepare Conjoint Data for HB
ii.9 Technical Details Regarding Estimation and Design
ii.9.1 Updating Utilities During the ACA Interview
ii.9.2 Choosing the Next Paired-Comparison Question
ii.9.3 Differences between ACA/Web and ACA 4
ii.10 Data File Formats
ii.10.1 Format of .UTL and .ACD files
ii.11 Real-Time Feedback to Respondents
ii.11.1 Real-Time Feedback of ACA Results to Respondents
ii.12 Analysis
ii.12.1 Analyzing ACA Data with the Market Simulator
(see also section i.12.5 Moving Data from SSI Web to SMRT for Analysis)
Section iii: CBC/Web
iii.1 Getting Started
iii.1.1 Getting Started: What is CBC Analysis?
iii.1.2 Getting Started: Overview of CBC/Web Software
iii.1.3 CBC Questionnaires and Design Strategy
(see section ii.5.1 Specify Attributes and Levels for more information on defining attributes and levels)
iii.2 Experimental Design Considerations
iii.2.1 Attribute Interactions
iii.2.2 Number of Attributes/Levels/Tasks in CBC
iii.2.3 Specifying Fixed or Holdout Tasks
iii.2.4 Generating/Testing the CBC Design
iii.3 Interview Parameters and Display Options
iii.3.1 Specify CBC Interview Parameters (General Tab)
iii.3.2 “Shelf-Facing” Display Tab (Advanced Design Module Only)
iii.3.3 Specify CBC Interview Parameters (Format Tab)
iii.3.4 Specify CBC Interview Parameters (Design Tab)
iii.3.5 Specify CBC Interview Parameters (Advanced Design Module Tab)
iii.3.6 CBC Random and Fixed Questions
iii.3.7 CBC Fixed Choice Task Settings
iii.4 Advanced Designs and Display Elements
iii.4.1 Alternative-Specific Designs
iii.4.2 Partial Profile Designs
iii.4.3 Conditional Pricing
iii.4.4 Conditional Display
iii.5 Data Export
iii.5.1 Prepare CBC Data Files (*.cho & *.att)
(see also section i.12.5 Moving Data from SSI Web to SMRT for Analysis)
iii.6 Paper-and-Pencil Administration
iii.6.1 CBC/Web Paper-and-Pencil Studies
iii.6.2 Accumulate Paper & Pencil Data Step 1 of 4
iii.6.3 Accumulate Paper & Pencil Data Step 2 of 4
iii.6.4 Accumulate Paper & Pencil Data Step 3 of 4
iii.6.5 Accumulate Paper & Pencil Data Step 4 of 4
i.1.1 Getting Started with Web Interviewing
CiW, ACA/Web and CBC/Web are integrated software systems for conducting market research studies
over the Internet, over an intranet, or via CAPI-based (stand-alone PC) interviewing. These software
systems include documentation in electronic format. From almost any dialog/screen within SSI Web,
you can get help about the options within that particular dialog by pressing F1. (A complete copy of
the help documentation with a table of contents is available within the PDF file entitled SSI Web
Documentation.pdf that was installed with your SSI Web system.) We strongly suggest you start by
viewing “Getting Started with SSI Web: A 45-Minute Hands-On Tour”, by clicking Help + Getting Started
Hands-On Tour….
Your license entitles you to free technical support via phone, email or fax. We encourage you to contact
us (360/681-2300) if you cannot find the answer to your problem or question in this documentation. We
are available Mon-Fri, 8:30 am to 5:00 pm Pacific time. (Note: we provide full technical support for
composing and testing your Internet survey on your local PC, and provide written instructions on how to
install your questionnaire on your server. We cannot take responsibility for or be expected to provide
technical support to debug any customized HTML or JavaScript you may add to your survey. You are
also responsible for making sure that Perl is running and that you can run Perl (CGI) scripts on your web
server. We can provide some assistance related to SSI Web’s use with your server once the fundamental
Perl functionality is in place.)
Sample CiW, ACA/Web and CBC/Web studies and supporting files are included with your installation in
the \tutorials directory. You can easily access these example studies by clicking Help | Tutorial
Studies. These questionnaires let you see examples and investigate how they are set up.
If you know a little HTML (which is not necessary to use this software), you can employ some "tricks" that
can help you customize the look of your survey beyond the standard options. This is especially the case
if using the “Free Format” question type. By opening the sample studies and looking at the settings and
text we've used, you can become familiar with some of the possibilities. There is also a brief section
available in the on-line help regarding HTML elements.
Using this web-based survey tool requires the following skills and knowledge base:
1) The analyst managing the survey should know the basics of survey writing. If using conjoint
analysis, the analyst should understand how to define attributes and levels properly, correctly
phrase the questionnaire text, interpret the conjoint utilities, and run simulations. It is not
necessary to be a statistician to use our software effectively, but knowledge of basic statistics for
market research is highly recommended.
2) The questionnaire is first composed locally on a PC running under Windows 95 or later with
Internet Explorer, Netscape or another Web browser installed. The individual inputting/formatting
the survey should have working knowledge of standard Windows applications such as word
processing and web browsers. Though it is not necessary, some knowledge of HTML is valuable.
The survey can be tested/run locally on your PC using the Local Web Server installed with SSI
Web. To post your survey on the Web, you’ll eventually upload your survey to a Web server.
3) To upload the survey to the Web, you must have access to an Internet/intranet server running
Perl 5.003 or later. This can be your company's own Web server, or your ISP's (Internet Service
Provider's) Web server. You must know how to use FTP software to create directories, upload
files to those directories, and set permissions on the files and directories. Detailed instructions on
the directories, files and permissions to set are provided in the on-line help, in the section entitled
Uploading Your Survey to the Web Server.
Note: Sawtooth Software also provides web hosting services for licensed SSI Web customers.
Please call us for more information.
Steps to Creating a Web Survey
1. It is likely that you will first compose the script to your questionnaire using a word processing package.
This is fine as a starting point, as you can cut-and-paste text from your word processing package into the
questions you develop one-by-one within SSI Web’s survey building interface.
2. Create a new study in SSI Web. A study is identified by a four-character (or shorter) name by which all
files related to your project are referenced.
3. Next, you write your questionnaire, adding questions one-by-one within the Write Questionnaire
interface. You can preview these questions using the Preview function, which shows how an Internet
browser presents your questions.
4. Create a respondent passwords table, to provide a way to let qualified respondents in and keep
unqualified people out of the survey.
5. Preview the questions, questionnaire pages, or the entire questionnaire on your PC during
questionnaire development. You can also test or run your questionnaire locally on your PC using the
Local Web Server installed with SSI Web.
6. When you are satisfied with your questionnaire, upload it, along with the supporting Perl files, to the
Web server. You then test your survey on the Web server, to make sure it functions properly and saves
the data correctly.
7. When you are satisfied that your survey looks and functions properly, invite respondents to take the
survey. Their data are automatically stored on the server. You can view or perform tabulations of the
data in real time at any point during data collection. (Note: performing administrative functions while a
survey is running live can affect server performance.)
8. Download the data to your PC and use the data Export capabilities within SSI Web to convert the
survey data to a generic ASCII format, with a supporting layout file that tells you which columns of data
correspond to different questions in your survey. You can then import the data into a data tabulation
program of your choice, or into a spreadsheet for analysis.
9. If running a conjoint analysis project (e.g. ACA or CBC), use the menu system to prepare the
appropriate conjoint files for analysis within the SMRT system.
Privacy and Research
There is a great deal of concern regarding privacy rights and the Internet. Researchers must be
respectful of respondents and the valuable information they provide. One should never use information
collected during the course of research for purposes other than research. We suggest providing a privacy
policy near the front of your survey, to let respondents know how the data they provide will or won’t be
used.
We condemn any direct marketing, selling, list building, or PR campaigns that masquerade as research.
i.1.2 What’s New in V3.5?
What’s New in V3.5?
General SSI Web Improvements:
1. We have added new capabilities for CAPI data collection. CAPI data collection lets you easily set
up Web studies to run on stand-alone PCs or laptops for data collection (that are not connected
to the web). We hope this new functionality makes SSI Web even more useful than before by
allowing it to be used in more traditional interviewing environments.
CBC/Web Improvements
1. For the Advanced Design Module, we have increased level capacity per attribute from 15 to 100,
and have increased the number of concepts that can be displayed per task from 16 to 100.
These changes are particularly useful for packaged goods and beverage research, where it is
common to have dozens of brands or SKUs.
2. Conditional Pricing and Conditional Display capabilities. Users can now specify conditional
pricing tables which enable them to create customized price ranges for certain brands (for
example), or combinations of attributes (such as brand/package size combinations). Prices can
be made to vary (using conditional “look up” tables) on the level combinations of up to three other
attributes. Conditional Display lets users display a graphic (or other HTML element) in place of
(or in addition to) an attribute’s regular level text. Conditional display can vary these conditional
elements based on the level combinations of up to four other attributes.
3. Ability to suppress concept randomization. Some researchers have wanted their CBC tasks to
always show a particular brand in a certain concept position on the screen. The default behavior
in CBC is to randomize the position of levels on the screen. We’ve added the capability to
suppress that randomization and show the concepts in a specific order (the natural order) for a
user-specified attribute, such as brand. Thus, brand 1 will always display in the first concept
position, brand 2 in the second position, etc. Another new feature randomizes the concept
position once for each respondent (for a user-specified attribute), but holds that random order
constant across all tasks for that respondent. This helps control order bias across respondents
without making respondents re-orient themselves to the new positions of, say, brands on the
screen within their respective surveys.
4. Shelf-facing display presentation. We’ve added a new capability to the Advanced Design Module
for showing realistic shelf-facing displays, like a shopper might see in a grocery store.
5. Paper-and-Pencil Data Collection. Users can now use CBC/Web software for implementing
paper-and-pencil CBC studies. The menu items are nearly identical to those in CBC for Windows
(SMRT). Also, we’ve changed the naming convention from “beginning questionnaire version” to
“design seed” (though the functionality is the same). In CBC for Windows (SMRT) the beginning
questionnaire version really was a design seed, and we’ve clarified that terminology both in SSI
Web v3.5 and in the newest version of SMRT to reflect that. Now, questionnaire versions always
start with #1, but the designs can vary depending on the design seed.
What’s New in V3.2?
In case you missed upgrading to v3.2, here is the list of changes for that version:
With the release of version 3.2, our Web-interviewing software has many improvements, and a few
changes that previous users should review.
Important Changes:
1. The list of files to be uploaded to the server is slightly different from before (Note: auth-lib.pl is
replaced with authlib.pl, and cgi-lib.pl replaced with lite.pm). See our instructions for setting up
the survey on the server for more information.
2. Server setup for IIS (Microsoft) is now much easier than in the previous version. The default
server setup in our documentation should work also for IIS. If you choose an alternate setup,
please note that with v3.2 a) paths are from the actual location of Perl scripts (not virtual
directory), b) path.cgi may need to be uploaded to cgi-bin. For documentation on how to set up a
dedicated IIS Server please refer to
http://www.sawtoothsoftware.com/support/ssiweb/310/IISsetup.html.
3. If upgrading from a version prior to v3, please note that system variables such as
%graphics_path% need to include one additional % symbol, as in %%graphics_path%. You’ll
need to update older questionnaires to reflect this standard if you want to work with them in v3.x;
otherwise, your graphics path will not work properly.
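For example, an image reference from an older questionnaire (logo.gif is a hypothetical file name)
would be updated as follows:

    Old (v2): <IMG SRC="%graphics_path%logo.gif" BORDER="0">
    New (v3): <IMG SRC="%%graphics_path%logo.gif" BORDER="0">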
New Features:
The items listed below represent incremental improvements over v2. Items 12 through 16 in the section
immediately below, and item 5 in the ACA/Web section represent improvements moving from 3.0 to 3.2.
1. Data export to SPSS(TM) .SAV files and Excel(TM)-friendly .CSV files.
2. New Grid, Constant-Sum, and Ranking questions (see www.sawtoothsoftware.com/grid for
examples).
3. Better formatting options for CBC/Web and ACA/Web surveys.
4. Cut-and-Paste functionality within and across projects for questions, conjoint attributes, and lists
of response items.
5. Ability to assign disqualified versus complete status (along with other status codes) at each
"terminating question" within the survey.
6. An option called “Simple Browser” is now available for disabling the respondent's browser
controls. This prohibits the respondent from using the back button, and maximizes the screen
real estate used for your questionnaire.
7. Hidden variable support in Free Format.
8. Ability to start the progress bar only after respondents reach a certain question. Ability to change
the colors used in the progress bar.
9. File + Save As for renaming/making a copy of your entire project.
10. Preview mode now implements question and question item randomization.
11. You can set the beginning and ending progress bar positions (e.g. begin at 35%, end at 82%) for
the current questionnaire. This makes it possible to link multiple SSI Web surveys or to link from
another survey package into SSI Web surveys with a correct and continuous progress bar across
the modules.
12. You can now add new passwords while data collection is in progress on the server without
causing problems (such as resetting password quotas and losing track of which passwords have
already been used). The new approach stores the information regarding who has completed the
survey in a new folder that SSI Web automatically creates called “XXXXpass” inside the admin
directory, where XXXX is the short study name.
13. Export data can now use any variable in the survey as the case identifier. This is useful, for
example, when exporting conjoint data and matching the records to another database that uses a
case identifier that was “passed into” the data through a linked survey.
14. We have created a new “hands-on tour” tutorial that can be accessed through the Help menu.
15. Open end text fields are no longer limited to just 999 characters. The number of characters that
can be accepted per open-end question can be as high as 10,000.
16. We’ve added a simple bar-graph display within the Online Data Management Module’s marginals
program for displaying the frequency of responses to items within questions.
Specific Improvements Affecting ACA/Web Include:
1. Ability to split Calibration Concepts over multiple pages.
2. Improved formatting options for ACA questions.
3. Ability to re-order attributes in the attribute list.
4. Availability of FastPace pairs selection algorithm (by request only). FastPace is a technique
developed by leading researchers at MIT. Current research suggests that it is as good as regular
ACA, and may be better in some situations.
5. We have modified the way we calculate utilities for levels not taken forward to pairs. The change
makes the final resulting importances more logically consistent with respect to prior information.
The new approach more closely matches the method in ACA/HB software for constraining
importances to be consistent with prior ratings.
Specific Improvements Affecting CBC/Web Include:
1. Ability to place attribute labels at the left-side of the choice task.
2. Randomized attribute order synchronized across random and fixed choice tasks.
3. Ability to re-order attributes in the attribute list.
i.1.3 Software Requirements: Users, Servers and Respondents
For the User (Survey Developer):
PC running under Windows 95 or later
Internet Explorer 3.02 or later (**suggested but not required)
Pentium-class processor
16 Mbytes RAM
The Internet/Intranet Server:
Must support Perl version 5.003 or later (capable of running CGI scripts)
Must permit “write” and “delete” operations so that our programs can write and delete files on the
server. (If your ISP doesn’t support these capabilities, we can recommend an ISP that does, or
we can host your surveys for a fee.)
For the Respondent:
Respondents can use any PC, Mac or Internet-enabled terminal. Since the surveys use only
HTML pages (no cookies or Java applets are used), any browser that interprets standard HTML
will work. Browsers that don’t support JavaScript 1.2 or later use the Perl error-checking
method. If you want to turn off JavaScript verification altogether, you can do so from the
Write Questionnaire | Global Settings menu.
**You can use another browser when previewing your survey screens, but you must first turn off the
automatic link to Explorer. Do that by choosing File | Settings and un-checking the Preview Link with
Microsoft Internet Explorer. Once you have done that, open your browser in another window, point your
browser to TempWeb.htm (or TempWeb1.htm if viewing the full survey) within your study directory. You
need to switch back and forth (Alt + Tab) between our web survey software and your browser after
previewing the page(s).
To run the Local Web Server installed with SSI Web using a browser other than Internet Explorer,
open the browser and enter in the address field: http://localhost/weblink.htm (Make sure you click
Upload within the Local Web menu area of SSI Web.)
i.2.1 Creating/Opening a Study
A “Study” is a collection of files pertaining to a particular research project. We suggest that you place
your studies in separate directories (folders) on your local drive. For example, you can create
subdirectories for each study within the studies directory. You should not place your study directory on a
shared network drive; it must be a local installation to ensure that SSI Web functions properly.
To create a new study:
Click File | New Study, browse to the desired directory (or click the Create a New Folder icon)
and provide a studyname consisting of four or fewer characters. A STUDYNAME.ssi file is saved
within the target directory and the study is opened.
To open an existing study:
Click File | Open Study, and if necessary browse to the folder containing the study. Then click
on the STUDYNAME.ssi file to open a study previously created. You may also select a study
name from the list of most recently opened studies, available under the File menu, or when
browsing folders double-click a STUDYNAME.ssi file.
To copy the current study to a new path or name:
Open an existing study. Click File | Save Study As.... Browse to the target folder, and type a
study name that is no more than 4 characters. SSI Web automatically supplies a .ssi extension to
the filename and saves all associated study files (including passwords, graphics directory, and
data) using that naming convention to the target folder.
i.2.2 New SSI Web Study
This dialog lets you specify a studyname and a folder in which to store all study-related files.
Studyname
A studyname must have no more than four letters or numbers. All study-specific files (such as your
questionnaire, your data file, etc.) use the studyname as their prefix. For example, if your study is named
TEST, your questionnaire file is named TESTqst.cgi and your data file is named TESTdat.dat.
Location
This field lets you specify a folder (directory) in which to place your study-related files. We strongly
encourage you to put each study in a separate folder. To create a new folder, click the “...” button, and
within the browse dialog that appears, click the “New Folder” icon.
You should not place your study directory on a shared network drive; it must be a local installation to
ensure that SSI Web functions properly.
Survey Type
Lets you specify what kind of study to create:
CiW only (no ACA or CBC section)
ACA and CiW (Adaptive Conjoint Analysis with additional CiW-style questions)
CBC and CiW (Choice-Based Conjoint with additional CiW-style questions)
If after creating a new study you change your mind about the survey type, you can modify it under File |
Settings.
Note: you may potentially lose some conjoint-specific question settings when changing from an ACA
study to a CBC study, or vice-versa. This only occurs when those previous settings are irrelevant to the
new conjoint study type.
i.2.3 Settings
(File | Settings)
Study Name
Displays your current study name. If you change this name, all of your study files for this study
are renamed to the new name.
Survey Type
Lets you view or change the survey type for the current study:
CiW only (no ACA or CBC section)
ACA and CiW (Adaptive Conjoint Analysis with additional CiW-style questions)
CBC and CiW (Choice-Based Conjoint with additional CiW-style questions)
You can change the survey type of the existing study using this drop-down box. Depending on your
study, you may lose some conjoint-specific settings when changing from one conjoint study type to
another.
Administrative Access
This area is used to specify the passwords you use to access the Online Data Management Module.
Default random user names and passwords are provided for your convenience (new random names and
passwords are created for each new study), but you can change these if you like.
User Name
Lets you provide a name with up to 12 characters.
Read-Only Password
Lets you provide a password with up to 12 characters. Read-only passwords convey limited
administrative rights (e.g. you cannot download the data, delete or accumulate
disqualified/incomplete data, edit the passwords file, or reset the survey.)
Read-Modify Password
Lets you provide a password with up to 12 characters. Read-modify passwords convey full
administrative rights.
URL Pathname for Email Surveys
If you plan to use an Email message to recruit respondents and point them to your survey via a
“delayed Web connection”, you should specify the complete URL for your study directory. If you
do not know whether to use this option, you can leave it blank for now.
For example, if the study directory on your server is www.yourhomepage.com/survey, specify:
http://www.yourhomepage.com/survey in the URL Pathname for Email Surveys field.
Preview Link with Microsoft Internet Explorer
By default, SSI Web assumes you have Internet Explorer installed. If not, you should uncheck
this box. You are still able to preview your questionnaire on your local machine using another
browser. Open the TempWeb.htm file in another window with the browser to preview a question.
Every time you click the “preview” button, the TempWeb.htm file is updated with the new
information. You may need to click your browser’s “reload” button to refresh the page and display
the most recent contents.
If using another browser and previewing the whole survey, point your browser to TempWeb1.htm and
then you can preview the survey by clicking the Next button on each page. If you want to use another
browser while using the Local Web Server, follow the same steps as if using Internet Explorer. After you
click Upload Study within the Local Web Server dialog, go to your browser and enter:
To access the survey:
http://localhost/weblink.htm
To access the administrative module:
http://localhost/studydirectory/STUDYNAMEadmn.htm
where studydirectory is the name of the directory specified on the Local Web Server dialog
under Web Server | Local Web Server.
i.2.4 Advanced Settings
Server Directory Paths
Sometimes you may find that your ISP doesn’t let you set up directories the way we’ve outlined. Or, you
may simply have other preferences regarding your study’s architecture on your server. You can change
the names and locations of the directories using the Advanced button under File | Settings.
When you click the Advanced button under File | Settings, the Advanced Settings dialog is displayed.
This dialog lets you specify relative paths for the Administrative, CGI Scripts and Data directories.
The five paths are as follows:
Administrative Directory:
Path from Perl scripts to the /admin directory
CGI Scripts Directory:
Path from STUDYNAMElogn.htm and STUDYNAMEadmn.htm to the
directory that contains ciwweb.pl and admin.pl
Data Directory:
Path from Perl scripts to the /data directory
Relative Path from CGI Scripts to Graphics:
Path from Perl scripts to the /graphics directory
Relative Path from logn.htm to Graphics:
Path from STUDYNAMElogn.htm to the /graphics directory
When you use the Add Image icon to add graphics to your study, your <IMG> tags then contain the
variable %%graphics_path% in their paths. %%graphics_path% will be replaced on the web server with
the path that you provided here in the Advanced Settings area. For example, if you set your graphics path
to "../MyGraphics/" and you included <IMG SRC="%%graphics_path%picture.jpg" BORDER="0"> in your
questionnaire, then on the web server that path would dynamically change to <IMG
SRC="../MyGraphics/picture.jpg" BORDER="0">.
If you rename the directories that the files are stored in on your server or use absolute file references, put
the new STUDYNAMEpath.cgi file in your CGI Scripts directory so that the Perl scripts can find your
study.
If you can only run Perl scripts from one directory and you want to have multiple studies running at the
same time, the paths in the Advanced Settings for each study must be set correctly, and each
study’s STUDYNAMEpath.cgi file must be placed in the cgi-bin directory. For example, if two studies “T1”
and “T2” share the same Perl files, then the cgi-bin directory would need to contain the T1path.cgi and
T2path.cgi files.
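As an illustration, a server hosting these two studies might be organized as follows (all directory
names other than cgi-bin are hypothetical):

    cgi-bin/    (shared Perl scripts: ciwweb.pl, admin.pl, T1path.cgi, T2path.cgi)
    T1/         (study directory for study T1)
    T2/         (study directory for study T2)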
Password File Size
You are asked how many passwords to place in each password file. The available multiples are 1024,
2048, 4096 and 8192. (Files with these numbers of rows, organized as binary trees, can be traversed
efficiently. The total number of records need not be exactly divisible by these multiples: any remainder
rows are placed in the final password file.)
You can achieve maximum efficiency by placing 1024 passwords in each password table file. For
example, if you issue 102,400 unique passwords, there would be 100 password table files (...p001,
...p002, etc.). Of course, the more password table files there are, the more work is involved for the user
in uploading those files to the server. The default is 8,192.
See the section Generating/Using More than 2,048 Passwords for more information about fielding
surveys that have potentially thousands of unique passwords.
i.2.5 Setting Passwords for On-line Administrative Access
SSI Web data collection software includes an on-line study management module. You should assign
both a User Name and Password to limit access to your study parameters and data. Two levels of
access are available within the on-line data management module: Read Only and Read-Modify.
You may want to give only Read Only access to your client or other individuals who need to monitor the
progress of the study, while retaining Read-Modify access for yourself.
To Specify Administrative User Name and Passwords
1. Click File | Settings. The Settings dialog is displayed.
2. Type the User Name and Passwords into the spaces provided.
Default random user names and passwords are provided for your convenience (new random names and
passwords are created for each new study), but you can change these if you like. Up to 12 letters or
numbers may be used for each field. The user name and passwords are case sensitive. For example, if
the User Name is AdminJudy, the administrator cannot type adminjudy or ADMINJUDY to access the
data on-line.
Read Only vs Read/Write Administrative Rights
Read Only administrative rights let the user view the Study Summary, View the Passwords Table(s),
Accumulate Qualified/Complete Surveys, View Survey Data, and run Marginals.
Read/Write administrative rights permit all the above operations, plus Reset a Survey, Modify the
Passwords Table(s), Accumulate Disqualified/Incomplete Surveys, Delete Disqualified/Incomplete
Surveys, and Download Accumulated Surveys.
i.3.1 Global Settings
(Format Settings Tab)
The Global Settings dialog (accessed under Compose | Write Questionnaire | Global Settings...) lets
you specify settings that apply to all questions within your questionnaire. It has three tabs: Format
Settings, Global Font Colors, and Headers and Footers.
Background Color
You can control the background color for your survey by checking Specify a Background Color. Click the
Change... button to access the “color picker.” You can also specify customized colors by typing the
6-character hexadecimal “red-green-blue” value in the form RRGGBB directly into the color field. For
example, “00ff00” sets the color to green. The default background color takes on whatever default is set
for the browser viewing the document (usually white).
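A few standard RRGGBB values (these are common hexadecimal color codes, not specific to SSI Web):

    000000  black      ffffff  white
    ff0000  red        00ff00  green
    0000ff  blue       808080  gray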
Background Graphic
An attractive background (wallpaper) graphic can greatly improve the look of your questionnaire. You can
create a background graphic based on a corporate logo, or choose from a number of backgrounds
available for downloading from various sites on the Internet.
When creating your own graphic, choose a .GIF or .JPG format to ensure compatibility across different
browsers. We suggest subtle backgrounds that enhance the look of the page, without making the
foreground text difficult to read.
To choose a background graphic, use the ... button to browse to and select the graphic. It is important
that you browse to the directory and select the file so that the graphic can be managed properly for
Preview, testing in Local Server, and for proper pathing when eventually uploading to a remote server.
For more information about how SSI Web manages graphics, please see Including Graphics in Your
Questionnaire .
The background graphic takes precedence over background colors. The background color may appear
briefly until the graphic file has been loaded by the browser.
Global Font
You can specify the font face used for text in the survey. Type a font face, such as Times or Arial. Do
not put quotes around the font face. The font face doesn’t apply to text in response fields for the
open-end, numeric or combo box question types. It is best to use a standard font, since respondents
may not have the particular font you want installed on their machines. If the font you specify is
not available on the respondent’s machine, the browser will choose a default font. If you specify multiple
fonts separated by commas (for example, Arial, Helvetica), the browser tries the first font, then the
second if the first is not found on the respondent’s machine, and so on.
Resources
Many attractive background graphics are available for *downloading from sites on the Internet.
(Right-click a graphic to download to your hard drive). Try the following site developed by
Netscape Communications Corporation for assisting web developers:
http://www.netscape.com/assist/net_sites/bg/backgrounds.html
*Make sure not to use copyrighted graphics.
Between Each Question
Blank Lines
If you plan to group more than one question on a survey page, you should specify how much
space separates them. Specify the number of blank lines (between 0 and 9) in the Blank Lines
Between Questions field.
Horizontal Line
Check the Horizontal Line Between Questions box to place a horizontal line between questions
on a survey page. The horizontal line extends the full width of the display window of the Internet
browser.
Submit Button
Button Text
Respondents click the Submit button at the bottom of the password page and other pages of the
survey to send data to the server. You customize the text that appears on this button by typing
text directly into the Submit Button Text field. This is helpful when interviewing in foreign
languages.
Button Justification
Specify whether the Submit button is left justified, center justified, or tabbed on the page, by
selecting a Submit Button Justification option.
General Indent
Most question types feature an indented section. You control the amount of indent by specifying a
number (representing number of pixels) in the Indent Width field. Check the effect of different indent
widths by adjusting the pixel width and previewing your questionnaire, or particular questions.
Numeric Formatting
If you are including numeric questions in your survey and have specified that decimal answers are
permitted for those questions, you need to specify the character to be reserved for the decimal point. In
the US, a period (default) is used. Other countries may use a comma. If a period is selected, only
numbers and a single period are acceptable answers. For example, the number “one thousand one and
one half” must be specified as: “1001.5”, not “1,001.5”. If other characters are used, a warning message
appears.
If you specify that a comma or period is to be used as the decimal point character, both periods AND
commas are acceptable as decimal points. In other words, the number “one thousand one and one half”
can be specified as: “1001.5” or “1001,5”, but not “1,001.5” or “1.001,5”. (Note: the Marginals program
within the Online Data Management Module truncates all decimal answers, treating them as integers,
prior to computing frequencies, min, max, and means.)
Question Verification
Any browser that interprets standard HTML can be used to take surveys. More recent browsers support
JavaScript 1.2. You can reduce the processing effort for your Web server by letting JavaScript, running
locally on your respondents’ computers, perform response verification (making sure required questions
are answered, numeric values fall within prescribed ranges, etc.).
If you specify JavaScript verification and the respondent’s browser does not support or does not have
JavaScript 1.2 or later enabled, SSI Web will use the Perl error checking script built into SSI Web instead.
Under Perl verification, responses to the survey page are submitted to the server, the Perl scripts perform
the response verification, and any error messages are sent within the HTML code back to the respondent.
We generally recommend you check the Use JavaScript 1.2 box. If you want to turn off JavaScript
verification altogether, you can un-check it, although in most cases there is no reason why you would
want JavaScript to be off.
Sawtooth Software Logo
If you don’t wish to display the Sawtooth Software notification on the last page of your survey, un-check
this box. You must include a Text/HTML Filler question as the last question of your survey and mark it as
a terminating question for the Sawtooth Software notification to be removed.
Error Messages
This button lets you specify the messages that respondents see if certain errors or conditions occur as
they respond to the survey.
i.3.2 Global Settings
(Global Font Colors Tab)
This tab lets you control the default colors used for text and hyperlinks throughout the questionnaire.
Note that while composing questions within the SSI Web system, one of the colors that can be selected is
the “default” color. The default color used is the one specified on this screen. Using the default color
makes it much easier to change the text color used throughout the questionnaire later.
Default Text Color
Check this box to specify a default text color to use throughout the questionnaire. If you do not check this
box and if your question specifies to use the “default” color, the color used will be the one specified as the
default by the respondent’s browser.
Link Color
Check this box to specify a default color to use throughout the questionnaire for any hyperlink (underlined
“clickable” words). If you do not check this box, the color used will be the one specified as the default by
the respondent’s browser.
Visited Link Color
Check this box to specify a default color to use throughout the questionnaire for any hyperlink leading to a
page that has already been visited within a certain period of time as defined by the respondent’s browser.
If you do not check this box, the color used will be the one specified as the default by the respondent’s
browser.
Link Color When Clicked
Check this box to specify a default color to use throughout the questionnaire for any hyperlink that is
clicked. If you do not check this box, the color used will be the one specified as the default by the
respondent’s browser.
Error Messages
This button lets you specify the messages that respondents see if certain errors or conditions occur as
they respond to the survey.
i.3.3 Global Settings
(Headers and Footers Tab)
Page Header and Footer
This section allows you to specify a general header and footer that appears on every page of your survey.
Headers and footers can contain text, HTML formatting commands, and graphics. A Preview button lets
you see how your Web browser displays the header and footer you provide.
A common use of headers and footers is to print some company information, logos, or a phone number to
call for help if respondents have questions about the survey.
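For example, a page footer such as the following (logo.gif and the phone number are hypothetical)
displays a logo and a help line on every page:

    <CENTER><IMG SRC="%%graphics_path%logo.gif" BORDER="0"><BR>
    Questions about this survey? Call 555-0123.</CENTER>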
<Head Tag>
This area lets you specify additional information to include between the <HEAD> and </HEAD> tags.
Error Messages
This button lets you specify the messages that respondents see if certain errors or conditions occur as
they respond to the survey.
i.3.4 Displaying a Progress Bar
(Progress Bar Tab)
Sometimes it is useful to display a progress bar at either the top or bottom of each page to let
respondents know how far they have come in a survey and approximately how far they have to go.
Show Progress Bar
You can automatically include a progress bar in the Page Footer or Header by checking this box and
selecting the position. You can exert additional control over the positioning of the progress bar by placing
it manually within the global header or footer (see below).
Manually Including a Progress Bar
To place a progress bar in the questionnaire, simply type %%Progress_Bar% (case sensitive) within the
text boxes of the Global Settings dialog box under Headers and Footers tab. To start a progress bar only
after the respondent has reached a certain question, specify %%Progress_Bar(questionname)%. The
progress bar won’t appear on the login page.
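For example, typing the following in the global footer (Q10 is a hypothetical question name) displays a
progress bar that starts only after respondents reach question Q10:

    %%Progress_Bar(Q10)%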
Progress Bar Settings
Alignment: Choose from left, center, or right.
Width (%): Specifies the width of the progress bar, as a percentage of the total width of the
browser window.
Beginning Value (%): Specifies the beginning value for the progress bar. This is useful if you are
linking multiple SSI Web surveys together and need to specify a certain beginning value.
Ending Value (%): Specifies the ending value for the progress bar. This is useful if you are linking
multiple SSI Web surveys together and need to specify a certain ending value.
Background Color, Bar Color, Bar Background Color and Text Color
These controls let you customize the look of your progress bar. Click the Change... button to access the
“color picker.” You can also specify customized colors by typing the 6-character hexadecimal
“red-green-blue” value in the form RRGGBB directly into the color field. It is easiest to try a new setting and click the
Preview button to immediately see the result.
i.3.5 Global Settings
(Simple Browser Tab)
This tab lets you specify that SSI Web should remove the regular toolbar and buttons used by the
respondent’s browser and show the survey within a “simple browser.” This is useful for maximizing the
window real estate for use in presenting your survey, and for removing buttons such as the “back” button
from many common browsers.
To see how the simple browser operates, you need to test your survey in Local Web Server or on your
remote server. Preview mode doesn’t support simple browser.
Notes: The respondent’s browser must support JavaScript to enable simple browser. If the respondent
has disabled JavaScript processing, the survey will appear in normal mode. We recommend you test
simple browser using different browsers, as the presentation can vary. If you use simple browser and the
respondent has enabled JavaScript processing, SSI Web prevents the respondent from backing up, even
if the browser still displays a toolbar that includes the “back” button (e.g. AOL).
Close Window:
Because the simple browser can remove the toolbar and icons the respondent is accustomed to seeing, it
can be helpful to provide an easy way for them to close the simple browser window once they’ve finished
the survey. If you provide text in the Close Window field, it becomes a clickable hyperlink at the upper
right-hand corner of the survey that closes the window. You may include a graphic instead of text. If the
Close Window field is empty, no link will display for closing the window.
Use Full Screen Window:
This setting maximizes the browser window for most browsers, and prohibits the respondent from resizing
the window. If you do not specify to use full screen, then you must tell the browser the width and height
(in pixels) to use.
Simple Browser Width (pixels):
If you choose not to use full screen window, this lets you specify the width of the simple browser window.
Simple Browser Height (pixels):
If you choose not to use full screen window, this lets you specify the height of the simple browser window.
Allow Respondent to Resize Window:
If you do not specify a full screen window, then you can optionally permit respondents to resize the
window.
i.3.6 Respondent Answer Verification Error Messages
The Error Message button on the Global Settings dialog lets you customize the error messages shown
to respondents during the course of an interview. For example, when respondents specify an invalid
response, or skip a required question, error messages prompt them to remedy the error. This area is also
useful for customizing the messages for foreign languages.
When respondents do not complete a questionnaire page properly and click the Submit button, warning
messages can appear to tell them about the error and how to fix it. If you have checked Use JavaScript
1.2 within the Global Settings dialog (default), a Windows-style dialog box appears with the error
message (this happens instantaneously without submitting information to the server). If you have not
checked the Use JavaScript 1.2 button (or on older browsers not supporting JavaScript 1.2), the
information is submitted to the server. The Perl scripts perform the verification and the server returns the
not-yet-completed page with the error message displayed at the top of the survey page. Disabling
JavaScript verification requires the Web server to process more pages and generally work harder. This
can result in slower performance for your respondents.
Customizing Error Messages
When you click the Error Messages button, the Error Messages screen is displayed.
You can choose the color, styles, size and justification for the error messages (you can highlight multiple
error messages by holding down the Shift or Ctrl keys and clicking with the mouse to change the styles
for multiple messages at once). Default text is provided, which you can customize for the purposes of your
survey. The default text provided refers respondents to a certain question number on the current page.
However, you can also customize your text to refer to specific question names (Note: if you refer
respondents to specific question names, you should include the question name as part of the header text
of that question).
The following error type classes are available:
Missing Answers
Reminds respondents to complete a question before continuing.
Minimum Checks
Tells respondents that the minimum number of checks required in a multiple-response select
question (check box) has not been satisfied.
Maximum Checks
Tells respondents that the maximum number of checks permitted in a multiple-response select
question (check box) has been exceeded.
Out of Range (Numeric)
Lets respondents know that the value they specified for a numeric question is outside the
specified range. You specify the minimum and maximum range when you compose numeric
questions using the Numeric Question screen.
Invalid Input (Numeric)
This error message appears if respondents type an invalid character into a numeric entry box.
Invalid characters are values other than numbers or the period and/or comma permitted for
decimal answers. You may want to expand the error message text to explain these other
conditions to respondents.
Minimum Characters (Openend Questions)
Informs respondents that they haven’t typed enough characters in an open-end question. You
can specify a minimum number of characters when you compose open-end questions using the
Open-end Question screen.
Maximum Characters (Openend Questions)
Informs respondents that they have typed too many characters in an open-end question. You
can specify a maximum number of characters when you compose open-end questions using the
Open-end Question screen.
Quota Full
Appears when a valid password combination is used, but the password quota cell is full (e.g. the
username/password combination has already been used.)
Invalid Login
Lets respondents know that the password combination they’ve specified is invalid.
Rank
Informs respondents that the answers to a ranking question are not valid.
Constant Sum
Informs respondents that the responses to a constant sum question do not total the valid amount.
Submit Only Once
If respondents click the submit button multiple times, this message asks them to wait for a
moment while their responses are being processed. This message only appears if JavaScript 1.2
is enabled for the respondent’s browser.
Error Message Arguments
The text of the error messages includes arguments (a % symbol followed by a variable label) that are
used to display numeric values or text strings relevant to each error in the error message. The available
arguments are:
%field
Specifies the field (question) number on the most recent interview page, where the first question
on the page is question number 1.
%min
Specifies the minimum value for a numeric question, or the minimum number of characters for an
open-end question.
%max
Specifies the maximum value for a numeric question, or the maximum number of characters for
an open-end question.
%questionname
Returns the question name for the question on the current page causing the error. Use this
argument if you prefer to refer respondents to specific question names rather than a question
sequence number on the current page. (Note: if you refer respondents to specific question
names, you should include the question name as part of the text of that question).
%errortext
Returns the error text provided under the Advanced... button for questions, or in the error text
field for Free Format questions. If no text is provided in the error text field, the %errortext
argument is replaced by nothing (i.e., appears blank) when the survey runs on a server.
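For instance, the Out of Range (Numeric) message could be customized as follows using these
arguments (the wording is illustrative, not the shipped default text):

    The answer to question %field must be a number between %min and %max.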
Preview
Lets you see how the Internet browser displays the error messages (assuming Perl verification, not
JavaScript verification). The error messages are all displayed on the same screen, separated by
horizontal lines. (Respondents only see one error message at a time, but we’ve consolidated them onto a
single screen for preview mode only.)
Cancel
Exits the Error Messages screen without saving changes.
OK
Saves any changes and returns to the Global Settings dialog.
The actual error messages that a respondent sees when taking the questionnaire (with numeric values
substituted for the string arguments above) can only be viewed when your survey is running on the Local
Web server or on a remote web server.
i.4.1 Creating Passwords
The Need for Security
Posting your questionnaire on the Internet gives anyone connected to the Internet potential access to
your survey. The imposter/repeater problem is one of the most oft-cited criticisms of on-line survey
research. You can control this problem by assigning passwords and quotas.
You can assign up to two passwords (a User Name and a Password) and quotas to control how many
completed interviews to permit for each password/user name combination. A quota of 1 can prevent
“ballot stuffing” by disabling a password combination (password + user name) once it results in a
complete interview. It also lets respondents interrupt surveys part-way through (without losing the data)
and restart them at a later time using their unique password combination. SSI Web does this all without
the use of Cookies (a file saved to the respondent’s hard drive that stores information related to their
browsing activities.)
Providing respondents with passwords permits them to access just their survey; it does not give them
access to other aspects of your Web server.
The Password Grid
From the Compose menu, click Set Passwords. The password grid dialog is displayed.
User Name
User Name can be any combination of up to 12 letters or numbers. Check the User Names box if you
want to require respondents to enter a User Name to access the survey. If this box is checked,
respondents must supply a valid user name (as listed in the User Name column of the password grid) to
start the survey.
Password
Passwords can be any combination of up to 8 letters or numbers. Check the Passwords box if you want
to require respondents to enter a Password to access the survey. If this box is checked, respondents
must supply a valid password (as listed in the Password column of the password grid) to start the survey.
You don’t need to require respondents to specify both a User Name and a Password to start a
survey. You can require one, both or neither piece of information. User Names and Passwords cannot
contain spaces. Both the User Name and Password are saved as a part of the respondent data file.
Max Respondents
Max Respondents must be an integer from 1 to 99,999, and controls the maximum number of completed
interviews (either qualified or disqualified) to permit per User Name/Password combination. This is also
called the “quota” within SSI Web.
Case Sensitive
If you check this box, respondents must enter User Names and Passwords exactly as you specified them.
For example, if you specify the Password SawTooth, the password sawtooth is not valid. If you choose
Case Sensitive and are requiring respondents to type the passwords/user names, inform your
respondents in any correspondence and on the introductory password page of the web survey.
Specifying Password Rows
The first time you establish passwords for a project, only one row of the password grid is displayed (you
can automatically generate up to 8,192 rows for a study). You navigate the grid by clicking with the
mouse, by using the arrow keys, or the ENTER key on the keyboard. You can also import passwords
from an ASCII file. This option allows you to exceed 8,192 passwords.
Each row represents a unique password combination, permitting from 1 to 99,999 respondents. When
you open the Passwords Module, the first cell of the grid is active and awaiting an input. You can specify
a User Name by typing any combination of up to 12 letters or numbers, such as AGENT007. Once
you’ve typed a User Name, you can move to the next cell (the Password cell) on the same row by
pressing the Tab key, pressing the right arrow, or by clicking on the next cell with the mouse. If you do
not want to assign a User Name or a Password, uncheck the User Names and Passwords boxes.
If a User Name or Password has been specified in any row of the grid, a corresponding User Name or
Password must be supplied for all rows. User Names or Passwords can be duplicated across rows, as
long as the resulting User Name/Password combinations are unique.
Adding New Rows
The ENTER key advances the cursor focus from one cell to the next, moving from left to right. Pressing
ENTER at the end of the last row in the grid inserts a new blank row. You can also create a new row by
clicking the Add Row button.
Cutting and Pasting Information from a Spreadsheet
You can copy and paste information into the password grid, or from the password grid into another
Windows or spreadsheet application. To copy data from SSI Web, use Ctrl+C. To paste
information into SSI Web, use Ctrl+V.
Deleting Rows
To delete a single row, first click any cell in the row. Click Delete Row.
You can also delete a block of adjacent rows at the same time. To delete a block of rows, click the first
row to delete. Then, while holding down the SHIFT key, click anywhere within the last row in the block to
delete (or use the arrow keys to highlight rows). Click Delete Row.
Saving Your Work
Save Grid
Once you have specified your passwords grid, choose Save Grid to save the specifications to the
passwords files (STUDYNAMEpidx.cgi and STUDYNAMEp00X.cgi, where the “X” in the
STUDYNAMEp00X prefix can represent additional files containing passwords, such as “...p001.cgi",
“...p002.cgi”, “...p003.cgi”, etc., depending on the Password File Size you have set under the File |
Settings | Advanced dialog).
Note: make sure to upload a new STUDYNAMElogn.htm page to your server if you change the required
information for logging in (username + password, password only, none required, etc.) as such changes
are reflected in the STUDYNAMElogn.htm file rather than just in the STUDYNAMEqst.cgi.
Close
Close lets you exit the Passwords Module. You are prompted for any unsaved changes.
Sort
The contents of the passwords grid are automatically sorted alphabetically whenever you save your work.
You can sort the cells before saving your work by choosing Sort. The Internet Password Module sorts by
User Name within Password. Stated another way, the Password field is the primary field, and User Name
is the secondary field in order of sorting priority.
Printing the Passwords Grid
You can send your passwords grid to a printer, or the contents of the grid to a text file. When you choose
Print, a print dialog is displayed.
You can specify a destination file by checking Print to File and specifying the file name in the Filename
field. If you do not check Print to File, the output is sent to your default printer.
Changing Passwords after the Study Is Fielding
After you have begun fielding a study on a server, you may find it necessary to change the list of
username/passwords and/or the maximum number of respondents per User Name/Password
combination. You can modify the information in this dialog, and upload new password files
(STUDYNAMEpidx.cgi and STUDYNAMEpxxx.cgi) to the \admin directory on your server. This can be
done at any point during data collection—even if respondents are currently completing surveys.
(Note: If you change the maximum number of respondents per User Name/Password combination after
the study is fielding, whether from more than one down to one or from one up to more than one, you
may end up with one more completed record per User Name/Password than you specified. This only
occurs if a respondent started a survey, using an affected User Name/Password, prior to your change.)
i.4.2 Generating a Series of Passwords Automatically
The Passwords Module can generate up to 8,192 rows automatically. This is a convenient feature for
assigning a large number of password combinations. (There are ways to generate and use 8,192 or
more passwords.)
Generate Rows
Click the Generate Rows button to automatically generate passwords. The Generate Rows dialog is
displayed.
In this dialog you provide starting (seed) User Names or Passwords that the Passwords Module
increments to generate additional rows. You can specify a respondent number as the User Name instead
of an alpha-numeric string.
User Name
Starting (seed) User Name to be incremented. Can be a numeric or alpha-numeric field of up to 12
characters in length. If respondents are to type their user names, we suggest using numeric values only,
as it is often difficult for respondents to tell the difference between, for example, the number “1" and small
“L”, or “VV” instead of “W”.
Increment
The size of the increment that is added to each subsequent character string.
Password
Starting (seed) password to be incremented. Can be a numeric or alpha-numeric (both letters and
numbers) field of up to 8 characters in length. If respondents are to type their passwords, we suggest
using numeric values only, as it is often difficult for respondents to tell the difference between, for
example, the number “1" and small “L”, or “VV” instead of “W”.
Maximum Number of Respondents
Maximum number of respondents to permit per password row. Must be an integer from 1 to 99,999.
(Note: a “1" permits respondents to restart.)
Number of Rows
Number of password rows to generate. Must be an integer from 1 to 8,192 (or the number specified in
the Advanced Settings dialog).
Seeding the Automatic Password Generator
If you specify a numeric seed string for either the User Name or Password, the generator creates only
additional numeric strings. If you specify an alpha-numeric string for either field, such as A123B4, the
generator creates additional alpha-numeric strings. If you do not want to increment either the password
or user name, choose 0 for the increment value.
For example, if you specify 1000 as the User Name (leave Password blank), choose an Increment of 50,
specify Maximum Number of Respondents equal to 10, and Number of Rows of 5, a five-row password
grid is generated automatically, where the respondent numbers have been incremented by a counting
unit of 50 (base 10).
If instead of specifying 1000 as the User Name, you specify AAAA (and you use the previous other
settings), a five-row password grid is generated automatically, where the alpha-numeric string is
incremented by a counting unit of 50 (base 36). This will generate number + letter combinations such as
A3B7.
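To make the counting rule concrete, here is a minimal JavaScript sketch of base-36 incrementing. This is only our illustration (the function name is invented, and the sketch suits short seeds), not the Passwords Module's actual code:

    // Treat a seed such as "AAAA" as a base-36 number (digits 0-9, letters
    // A-Z), add the increment, and convert back. Illustrative sketch only.
    function incrementBase36(seed, step) {
        var value = parseInt(seed, 36) + step;     // decode, then add the step
        var result = value.toString(36).toUpperCase();
        while (result.length < seed.length) {      // pad back to the seed width
            result = "0" + result;
        }
        return result;
    }

    // Five rows seeded with "AAAA" and an increment of 50:
    var name = "AAAA";
    for (var row = 1; row <= 5; row++) {
        // use or print "name" here, then advance to the next user name
        name = incrementBase36(name, 50);
    }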
Note: Be aware that alphanumeric strings can be confusing for respondents. For example, for the
password “A7O1", is the “O” the letter “O” or the number zero? Depending on the font you use to print
the passwords and the level of experience of your respondents, it can be very confusing. To avoid the
issue altogether, you may decide to use only strictly numeric user names and passwords if respondents
will be asked to retype them.
i.4.3 Copying/Pasting Passwords to/from Other Applications
You can copy and paste information within the Passwords Module, or to and from other Windows
applications. For example, you can generate a password grid in a Windows spreadsheet program and
paste it into the Passwords Module grid. Or, you can copy and paste the contents of the Passwords
Module grid into a Windows spreadsheet or word processing package.
Copying information from the Passwords Module into another Windows application
1. Copy the contents of selected cells into the clipboard:
a) Select an area by clicking and dragging the mouse over the desired cells.
b) Press Ctrl+C to copy the contents into the clipboard.
2. Switch windows to the application into which you want to copy the information.
3. Position the cursor where you want the information to begin, and press Ctrl+V (or choose Edit
| Paste if available).
Copying information from a Windows spreadsheet application into the Passwords Module
1. Select the area you want to copy with the mouse.
2. Press Ctrl+C (or choose Edit | Copy in your application) to copy the data to the clipboard.
3. Switch windows to the Passwords Module.
4. Position the cursor where you want the information to begin, and press Ctrl+V.
If you attempt to cut and paste invalid information into the passwords grid, the operation is aborted. For
example, you cannot paste blank rows or cell contents that exceed the character limitations defined in the
Passwords Module. If you try to paste more than three columns of information into the three-column
passwords grid layout, only the first three columns of information are used. You can overwrite information
in the passwords grid during a paste operation (but not without a warning).
i.4.4 Generating/Using More than 2,048 Passwords
If you plan to use a large number of passwords, there are special considerations. The Passwords Module
lets you automatically generate and store up to 8,192 passwords. So many passwords, however, can
slow down respondents’ access to your survey over the web. To complicate matters
further, there may be situations in which you need to generate and use 10,000 or more passwords.
Password Verification: Potential Bottleneck
If you invite many people to take the interview at the same time, it can lead to performance (bandwidth)
problems. If all passwords are in a single file, each respondent submitting a password must be verified
separately by opening up that password file, scanning the file for a password match, updating a counter
for that password to reflect the fact that a new interview has been started, and closing the file. Only one
respondent can be processed at a time, so this creates a potential bottleneck. We have taken some
steps to speed up the process of verifying passwords and giving respondents access to the survey.
First, we have organized the passwords within the password table file(s) in a “binary tree.” A binary tree
lets the computer search and find a particular text string among many text strings much more quickly than
scanning the entire file from top to bottom.
Most small to medium-sized studies involve two files: STUDYNAMEpidx.cgi and STUDYNAMEp001.cgi.
The “IDX” file is a very small file that provides an index of what ranges of passwords are stored in each
password table file (named ...p001.cgi, ...p002.cgi, ...p003.cgi, etc.). The password table file(s) are
generally much larger than the IDX file. When respondents submit their passwords, the IDX file is
opened and referenced to see which password table file should be searched to verify the password. This
is a more efficient way to handle many respondents trying to access the survey at the same time, since
the “bottleneck” through which all respondents must pass is a very small file that can be quickly opened
and closed and read by multiple processes simultaneously. The heavier processing load of verifying
each respondent and storing the fact that interviews have been started can be divided among multiple
passwords table files (that may be opened concurrently). Thus, the server can be processing/verifying
more than one password at the same time, resulting in less average wait time per respondent.
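The two-stage lookup can be pictured with a small sketch. The JavaScript below is only a conceptual model, with invented names and data; the real files are binary-tree-organized .cgi files read by the Perl scripts:

    // Stage 1: the tiny IDX file maps password ranges to table files.
    var index = [
        { first: "A0001", last: "M9999", file: "p001" },
        { first: "N0001", last: "Z9999", file: "p002" }
    ];

    function findTableFile(password) {
        for (var i = 0; i < index.length; i++) {
            if (password >= index[i].first && password <= index[i].last) {
                return index[i].file;  // only this table file must be searched
            }
        }
        return null;                   // password was never issued
    }

    // Stage 2: a binary search within one sorted table file; only this file
    // is tied up while the quota counter for the password is updated.
    function findInTable(sortedPasswords, password) {
        var lo = 0, hi = sortedPasswords.length - 1;
        while (lo <= hi) {
            var mid = Math.floor((lo + hi) / 2);
            if (sortedPasswords[mid] === password) return mid;
            if (sortedPasswords[mid] < password) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;                     // no match in this table
    }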
Splitting Passwords into Multiple Password Files
If you are using more than 2,048 passwords, we suggest that you take the steps described below to
divide the passwords into multiple “password table” files (named ...p001.cgi, ...p002.cgi, ...p003.cgi, etc.).
If your passwords are already in the Password grid of the Password Module, you can split the passwords
into multiple files by setting the Number of Password Rows per Password File within the File | Settings |
Advanced dialog.
To generate more than 8,192 passwords and split them into multiple files, you can:
1) Generate your passwords using the Password Module. You can generate up to 8,192 at a
time. (It is not necessary to use our Password Module to generate passwords. You can use your
own software such as Excel(TM), as long as the resulting passwords and ASCII file conform to
the restrictions and formats.)
2) From the Passwords Module Dialog, save the passwords to an ASCII text file, by clicking the
Print button, checking the Print To File box, and providing a file name. The passwords are saved
in ASCII format (space delimited). (When generating more than 8,192 passwords, you’ll need to
generate separate ASCII files at first, and then combine them into a single file using a text editor
or a DOS copy command.)
3) Choose Compose | Import Passwords from Text File. You are asked whether both User
Names and Passwords are available. You can choose to apply a constant maximum number of
respondents to each password combination; if you do not, the number you provide in the ASCII
file is used. The multiple you have specified under the File | Settings | Advanced dialog
determines how many passwords are placed in each password file. The available multiples are
1024, 2048, 4096 and 8192. (Files with these numbers of rows, organized as binary trees, can be
traversed efficiently. The total number of records need not be exactly divisible by these multiples:
any remainder rows are placed in the final password file.)
4) Click Import File, and your password index file (STUDYNAMEpidx.cgi) and supporting
password tables file(s) (...p001.cgi, ...p002.cgi, ...p003.cgi, etc.) are generated. This process may
take many minutes to finish, as the records need to be sorted into a binary tree within each
password table file.
You can achieve maximum efficiency by placing 1024 passwords in each password table file. For
example, if you issue 102,400 total passwords, there would be 100 password table files. Of course, the
more password tables files there are, the more work is involved for the user in uploading those files to the
server.
i.4.5 Import Passwords
This dialog (Compose | Passwords | Import Passwords from Text File...) lets you import a series of
passwords from an ASCII text file. This also can be useful if you have generated more than 8,192
passwords.
You are asked whether both User Names and Passwords are available. You can choose to apply a
constant maximum number of respondents to each password combination; if you do not, the number you
provide in the ASCII file is used. The multiple you have specified under the File | Settings | Advanced
dialog determines how many passwords are placed in each password file. The available multiples are
1024, 2048, 4096 and 8192. (Files with these numbers of rows, organized as binary trees, can be
traversed efficiently. The total number of records need not be exactly divisible by these multiples: any
remainder rows are placed in the final password file.)
Text (ASCII) File Format
The following example shows what a valid text file may look like when viewed with a text editor:
JOHN SY7E4 1
MARY UT6G9 1
BETTY ZE2M3 1
In the above example, both User Name and Password are provided, followed by the maximum number of
respondents to permit per password (1). A hard return separates each row in the ASCII file.
If only a User Name is required (checked on this dialog), the file could look like:
JOHN
MARY
BETTY
Only this field would be required. A hard return separates each row in the ASCII file.
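If you build the import file with your own software, the only requirement is one space-delimited row per password combination, ended by a hard return. A minimal JavaScript sketch of producing such text (our own illustration; any tool, such as Excel, that writes this format will do):

    // Build the space-delimited import text: USER PASSWORD MAX, one row per line.
    function buildImportFile(rows) {
        var lines = [];
        for (var i = 0; i < rows.length; i++) {
            lines.push(rows[i].user + " " + rows[i].password + " " + rows[i].max);
        }
        return lines.join("\n") + "\n";  // a hard return separates each row
    }

    // Reproduces the example shown above:
    var text = buildImportFile([
        { user: "JOHN",  password: "SY7E4", max: 1 },
        { user: "MARY",  password: "UT6G9", max: 1 },
        { user: "BETTY", password: "ZE2M3", max: 1 }
    ]);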
i.4.6 A Selection of Prime Numbers
Incrementing User Name or Password strings by large prime numbers is thought to provide greater
security. It makes it less likely that a respondent can guess the next valid User Name/Password. Below
is a sample of prime numbers you can specify as increment values in the automatic password generator.
101    2819   6857   9391
139    3313   7129   9403
373    3617   7369   9419
787    4079   7703   9431
953    4691   8011   9533
1231   5333   8623   9601
1583   5807   8807   9739
2083   6151   9173   9973
i.5.1 Write Questionnaire
This dialog provides a list of the current questions in your study, and many buttons that lead to other
useful functions. When you highlight a question on the list, its Name, Type and Page # are displayed.
Re-ordering Questions
You can re-order questions within the list by highlighting them and clicking the Up Arrow and Down Arrow
icons directly at the right of the questions list.
Cutting/Copying Questions within and across Studies
You can cut or copy questions within the same study, or to another SSI Web study. To copy questions,
highlight a single or multiple questions and click Edit | Copy Questions (this is the Edit menu at the top
of the Write Questionnaire dialog, not the Edit… button). The question(s) are stored in the buffer.
Then, place your cursor within the current question list or in the question list for another SSI Web study
(to copy to another SSI Web study, you must close the current study). Click Edit | Paste Questions and
copies of those question(s) are pasted into the target questionnaire. The questions are automatically
renamed to resolve duplicate labels, by incrementing a number at the end of the question name.
Add
Adds a new question directly following the question that is currently highlighted in the List of Questions.
Edit
Edits (lets you modify) the question that is currently highlighted in the List of Questions. You can also
double-click a question within the list to edit.
Delete
Deletes the question that is currently highlighted in the List of Questions.
Layout
Leads you to a dialog in which you can specify a number of aspects to control the layout of your
questionnaire, including page breaks, skip patterns, and question randomization.
Global Settings
Leads you to a dialog in which you can specify a number of global settings that control the look and
functionality of your survey, including fonts, colors, and global headers and footers.
Preview Survey
This option lets you preview how your questionnaire looks when it is posted on your web server. When
you click this button, the page(s) of your survey are saved within the \temp subdirectory off of your study
directory as TempWeb1.htm through TempWebN.htm, where N is the number of pages in your study. If
Internet Explorer is installed on your system, it opens TempWeb1.htm automatically. When you click on
the “Submit” button at the bottom of each page, it links to the next page in sequence.
This is a convenient way to show your client or others within your organization how the questionnaire
looks when it is posted to your server. Please note, however, that because the questionnaire is not
running “live” on a server with communication to the Perl engines, skip patterns, adaptive conjoint
customization, and data collection do not occur.
You can email the TempWeb?.htm files to your colleagues, or post them to an Internet/Intranet server.
Instruct your clients to open TempWeb1.htm first, and click the button at the bottom of each page to move
through the survey. Again, remember that this is for viewing purposes only, and does not execute skip
patterns or save any actual data.
The TempWeb?.htm files are only updated/modified when you click the Preview Survey button.
Changes you make to your questionnaire are not reflected in the TempWeb?.htm files until you click the
Preview Survey button again.
Save
Saves the current changes to your study.
Close
Closes this dialog.
i.5.2 Add Question
This dialog lets you specify the question type to add to your survey.
Question Name
Question names can include up to 12 letters or numbers. They cannot begin with a number, and cannot
include spaces or underscores.
Available question types are:
ACA-Conjoint
If you have the ACA/Web software, you can add ACA question types. These are added as a
single section. The control parameters and existing attribute list you provided determine the
number of questions and ACA question types that are added.
CBC-Conjoint
If you have the CBC/Web software, you can add CBC question types. These are added as a
single section. The control parameters and existing attribute list you provided determine the
number of CBC questions that are added.
Select
Lets you add a select-type question. Examples include radio button (select one from a list), drop-down combo boxes, and select all that apply (check-box).
Numeric
Adds a numeric question. Numeric questions are text boxes in which respondents type numeric
values.
Open-end
Adds a text box question, in which respondents provide a written answer to an open-ended
question. Responses can be short (a few words), or long (many sentences).
Text/HTML Filler
Adds a section of text or HTML instructions that do not require any response.
Grid
Lets you compose Select, Numeric, and Open-end questions in a grid format (usually multiple
rows and/or columns).
Constant Sum
Adds a question in which respondents provide a series of numeric values that sum to a certain
value, or sum to the value provided in a previous question.
Ranking
Adds a question in which respondents can rank multiple items, from “1" meaning best to “n”
meaning worst.
Free Format
Lets you compose the HTML to define most every aspect of a question or series of questions.
This is a powerful area for users who know HTML. One can compose grid-type questions, or
special layouts not supported by the standard templates for CiW question types.
JavaScript
Lets you add custom JavaScript. This is usually used to provide custom verification on the same
page as Free Format questions.
Copy from Question of Similar Type
After selecting the type of question to be added, you can specify that you want to make a copy of an
existing question and use that as a starting point. The previous question’s text, colors, response settings
and layout are copied to the new question.
You can also create copies of questions by highlighting a question or questions on the List of Questions,
clicking Edit | Copy Questions, placing your cursor in the list of questions of the target survey (either the
current or a different SSI Web survey), and clicking Edit | Paste Questions.
i.5.3 Rename Question
You must provide a valid name for the question. Question names can include up to 12 letters or
numbers. They cannot begin with a number, and cannot include spaces or underscores.
Note: When you rename a question, Skip logic or Randomization sequences that reference the old
question name are NOT modified, and thus may still refer to the old question name. You will need to
review these areas and update the question name references in such cases.
i.5.4 Reordering Questions
When you are in the Write Questionnaire screen, highlight the question to be moved, and then click the
Up and Down arrow icons directly at the right of the list of questions. The question label is moved up or
down within the list.
You can also highlight a single or multiple questions and cut or copy-and-paste the questions to a new
position in the questionnaire, or to a different SSI Web questionnaire. Highlight the question(s) to be
moved or copied, click Edit | Cut Questions or Copy Questions. Move the cursor to a new place in the
list (or within a new questionnaire). Click Edit | Paste Questions.
i.5.5 Composing the Password Screen
Internet surveys usually include a password screen, which is always the first page of the survey (an
exception is when you provide “one-click” access to the survey, passing the necessary login information
with a hyperlink directly to ciwweb.pl). If you are letting respondents access the survey without a user
name or password, the password screen serves only as a welcome/instructional page.
The password screen is always first in the List of Questions on the Write Questionnaire screen.
You use the Password Question dialog to compose a password screen. You access this screen by
highlighting the Password Question on the list of questions, and clicking Edit. Changes to the layout of
the Password Question are reflected in the STUDYNAMElogn.htm file. Therefore, if you make any
changes to the Password Question, you should take care to upload the new STUDYNAMElogn file to the
web server.
Password Question Layout
The password question has five main sections:
1. Header 1
2. Header 2
3. User Name
4. Password
5. Footer
The User Name and Password fields are indented according to the tab indent you specify in the Global
Settings dialog.
The Password Question dialog has five separate rows (each corresponding to the five sections of the
question), followed by three buttons: Preview, OK and Cancel.
Preview
Lets you preview how the Internet browser displays the current question.
OK
Saves changes to the current question and returns to the Write Questionnaire
screen.
Cancel
Returns to the Write Questionnaire screen without saving changes to the
current question.
Format controls (Color, Size, Bold, Italic, Underline, Justification) and Text fields are provided for
formatting question text.
Color
Click the drop-down box to use the user-defined default color, or choose from a list of preset
colors. You can specify customized colors by typing the 6-character hexadecimal “red-green-blue”
value in the form RRGGBB directly into the color field. For example, “00ff00” sets the color
to green. The text color you choose should harmonize with the background color or background
graphic you specify under the Global Settings dialog.
Size
Controls the size of the text, from 1 (very small) to 7 (very large).
Bold
Check this box to bold text.
Italic
Check this box to italicize text.
Underline
Check this box to underline text.
Justification
Choose from Left, Center, or Right justification.
Editing Text Fields
You specify header, user name, password and footer text by typing directly into the Text fields at
the right-hand side of the dialog or by clicking the “pencil” icon to edit the text within a larger
window. You can include HTML instructions in any of the Text fields to enhance your surveys
with additional styles or graphics. The toolbar within the text editor offers a number of buttons to
automatically add HTML instructions within your survey.
Example: Composing a Passwords Screen
1. While at the Write Questionnaire screen, highlight the <Password> question in the List of
Questions by clicking it with the mouse.
2. Click Edit.
(Skip Header 1)
3. Header 2:
Color: (Default)
Size: 5
Bold: On
Justify: Left (default)
Text: Welcome to our survey. Please type your User Name and Password in the spaces below.

4. User Name:
Color: (Default)
Size: 3 (default)
Justify: Left (default)
Text: User Name:

5. Password:
Color: (Default)
Size: 3 (default)
Justify: Left (default)
Text: Password:

6. Footer:
Color: (Default)
Size: 2
Justify: Left (default)
Text: (Click the Submit button below when finished)
7. Click Preview:
Notes: You can include other questions on the password (login) screen. However, if a respondent
restarts the survey, the respondent must completely fill out the login screen (including the additional
variables) prior to continuing. If the respondent enters different values on the password screen for these
additional questions than he/she answered before, the respondent cannot jump forward in the survey to
where he/she left off. The respondent must complete the entire survey again.
i.5.6 Composing Select Questions
Select questions are very common in market research surveys. Three general types of Select questions
are offered:
1. Radio Button (single response)
2. Check Box (multiple response)
3. Combo Box (single response)
To add a Select question to your survey, click the Add button from the Write Questionnaire screen. The
new question is inserted directly after any highlighted question in the List of Questions. If no question is
highlighted, the new question is inserted at the end of the list.
The Add Question screen is displayed. Choose the Select question type. Type a question name in the
Question Name field. Question names can include up to 12 letters or numbers (no spaces or
underscores permitted, and cannot start with numbers). Click Next. The Select Question screen is
displayed.
Select Question Screen
You choose the type of Select question by selecting from a Radio Button (single response), Check Box
(multiple response) or Combo Box (single response). If you choose the combo box, you can also specify
whether the default response is rejected (more on this below).
Next, you specify response options. Click Response Options, and the Specify Response Options
screen is displayed.
Adding Response Options
1. From the Specify Response Options screen, click the Add button.
2. Type the response option in the space provided.
3. Click OK when finished, and the response option is added to the list.
Tip: If you are adding many response options at the same time, you can save time by pressing ENTER
twice after typing each response option. Pressing ENTER twice from the Response Option dialog is
equivalent to clicking OK and then clicking Add.
You can specify up to 99 response options. A numeric index is displayed at the left of your response
options for reference, but is not displayed as part of the question.
Click OK to save your changes and to return to the Select Question screen.
Randomizing Response Options
You can randomize all of the options, or a contiguous subset of the options. Information regarding the
specific order shown to each respondent is not saved in the data.
Re-ordering, Copying and Pasting Response Options
The “up” and “down” arrow icons can be used for moving the selected response option up or down in the
list.
You can also highlight a single or multiple items and cut or copy-and-paste the items to a new position on
this list, or to a response list for another select-type, ranking, or constant-sum question. Highlight the
item(s) to be moved or copied, click Edit | Cut Response Options or Copy Response Options. Move
the cursor to a new place on the list (or to a response list for a different question). Click Edit | Paste
Response Options.
Formatting Response Options
When you return to the Select Question screen, the number of response options you specified is
displayed in the Number of Response Options field (which you cannot edit). Response options can be
formatted in a single column, or over multiple columns by specifying a value greater than 1 in the Number
of Columns field. For example, if you have 11 response options (say, colors) and specify four columns,
the response options are distributed as follows:
Which of these colors is your favorite?
1) Red      4) Blue      7) Violet    10) Magenta
2) Purple   5) Yellow    8) Brown     11) Gray
3) Pink     6) Green     9) Orange
Requiring Response
The Require Response check box lets you specify whether you require respondents to make at least one
selection within the current question before continuing the survey.
Select Question Layout
The Select question has four main sections:
1. Header 1
2. Header 2
3. Question
4. Footer
The Question text line appears directly above the response options. The Question text line is indented
according to the tab indent you specify in the Global Settings dialog. The response options are indented
a bit further relative to the Question text line. The styles you apply (color, bold, size, etc.) in the Question
row also apply to the response options.
The Select Question screen has four separate rows (each corresponding to the four sections of the
question), followed by five buttons: Rename, Advanced..., Preview, OK and Cancel.
Rename
Lets you rename the current question.
Advanced...
This button lets you specify additional error text to use for this question.
Preview
Lets you preview how the Internet browser displays the current question.
OK
Saves changes to the current question and returns to the Write Questionnaire screen.
Cancel
Returns to the Write Questionnaire screen without saving changes to the current
question.
Format controls (Color, Size, Bold, Italic, Underline, and Justification) are available for formatting text.
Example: Composing a Radio Button (Single Response) Question
1. Click Add from the Write Questionnaire screen.
2. Select the Select question type.
3. Type a valid name in the Question Name field, such as Education, then click OK.
4. In the Select Format section, click Radio Button (single response).
5. Click Response Options. Add the following six response categories by clicking Add, and pressing
ENTER twice after typing each response option:
Some high school
Completed high school
Some college
Completed college
Some graduate studies
Completed advanced degree
Click OK in the Specify Response Options dialog.
6. Header 1:
Color: (Default)
Size: 5
Bold: On
Justify: Left (default)
Text: Please indicate the highest degree of education you attained:

(Skip Header 2)

7. Question:
Color: (Default)
Size: 4
Italics: On
Justify: Left (default)
(Skip Footer)
8. Click Preview:
Typing a 2 into the Number of Columns field of the Select Question dialog spreads the response options
across two columns.
Tip: Try adding optional HTML instructions in Header 1 to italicize "highest" and add a horizontal line
beneath the question:
Header 1 Text: Please check the <I>highest</I> degree of education you attained:
Footer Text:
<HR>
Notes: You can italicize the word “highest” by using the toolbar accessible when you click the “pencil” edit
icon. Highlight the word “highest” with the mouse, and then click the italics button (looks like an italic I).
You can also add a horizontal line by editing the Footer text (again using the “pencil” edit icon) and
clicking the “horizontal rule” button (looks like a dash “—”).
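For readers who know HTML, the finished question is an ordinary form element underneath. The hand-written sketch below shows the general shape of such a radio-button question; it is our illustration only (SSI Web generates its own markup and field names):

    <FONT size="5"><B>Please indicate the <I>highest</I> degree of education
    you attained:</B></FONT><BR>
    <INPUT type="radio" name="Education" value="1"> Some high school<BR>
    <INPUT type="radio" name="Education" value="2"> Completed high school<BR>
    <INPUT type="radio" name="Education" value="3"> Some college<BR>
    <INPUT type="radio" name="Education" value="4"> Completed college<BR>
    <INPUT type="radio" name="Education" value="5"> Some graduate studies<BR>
    <INPUT type="radio" name="Education" value="6"> Completed advanced degree<BR>
    <HR>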
Example: Composing a Check Box (Multiple Response) Question
1. Click Add from the Write Questionnaire screen.
2. Select the Select question type.
3. Type a valid name in the Question Name field, such as Music, then click OK.
4. In the Select Format section, click Check Box (multiple response).
5. Click Response Options. Add the following seven response categories by clicking Add, and pressing
ENTER twice after typing each response option:
Country
Classical
New Age
Jazz
Soft Rock
Hard Rock
Klingon Opera
Click OK in the Specify Response Options dialog.
6. Header 1:
Color: (Default)
Size: 5
Bold: On
Italics: On
Justify: Left (default)
Text: Which types of music do you enjoy listening to?

7. Header 2:
Color: (Default)
Size: 3 (default)
Bold: On
Justify: Center
Text: (Select all that apply)

8. Question:
Color: (Default)
Size: 3 (default)
Justify: Left (default)
(Skip Footer)
9. Click Preview:
Tip: You can create an “Other Specify” response field below a Select question by specifying a separate
open-end question (with no headers) directly beneath the Select question.
Note: When using the multiple-select check box, you can require respondents to check a minimum and
maximum number of items.
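If you write your own pages (for example, Free Format questions), a same-page check along these lines can be added by hand with JavaScript. This is only a sketch with invented field names; for standard check-box questions, SSI Web performs this verification for you:

    // Require between 1 and 3 boxes checked before the page may be submitted.
    // Field names such as "Music_1" are invented for this illustration.
    function countChecked(form, baseName, numOptions) {
        var checked = 0;
        for (var i = 1; i <= numOptions; i++) {
            if (form.elements[baseName + "_" + i].checked) checked++;
        }
        return checked;
    }

    function verifyMusic(form) {
        var n = countChecked(form, "Music", 7);
        if (n < 1 || n > 3) {
            alert("Please check between 1 and 3 types of music.");
            return false;   // block the submit
        }
        return true;
    }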
Example: Composing a Combo Box (Single Response) Question
1. Click Add from the Write Questionnaire screen.
2. Select the Select question type.
3. Type a valid name in the Question Name field, such as Party, then click OK.
4. In the Select Format section, click Combo Box (single response).
5. Click Response Options. Add the following response categories by clicking Add, and pressing
ENTER twice after typing each response option:
(No Response)
Democrat
Independent
Republican
None of the above
Click OK in the Specify Response Options dialog.
When the combo box is displayed, response item #1 is always shown and selected by default.
Respondents choose a category by clicking the arrow to reveal the drop-down list of response options. If
you check Require Response from the Select Question screen, respondents must select a category
other than the default option (always the first item) before proceeding to the next page in the survey.
6. Header 1:
Color: (Default)
Size: 5
Bold: On
Justify: Left (default)
Text: Which political party do you most closely align with?

(Skip Header 2)

7. Question:
Color: (Default)
Size: 3 (default)
Italics: On
Justify: Left (default)
Text: (Click the down arrow and choose an option)
(Skip Footer)
8. Click Preview:
Tip: You can use combo boxes for collecting numeric responses. Simply type numbers (up to 99
separate values) for the response categories. Combo boxes can make sure that answers conform to
certain values when respondents answer the question, rather than requiring the verification that occurs for
the numeric question type when respondents click the Submit button.
For example, a question might ask respondents how many people are in their family. The seven
response options (not including the default option) might be: 1, 2, 3, 4, 5, 6, 7 or more.
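A hand-written HTML sketch of such a combo box (our illustration only; SSI Web builds the real markup for you):

    <SELECT name="FamilySize">
      <OPTION value="0" selected>(No Response)</OPTION>
      <OPTION value="1">1</OPTION>
      <OPTION value="2">2</OPTION>
      <OPTION value="3">3</OPTION>
      <OPTION value="4">4</OPTION>
      <OPTION value="5">5</OPTION>
      <OPTION value="6">6</OPTION>
      <OPTION value="7">7 or more</OPTION>
    </SELECT>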
i.5.7 Error Text Field
This field lets you provide additional “error text” for this question. The “error text” is the additional text that
can be shown to the respondent if there is something invalid regarding this question. The text you
provide here is used within the %errortext system variable when the survey runs on a server.
For example, if the question is regarding the person’s marital status, you could specify the error text as
“(Marital Status)”. When using the default error text, the respondent might see a message such as:
You forgot to answer question #2 (Marital Status)! Please answer the question and click
the Next button again.
You supply error text and use system variables for “errors” under Compose | Write Questionnaire |
Global Settings... | Error Messages...
i.5.8 Composing Numeric Questions
Numeric questions let respondents type a numeric answer from -99999999 to 999999999 into an answer
field. First we’ll describe how to compose Numeric questions, and then we’ll provide a simple example for
you to try.
When you add a numeric question to your survey, the new question is inserted directly after any
highlighted question in the List of Questions on the Write Questionnaire screen. If no question is
highlighted, the new question is inserted at the end of the list.
The Numeric Question screen is used for formatting numeric questions.
You can control the range of values that can be entered by specifying a Minimum Value and a Maximum
Value. When you preview the question, the response field accepts as many characters as are used by
the Minimum or Maximum Value. The value constraints are not enforced until the survey runs on the
server.
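The server's range check amounts to a simple test. The JavaScript below only sketches the rule for illustration (the actual verification is performed by the Perl scripts when the page is submitted):

    // Sketch of the range rule, e.g. Minimum 18 and Maximum 99 for an AGE question.
    function checkNumeric(answer, minValue, maxValue, allowDecimals) {
        if (!allowDecimals && answer.indexOf(".") !== -1) return false;
        var value = parseFloat(answer);
        if (isNaN(value)) return false;              // not a number at all
        return value >= minValue && value <= maxValue;
    }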
There are three check boxes that control the numeric question:
Require Response
Indicates whether respondents are required to enter a valid response before continuing the
survey.
Allow Decimal Answers
Controls whether answers with a decimal point are accepted as valid answers. You can specify
whether commas or periods are to be used as decimal delimiters under the Global Settings
dialog. You can specify how many decimal places of precision may be used in the Number of
Decimal Places field. (Note: the Marginals program within the Online Data Management Module
truncates all answers with decimal points, treating them as integers, prior to computing
frequencies, min, max, and means.)
Input Box to Left of Question
Determines whether the text in the Question text area appears at the left or right of the numeric
entry box.
Numeric Question Layout
The numeric question has four main sections:
1. Header 1
2. Header 2
3. Question
4. Footer
The Question text line with its answer field is indented according to the tab indent you specify in the
Global Settings dialog.
The Numeric Question screen has four separate rows (each corresponding to the four sections of the
question), followed by five buttons: Rename, Advanced..., Preview, OK and Cancel.
Format controls (Color, Size, Bold, Italic, Underline, and Justification) are available for formatting text.
Advanced...
This button lets you specify additional error text to use for this question.
Example: Composing a Numeric Question
1. Click Add from the Write Questionnaire screen.
2. Select the Numeric question type.
3. Type a valid name in the Question Name field, such as AGE, then click OK.
4. Specify a Minimum Value of 18 and a Maximum Value of 99 by clicking and editing those fields.
5. Header 1:
Color: (Default)
Size: 5
Bold: On
Justify: Left (default)
Text: What is your age?

(Skip Header 2)

6. Question:
Color: (Default)
Size: 3 (default)
Justify: Left (default)
Text: Age:

7. Footer:
Color: (Default)
Size: 3
Justify: Left (default)
Text: (If you really don’t want to tell us your age, just type 99)
8. Click Preview:
Tip: Try adding optional HTML instructions in the Footer to bold the "99" and add a horizontal line
beneath the question:
Footer Text:
(If you really don’t want to tell us your age, just type <B>99</B>)<HR>
Notes: You can bold the “99” by using the toolbar accessible when you click the “pencil” edit icon.
Highlight the “99” with the mouse, and then click the bold button (looks like a bold B). You can also add
a horizontal line by editing the Footer text (again using the “pencil” edit icon) and clicking the “horizontal
rule” button (looks like a dash “—”).
i.5.9 Composing Open-end Questions
Open-end questions let respondents type any characters (including letters and numbers) into an answer
field. You can create either fixed entry fields (single line) or free entry fields (multiple lines) that give text
scrolling capabilities when typing long text strings.
First we’ll describe how to compose Open-end questions, and then we’ll provide a simple example for you
to try.
When you add an open-end question to your survey, the new question is inserted directly after any
highlighted question in the List of Questions of the Write Questionnaire screen. If no question is
highlighted, the new question is inserted at the end of the list.
The Open-end Question screen is used for constructing open-end questions.
You can choose whether the response is Fixed Entry or Free Entry.
Fixed Entry:
A single-row answer field is displayed. Respondents cannot type more characters than
the specified limit (between 1 and 256).
Free Entry:
The answer field can be formatted across a single or multiple rows. Scroll bars are
provided and respondents can type up to the maximum number of characters that you
specify (up to 10,000). (In any case, responses are trimmed to a maximum of 10,000
characters to maintain data file integrity.)
You also can specify a minimum number of characters that respondents must type. The number of typed
characters is checked when the interview runs on the server.
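In HTML terms, the two entry styles correspond to a single-line text box and a scrolling text area. A hand-written sketch (our illustration; SSI Web generates its own markup):

    <!-- Fixed Entry: one line; typing stops at the character limit. -->
    <INPUT type="text" name="FirstName" size="15" maxlength="15">

    <!-- Free Entry: a scrolling area sized in rows and columns. -->
    <TEXTAREA name="VanStyle" rows="5" cols="90"></TEXTAREA>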
Open-end Question Layout
The open-end question has four main sections:
1. Header 1
2. Header 2
3. Question
4. Footer
The Question text line with its answer field is indented according to the tab indent you specify in the
Global Settings dialog.
The Open-end Question screen has four separate rows (each corresponding to the four sections of the
question), followed by five buttons: Rename, Advanced..., Preview, OK and Cancel.
Format controls (Color, Size, Bold, Italic, Underline, and Justification) are available for formatting text.
Advanced...
This button lets you specify additional error text to use for this question.
Example: Composing a Fixed Entry Open-end Question
1. Click Add from the Write Questionnaire screen.
2. Select the Open-end question type.
3. Type a valid name in the Question Name field, such as FirstName, then click OK. Question names
can include up to 12 letters or numbers (no spaces or underscores permitted, and cannot start with
numbers).
4. Check Fixed Entry, specify a Maximum Number of Characters of 15.
5. Header 1:
Color: (Default)
Size: 3 (default)
Bold: On
Justify: Left (default)
Text: What is your first name?

(Skip Header 2)

6. Question:
Color: (Default)
Size: 3 (default)
Justify: Left (default)
Text: Name:
(Skip Footer)
7. Click Preview:
Example: Composing a Free Entry Open-end Question
1. Click Add from the Write Questionnaire screen.
2. Select the Open-end question type.
3. Type a valid name in the Question Name field, such as VanStyle, then click Next.
4. Check Free Entry, specify 5 rows and 90 columns.
5. Header 1:
Color: (Default)
Size: 3 (default)
Bold: On
Justify: Left (default)
Text: What do you like or dislike about the styling of the Summit minivan?

6. Header 2:
Color: (Default)
Size: 2
Justify: Center
Text: (Type your response below)
(Skip Question)
(Skip Footer)
7. Click Preview:
i.5.10 Composing Text/HTML Filler Questions
Sometimes you may want to add transitional text that does not require any response. You can specify
text “questions” that stand alone on a survey page, or that separate other survey questions. Note that
Text/HTML filler questions don’t limit you to just text. You can include graphics, horizontal lines or tables
using HTML instructions.
Notes: Text/HTML Filler questions do not count toward the question limit specified by your software
license. No data are saved for text/html filler questions.
When you add a Text/HTML Filler question to your survey, the new question is inserted directly after any
highlighted question in the List of Questions on the Write Questionnaire screen. If no question is
highlighted, the new question is inserted at the end of the list.
Text/HTML Filler as Terminating Questions
Text/HTML Filler questions can also serve as “terminating questions.” To make the current Text/HTML
Filler question a terminating question, click the Terminating Question box and check whether the
termination indicates a Qualified/Complete or a Disqualified status. You can additionally assign a
disposition code for each terminating question (a value from 0 to 999) that is saved in the respondent’s
data set. This is useful for tabulating how respondents terminated the questionnaire, if there are multiple
termination points.
A terminating question is usually a text screen telling a respondent that the survey is over. Reaching a
terminating question prevents the respondent from backing up to change answers to previous questions.
Terminating Text/HTML Filler questions must appear on a survey page that requires no respondent input.
Most survey authors will include at least one, and sometimes multiple terminating Text/HTML Filler
questions in their surveys.
You can have more than one terminating question in your survey. Terminating questions can appear
nearly anywhere in your list of questions. Another common use of the terminating question is to terminate
a respondent who is not eligible to complete the survey, given his/her answers to previous questions.
Formatting Options
You use the Text Question screen to format text questions.
Text Question Layout
The text question has four main sections:
1. Header 1
2. Header 2
3. Body
4. Footer
The Body text is indented according to the tab indent you specify in the Global Settings dialog.
The Text Question screen has four separate rows (each corresponding to the four sections of the
question), followed by four buttons: Rename, Preview, OK and Cancel.
Format controls (Color, Size, Bold, Italic, Underline, and Justification) are available for formatting text.
i.5.11 Composing Grid Questions
(Introduction)
Grid questions can include any of the standard question types used in SSI Web surveys (numeric, radio,
check-box, combo-box, or open-end), but Grid questions give you much greater power to format them in
new ways and within a unified table.
The most common use of the grid question is to provide multiple rows and columns for labels and inputs,
such as an array of radio buttons. In a typical example, the main response area has 4 rows and 5
columns (not counting the rows and columns for labels), with four separate radio button variables, one
variable for each row. The “Question Direction” is by rows. Alternating colors by rows help respondents
orient themselves to the question direction.
Grid questions can sometimes be challenging for new users to author. When you add a Grid question to
your survey, an intuitive Grid Wizard is available to help you set up the essentials of the Grid. The key to
using the main Grid interface efficiently is to always keep in mind whether the question direction is by
rows (as in the previous example) or by columns.
Grid questions also can include numeric or text entry boxes, and rankings or constant-sums. They also
can include combinations of different question types within the same grid.
There are many options within the Grid area. We suggest you review the help text for each area, but
especially that you experiment with the various settings and click Preview to see the results.
As you learn to use Grid questions, you may find that the increased formatting options lead you to use
Grid questions to compose standard questions that don’t involve multiple rows or columns.
The Grid question area has four main tabs (General, Headers and Footers, Rows, and Columns). The
Rows and Columns tabs have near-identical functionality, depending on whether the Question Direction is
by rows or columns. Therefore, we describe their functionality in one common documentation area.
If the Question Direction is by Rows, then additional controls for adding new variables to the Grid are on
the Rows tab. If the Question Direction is by Columns, then you add and edit variables from the Columns
tab.
i.5.12 Grid Questions
(General Tab)
Question Direction
The question direction is a fundamental issue that determines the functionality of your grid and
the use of the authoring interface. The key to using the main Grid interface efficiently is to
always keep in mind whether the question direction is by rows or columns. Go to the Introduction
to Grid for a quick illustration of row-based and column-based question direction.
If the question is row-based, then the Rows tab is used for adding and editing questions within
the grid. If the question is column-based, then the Columns tab is used for adding and editing
questions.
General Question Format
Question Width (pixels)
This field specifies the total width of the grid question, in pixels. We suggest you consider the
minimum screen resolution that respondents may be using for their PCs. Unless you are certain
of the screen resolution used for all interviewing, generally we suggest not making your questions
wider than 800 pixels. A conservative choice would be no wider than 640 pixels.
Width of Labels Column (pixels)
The labels column is the first column at the very left of the grid question. It is used for the row
labels you supply under the Rows tab.
Question Border (pixels)
The question border determines how wide the dark border is around the outside of the grid
(“border=” in html). It also can have an effect on the grid lines. You can make the grid lines
disappear by setting the border=0.
Space Between Cells (pixels)
This determines the amount of space between the cells of the grid (“cellspacing=” in html). By
default we use no space. If you specify space, the space will inherit the color used in the
Background Color on this dialog. You can achieve a sharp black line between cells by specifying
the Question Border as zero, the Space Between Cells as 1 and a Background Color of black (see the
HTML sketch at the end of this section).
Space Between Border and Cell Contents (pixels)
This is the “cellpadding=” in html. It controls the amount of empty space between the characters
or graphics within cells and the cell borders.
Question Alignment
Controls the horizontal position of the grid question within the browser window.
Include Margins
Determines whether an additional border appears surrounding the grid question. The color of the
border is determined by the Background Color.
Background Color
Controls the color used in the margins (if Include Margins is checked) or the color used for the
spacing between cells (if the Space Between Cells is greater than zero).
Alternating Colors
Alternating Color Direction
Determines the direction for alternating colors, either by rows or columns. By default, the
Alternating Color Direction is the same direction as the Question Direction. Alternating colors
help respondents process information in grid questions more efficiently.
Alternating Colors #1 & #2
Specifies the alternating colors to use. The alternating colors are used by default in the question
and row/column labels. However, you can make the labels rows/columns use the Background
Color instead using a control on the Rows and Columns tabs.
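Because these settings map onto standard HTML table attributes, the hand-written sketch below may help readers who know HTML. The attribute values reproduce the “sharp black line” recipe described above; the sketch is our illustration only, not the markup SSI Web writes:

    <!-- border=0, cellspacing=1, black Background Color: sharp lines between cells -->
    <TABLE width="640" border="0" cellspacing="1" cellpadding="4" bgcolor="#000000">
      <TR bgcolor="#FFFFCC">   <!-- Alternating Color #1 -->
        <TD>Row label 1</TD><TD>cell</TD><TD>cell</TD>
      </TR>
      <TR bgcolor="#FFFFFF">   <!-- Alternating Color #2 -->
        <TD>Row label 2</TD><TD>cell</TD><TD>cell</TD>
      </TR>
    </TABLE>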
i.5.13 Grid Questions
(Headers and Footers Tab)
The Headers and Footers tab lets you place text above and below the grid question.
In addition to supplying text (or graphics) for all headers and footers, you can choose the background
colors and the border pixels for Header 1 and Footer 1.
Click the “pencil” icon to edit the text for any header or footer within a larger window. You can include
HTML instructions in this field to enhance your survey with additional styles or graphics. The toolbar
within the text editor offers a number of buttons to automatically add HTML instructions within your
survey.
i.5.14 Grid Questions
(Rows/Columns Tab)
The Rows and Columns tabs are nearly identical and are both documented within this same section. If
the Question Direction is rows, then the Rows tab controls the adding and editing of question variables. If
the Question Direction is columns, then the Columns tab controls adding and editing question variables.
See Introduction to Grid for an example of row-based vs. column-based question direction.
Row/Column Label and Question Settings
You use this area to add new questions or labels or modify those items within the grid. You can also
select multiple rows/columns to edit by holding down the Ctrl key and clicking multiple items. Once you
have selected multiple questions/labels, you can modify them all at once by clicking the Copy, Edit or
Delete buttons.
When you highlight each row or column label, if the label represents a question, you see summary
information regarding the question type in the area at the top right-hand side of the dialog.
Add
Lets you add labels and/or question items.
Copy
Lets you copy the label or question item to add another row or column to the grid.
Edit
Lets you edit the current label or question item. In the case of a question item, the Edit
Grid Row/Column Label and Question Settings dialog is displayed.
Delete
Lets you delete the current label or question item.
You can remove the Row or Column containing the labels by deleting all text for the row or column of
labels. If no text is supplied, SSI Web removes the associated row or column entirely from the grid.
Row/Column Label Format
This area lets you control the text color, styles, size, and justification for the rows/column labels.
You can also choose to use the Background Color (as specified on the General tab) rather than
the alternating color for the column or row label background.
Row/Column Label Position
Lets you specify that the row label is to appear at the left or right-hand side of the grid, or that the
column label is to appear at the top or bottom of the grid.
Randomize Rows/Columns
Sometimes you may wish to randomize the order of row or column labels to control for order bias.
This section lets you specify to randomize all, or a subset of the range of items, for rows or
columns.
i.5.15 Edit Grid Row/Column Label and Question Settings
If the label you are editing is a question variable in the grid, this dialog lets you supply the label and
question settings.
Row/Column Label
Type the label for the question in this field.
Click the “pencil” icon to edit the text within a larger window. You can include HTML instructions
in this field to enhance your survey with additional styles or graphics. The toolbar within the text
editor offers a number of buttons to automatically add HTML instructions within your survey.
You can remove the Row or Column containing the labels by deleting all text for the row or
column of labels. If no text is supplied, SSI Web removes the associated row or column entirely
from the grid.
Row Question Type
Select the question type for this row/column. A grid can include multiple question types. The
selections are:
Select (includes radio, check-box, and combo-box)
Numeric
Open-End (Fixed Entry or Free Entry)
Constant Sum (numeric entry boxes that typically sum to a particular value)
Ranking (for ranking items from best to worst using numeric entry or combo boxes)
Question Settings
Depending on the question type you select, there are different options for formatting/asking the
questions. The options are the same as available in the non-grid version of each question type
in SSI Web. We have provided a link for each question type below for you to refer to the relevant
question settings.
Select
Numeric
Open-End
Constant Sum
Ranking
Cell Alignment
Lets you control the position of the radio button, checkbox, combo box, or numeric/text entry box
within the cell. Specify horizontal and vertical justification.
Cell Input Labels
Lets you specify text to appear within the cells of the grid, to the immediate left or right of the
radio button, checkbox, combo box, or numeric/text entry box within the cell.
Click the “pencil” icon to edit the text within a larger window. You can include HTML instructions
in this field to enhance your survey with additional styles or graphics. The toolbar within the text
editor offers a number of buttons to automatically add HTML instructions within your survey.
Error Text
This field lets you provide additional “error text” associated with this particular row or column of
this grid question. The “error text” is the additional text that can be shown to the respondent if
there is something invalid regarding the grid question. The text you provide here is used within
the %errortext system variable when the survey runs on a server.
Grid questions often have many elements for respondents to complete correctly, so it is important
to supply Error Text that points them to the exact place on the grid that needs to be remedied.
For example, if the row or column question is regarding the person’s marital status, you could
specify the error text as “(Marital Status)”. When using the default error text, the respondent
might see a message such as:
You forgot to answer question #2 (Marital Status)! Please answer the question and click
the Next button again.
You supply error text and enable the use of system variables for error messages under Compose |
Write Questionnaire | Global Settings... | Error Messages...
i.5.16 Add Grid Row/Column Label
Row/Column Label
If the label you are adding is not a question variable in the grid, this dialog lets you supply the text (or
graphics) to display for that row or column.
Click the “pencil” icon to edit the text within a larger window. You can include HTML instructions
in this field to enhance your survey with additional styles or graphics. The toolbar within the text
editor offers a number of buttons to automatically add HTML instructions within your survey.
Make This a Totals Row/Column
If the question type this label is associated with is a constant sum, you can specify that this row or
column is to display a row or column total. In the case of row or column totals, a greyed-out numeric
field (which respondents cannot edit) is displayed within the questionnaire (JavaScript must be
enabled) to let the respondent know the sum of the rows or columns for that series of questions.
i.5.17 Composing Constant Sum Questions
Constant sums are used to let the respondent allocate, say, 100 points across multiple items. Common
uses include amount of time spent in various activities, or amount of money spent on various items. You
can also ask respondents to allocate numbers that sum to a particular value that the respondent provided
in an earlier question.
Constant Sum Settings
Total
Type a value such as “100” into this field. This is the amount that the numeric entry items should
sum to. (You can provide a permissible range around the total, such as plus or minus 5, by
clicking the Advanced... button on this dialog.) You can also specify %questionname%, where
questionname is a name of a previous numeric question in your survey. In that case, the value
the respondent typed earlier in the questionnaire is used as the required sum.
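For example, assuming an earlier numeric question named budget (a hypothetical question name),
typing %budget% into the Total field requires respondents’ allocations to sum to whatever amount they
entered for budget.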
Allow Decimals
Specify if respondents are allowed to use decimals, and the number of decimal places permitted.
Question Direction
Lets you specify whether the response items should be arranged horizontally or vertically.
Require Response to Question
Specifies whether respondents are required to answer the question.
Require Response to each individual input box
Use this field to specify whether respondents must supply a value for every response option of
the multi-part constant sum question.
Include a “Total” at the bottom
If you want a running total to be displayed within the question, check this box. Also specify a
label to be placed next to that field. The totals field appears as a greyed-out text box that updates
as respondents enter values in the other fields in the constant-sum question. The totals value is
not stored in the data file. JavaScript must be enabled to display totals.
You can display additional messages to the user regarding the current sum and the required total.
%cursum returns the current sum, and %total returns the required total.
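For example, a label (hypothetical wording) placed near the totals field might read:
You have allocated %cursum of %total points.
When the survey runs on the server, %cursum and %total are replaced by the current sum and the
required total.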
The Constant Sum Question screen has three separate rows (each corresponding to the three sections
of the question), followed by five buttons: Rename, Advanced..., Preview, OK and Cancel.
Format controls (Color, Size, Bold, Italic, Underline, and Justification) are available for formatting text.
Advanced...
This button lets you specify additional settings for this question. You can specify that an
amount slightly lower or higher than the required sum is permissible. You can also specify
error text to use for this question.
Adding Response Options
The Response Options... button is for specifying the items within the constant sum question. Click
Response Options..., and the Specify Response Options screen is displayed.
1. From the Specify Response Options screen, click the Add button.
2. Type the response option in the space provided.
3. Click OK when finished, and the response option is added to the list.
Tip: If you are adding many response options at the same time, you can save time by pressing ENTER
twice after typing each response option. Pressing ENTER twice from the Response Option dialog is
equivalent to clicking OK and then clicking Add.
You can specify up to 99 response options. A numeric index is displayed at the left of your response
options for reference, but is not displayed as part of the question.
Click OK to save your changes and to return to the Constant Sum Question screen.
Randomizing Response Options
You can randomize all of the options, or a continuous subset of the options. Information regarding the
specific order shown to each respondent is not saved in the data.
Re-ordering, Copying and Pasting Response Options
The “up” and “down” arrow icons can be used for moving the selected response option up or down in the
list.
You can also highlight one or more items and cut or copy-and-paste the items to a new position on
this list, or to a response list for another select-type, ranking, or constant-sum question. Highlight the
item(s) to be moved or copied, click Edit | Cut Response Options or Copy Response Options. Move
the cursor to a new place on the list (or to a response list for a different question). Click Edit | Paste
Response Options.
i.5.18 Constant Sum Advanced Dialog
Some researchers suggest that there are situations in which the respondent should not be forced to
allocate exactly, say, 100 points within a constant sum question. Sometimes the researcher may be
satisfied if the respondent has allocated nearly the right number of points. This dialog lets you specify the
acceptable margin below and above the total required points for this constant sum question.
If the number of points to allocate is dynamically set for each respondent based upon the answer to a
previous question, then you should specify the maximum width of that numeric value. This is for display
purposes, so SSI Web knows how wide to make the field when showing the “totals” amount.
The Error Text field lets you provide additional “error text” for this question. The “error text” is the
additional text that can be shown to the respondent if there is something invalid regarding this question.
The text you provide here is used within the %errortext system variable when the survey runs on a server.
i.5.19 Composing Ranking Questions
Ranking questions are useful if you need respondents to rank multiple items from best to worst. Two
response formats are possible: combo box or numeric entry. In either case, respondents mark the best
item as “1" and the worst item as “N”, where N is the total number of items to be ranked. Each value
must only be used once. An error message prompts respondents if the ranking question is not completed
properly.
Ranking Format
Select either Combo Boxes or Numeric Input Boxes.
Require Response to Question
Check this if respondents are required to complete the question.
Question Direction
Specify whether the response options should be displayed horizontally or vertically.
The Ranking Question screen has three separate rows (each corresponding to the three sections of the
question), followed by five buttons: Rename, Advanced..., Preview, OK and Cancel.
Format controls (Color, Size, Bold, Italic, Underline, and Justification) are available for formatting text.
Advanced...
This button lets you specify additional error text to use for this question.
Adding Response Options
The Response Options... button is for specifying the items within the Ranking question. Click Response
Options..., and the Specify Response Options screen is displayed.
1. From the Specify Response Options screen, click the Add button.
2. Type the response option in the space provided.
3. Click OK when finished, and the response option is added to the list.
Tip: If you are adding many response options at the same time, you can save time by pressing ENTER
twice after typing each response option. Pressing ENTER twice from the Response Option dialog is
equivalent to clicking OK and then clicking Add.
You can specify up to 99 response options. A numeric index is displayed at the left of your response
options for reference, but is not displayed as part of the question.
Click OK to save your changes and to return to the Ranking Question screen.
Randomizing Response Options
You can randomize all of the options, or a continuous subset of the options. Information regarding the
specific order shown to each respondent is not saved in the data.
Re-ordering, Copying and Pasting Response Options
The “up” and “down” arrow icons can be used for moving the selected response option up or down in the
list.
You can also highlight one or more items and cut or copy-and-paste the items to a new position on
this list, or to a response list for another select-type, ranking, or constant-sum question. Highlight the
item(s) to be moved or copied, click Edit | Cut Response Options or Copy Response Options. Move
the cursor to a new place on the list (or to a response list for a different question). Click Edit | Paste
Response Options.
i.5.20 Composing Free Format Questions
Warning:
Because of the complexities of the Free Format Question type, you must completely test your
survey to ensure that it functions properly and that the data for your entire survey are being saved
correctly. We do not provide support to help you troubleshoot your own HTML and JavaScript.
Response Types
Currently the Free Format Question allows you to use any of the following types of variables:
RADIO
CHECKBOX
SELECT (Combo Box)
TEXT (Numeric)
TEXT (Text Box)
TEXTAREA
Hidden
Introduction
The SSI Web system provides pre-set templates for you to construct commonly used question types.
Most users find these adequate to compose their surveys. However, some projects (or clients) may
require that you create a certain look/layout with a question type that our templates don’t allow. For such
instances, the Free Format question type can provide those who know HTML near total control of the look
of a question or entire survey pages containing many questions. For example, you can create complex
“grid” questions, with combinations of numeric, open-end, check-box or radio button responses. To see
an example of the possibilities, please see the “free format” study that was installed in your Tutorials
directory.
You must be familiar with HTML to use the Free Format question type (the following URL has some of the
basic HTML tags with descriptions:
http://hotwired.lycos.com/webmonkey/reference/html_cheatsheet/index.html).
As the name of the Free Format question suggests, you supply all of the HTML code to display the
question and input controls. SSI Web still provides the <form> tag and its parameters, but almost
everything that you would put within (but not including) the <body></body> tags is up to you (with the
exception of the “submit” button control). Free Format question HTML code cannot contain any <HTML>,
<BODY>, or <FORM> tags (as with all CiW questions, SSI Web generates these tags).
Adding a Free Format Question
A Free Format question can contain a single question or multiple questions (variables). If using multiple
variables, they need not all be of the same response type. You can include combinations of
numerics, combo boxes, radio buttons, check boxes and open-end text fields within the same Free
Format question. To add a Free Format question to your survey, click the Add button from the Write
Questionnaire screen. The new Free Format question is inserted directly after any highlighted question in
the List of Questions. If no question is highlighted, the new question is inserted at the end of the list.
The Add Question screen is displayed. Choose the CiW question type and then select the Free Format
option in the combo box. Type a question name in the Question Name field. Question names can
include up to 12 letters or numbers (no spaces or underscores permitted, and names cannot start with a number).
Please note that the name given to the Free Format question plus an underscore character that SSI Web
supplies acts as a prefix to all of the variables you define within this particular Free Format question. For
example, you may wish to add a multi-part numeric question that has three response blanks. If you name
your Free Format question “FF1”, you later assign three variable names with FF1_ as a prefix (e.g.
“FF1_A”, “FF1_B”, “FF1_C”). By requiring you to provide a unique prefix name for Free Format
questions, SSI Web can help you make sure you do not accidentally give two different variables within
your survey the same name, resulting in potential data loss.
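To illustrate, here is a minimal sketch of HTML that such a three-blank Free Format question might
contain (the category labels are hypothetical; each NAME attribute matches one of the three Free Format
Variables you define):
Breakfast: <INPUT TYPE="text" NAME="FF1_A" SIZE="3" MAXLENGTH="3"> <BR>
Lunch: <INPUT TYPE="text" NAME="FF1_B" SIZE="3" MAXLENGTH="3"> <BR>
Dinner: <INPUT TYPE="text" NAME="FF1_C" SIZE="3" MAXLENGTH="3">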
Click Next. The Free Format Question screen is displayed.
Free Format Question Screen
The Free Format Question screen is divided into two parts. The lower section contains a text box where
you compose your HTML code. (HTML can also be written in a separate editor and then copied and
pasted into this section.) The upper section contains a list box of Free Format variables. Remember, a
Free Format question actually can contain many sub-questions (Free Format Variables). The Free
Format Variables you add to the Variable Names list must correspond exactly to the input tags and
variable names you write within your HTML section (including distinction between upper and lower case).
(Using the Free Format Tools button helps ensure that your variable names match.) Each Free
Format Variable counts as a CiW question against your license limitation.
Free Format Variables
Free Format Variables must have unique names that begin with the name of the Free Format question.
When specifying the name of a variable, the Free Format question name is made the first part of the
variable name automatically. It is very important that the full variable name be exactly the same as the
name parameter for the corresponding input tags in your HTML code section (including case).
Error Messages to Respondents
You may also specify an Error Message for Free Format Variables. This message is used by the
verification code to direct the respondent to the specific response item/question that needs to be
corrected. For example, if you define a multi-part question with numeric blanks in which respondents type
ratings for restaurant chains, you may define the Error Message as “McDonald’s” for the text to be
displayed to respondents if they forget to provide a score within the correct range for McDonald’s. The
error message might return: “I’m sorry, but you forgot to answer question McDonald’s”.
The Error Message text you provide for Free Format variables is assigned to the %errortext internal
variable when displaying error messages within the interview. To enable error messages for respondents
that return the question name (or Error Message for Free Format variables), make the change under
Compose | Write Questionnaire... | Global Settings | Error Messages... . Rather than use %field (that
returns the question number on the current page that needs to be remedied), use %errortext.
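For example, a customized error message template (hypothetical wording) might read:
I’m sorry, but you forgot to answer question %errortext. Please answer the question and click the
Next button again.
When an error occurs, %errortext is replaced by the Error Message you supplied for the Free Format
variable (e.g. “McDonald’s”).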
Note: If you turn off the JavaScript verification (or the respondent does not have JavaScript enabled) and
use Perl verification instead (under Global Settings), then if the respondent enters invalid data, when that
Free Format question is re-displayed (along with the error message), the respondent will lose any
previously entered data for that Free Format question and will need to re-enter it before submitting again.
i.5.21 SELECT (Combo Box) (Free Format Question Type)
Warning:
Because of the complexities of the Free Format Question type, you must completely test your
survey to ensure that it functions properly and that the data for your entire survey are being saved
correctly.
General Information about Free Format Questions
To display a combo box type input within your Free Format Question, you define a Free Format Variable
with SELECT (Combo Box) as the Variable Type.
Within the HTML editing area, specify a <SELECT> HTML tag in your HTML code. The easiest way to do
this is by using the Free Format Tools button. When using the Free Format Tools button, SSI Web
automatically provides default HTML with appropriate variable naming conventions.
The name of the Free Format Variable must be the same as the name attribute in your HTML code for the
<SELECT> tag (including case).
Checking Required Response requires a respondent to choose an option other than the first visible
default option. Be sure to change the Default_option text between the <Option> tags. Not checking the
Required Response allows the respondent to skip this question.
The MULTIPLE (multiple input list box) attribute is not yet supported in SSI Web.
SELECT (Combo Box) Example
(The following parameters represent a Free Format Question with one variable defined.)
Free Format Question Name: FF1
Variable Name: FF1_COLOR
Variable Type: SELECT (Combo Box)
Error Message String: Favorite Color
Reject Default Response: (checked)
Example HTML Code:
Please select your favorite color: <BR>
<SELECT NAME="FF1_COLOR">
<OPTION SELECTED VALUE="99">Choose One</OPTION>
<OPTION VALUE="1">Red</OPTION>
<OPTION VALUE="2">Green</OPTION>
<OPTION VALUE="3">Blue</OPTION>
</SELECT>
Note that we assigned numeric values to the response options. This permits you to tabulate responses
within the Marginals program in the on-line administrative module. Although we specified a value of “99”
for the default response, in reality this value should never occur in the dataset, as SSI Web would not let
respondents continue unless they choose an option other than this default.
i.5.22 TEXTAREA (Free Format Question Type)
Warning:
Because of the complexities of the Free Format Question type, you must completely test your
survey to ensure that it functions properly and that the data for your entire survey are being saved
correctly.
General Information about Free Format Questions
The TEXTAREA variable is used for asking longer, open-ended text questions. Within the Free Format
Variable dialog, specify a Free Format Variable with TEXTAREA as the Variable Type.
Within the HTML editing area, use a <TEXTAREA> HTML tag in your HTML code. The easiest way to
do this is by using the Free Format Tools button. When using the Free Format Tools button, SSI Web
automatically provides default HTML with appropriate variable naming conventions.
The name of the Free Format Variable must be the same as the name attribute in your HTML code for the
<TEXTAREA> tag (including case). If desired, you can check the Require Response option and then
specify the minimum number of characters you require the respondent to enter. SSI Web forces the
respondent to enter at least the minimum and no more than the maximum number of characters before
continuing to the next question.
TEXTAREA Example
(The following parameters represent a Free Format Question with one variable defined.)
Free Format Question Name: FF2
Variable Name: FF2_WORKEXP
Variable Type: TEXTAREA
Require Response: (checked)
Error Message String: Work Experience
Minimum Characters: 30
Example HTML Code:
Please tell us about your past work experience: <BR>
<TEXTAREA NAME="FF2_WORKEXP" ROWS="3" COLS="35">
</TEXTAREA>
(Regardless of the length permitted in the textarea box, responses are trimmed to a maximum of 10,000
characters to maintain data file integrity.)
i.5.23 TEXT (Text Box) (Free Format Question)
Warning:
Because of the complexities of the Free Format Question type, you must completely test your
survey to ensure that it functions properly and that the data for your entire survey are being saved
correctly.
General Information about Free Format Questions
The TEXT (Text Box) variable is used for asking shorter text questions. Within the Free Format Variable
dialog, specify a Free Format Variable with TEXT (Text Box) as the Variable Type.
Within the HTML editing area, use an <INPUT> HTML tag with the type attribute set to "text." The
easiest way to do this is by using the Free Format Tools button. When using the Free Format Tools
button, SSI Web automatically provides default HTML with appropriate variable naming conventions.
The name of the Free Format Variable must be the same as the name attribute in your HTML code for the
<INPUT> tag (including case). If desired, you can check the Require Response option and specify the
minimum and maximum number of characters you wish the respondent to enter (similar to the
TEXTAREA variable) before continuing.
TEXT (Text Box) Example
(The following parameters represent a Free Format Question with one variable defined.)
Free Format Question Name: FF3
Variable Name: FF3_LASTNAME
Variable Type: TEXT (Text Box)
Require Response: (checked)
Error Message String: Last Name
Minimum Characters: 2
Example HTML Code:
Last Name: <BR>
<INPUT TYPE="text" NAME="FF3_LASTNAME" SIZE=”100">
Note: The INPUT tag has a SIZE attribute that determines the size of the text box. It also has a
MAXLENGTH attribute that limits the maximum number of characters that can be typed into the box.
Regardless of the number of characters specified, responses are trimmed to a maximum of 10,000
characters to maintain data file integrity.
i.5.24 TEXT (Numeric) (Free Format Question Type)
Warning:
Because of the complexities of the Free Format Question type, you must completely test your
survey to ensure that it functions properly and that the data for your entire survey are being saved
correctly.
General Information about Free Format Questions
The TEXT (Numeric) variable is used for asking numeric-only questions. Within the Free Format
Variable dialog, specify a Free Format Variable with TEXT (Numeric) as the Variable Type.
Within the HTML editing area, use an <INPUT> HTML tag with the type attribute set to "text." The
easiest way to do this is by using the Free Format Tools button. When using the Free Format Tools
button, SSI Web automatically provides default HTML with appropriate variable naming conventions.
The name of the Free Format Variable must be the same as the name attribute in your HTML code for the
<INPUT> tag. If desired, you can check the Require Response option, which forces respondents to enter a
numeric value. You must specify a Minimum Value and a Maximum Value for the variable. SSI Web
requires the respondent's answer to be within that specified range. If you would like the respondent to be
able to enter decimal type values, then check the Allow Decimal Answers check box and specify the
number of decimal places to allow.
TEXT (Numeric) Example
(The following parameters represent a Free Format Question with one variable defined.)
Free Format Question Name: FF4
Variable Name: FF4_CARS
Variable Type: TEXT (Numeric)
Require Response: (checked)
Error Message String: Number of Cars
Minimum Value: 0
Maximum Value: 10
Allow Decimal Answers: (not checked)
Example HTML Code:
How many cars do you currently own?
<INPUT TYPE="text" NAME="FF4_CARS" SIZE="2" MAXLENGTH="2">
i.5.25 CHECKBOX (Free Format Question Type)
Warning:
Because of the complexities of the Free Format Question type, you must completely test your
survey to ensure that it functions properly and that the data for your entire survey are being saved
correctly.
General Information about Free Format Questions
The CHECKBOX variable is used for presenting a set of checkboxes in your survey. Within the Free
Format Variable dialog, specify a Free Format Variable with CHECKBOX as the Variable Type. Even
though the response to each checkbox is stored independently as a unique variable name, you do not
specify at this point each variable name associated with each checkbox. Specify the Number of
Response Options, or the number of checkboxes you have in the set. You may also specify the Min
Checks and the Max Checks to force the respondent to check a certain number of checkboxes (within the
range you specified).
Within the HTML editing area, specify an <INPUT> HTML tag with the type set to "checkbox" for each
checkbox in the set. The easiest way to do this is by using the Free Format Tools button. When using
the Free Format Tools button, SSI Web automatically provides default HTML with appropriate variable
naming conventions.
The name attribute for the <INPUT> tags must be unique and must be the name of the Free Format
Variable followed by an underscore “_” character and then a sequential integer starting at one (the last
integer is the same as the value you specified in the Number of Response Options).
We recommend that the value attribute of all <INPUT> tags be set to 1. This ensures that the collected
data are a '1' if the respondent checked the checkbox and a '0' if they did not check it.
CHECKBOX Example
(The following parameters represent a Free Format Question with one variable defined.)
Free Format Question Name: FF5
Variable Name: FF5_HOBBIES
Variable Type: CHECKBOX
Require Response: (checked)
Error Message String: Hobbies
Number of Response Options: 5
Min Checks: 0
Max Checks: 5
Example HTML Code:
Please select activities that you like as hobbies from the following list: <BR>
<I> (Check all that apply) </I> <BR>
<INPUT TYPE="checkbox" NAME="FF5_HOBBIES_1" VALUE="1"> Walking <BR>
<INPUT TYPE="checkbox" NAME="FF5_HOBBIES_2" VALUE="1"> Running <BR>
<INPUT TYPE="checkbox" NAME="FF5_HOBBIES_3" VALUE="1"> Hiking <BR>
<INPUT TYPE="checkbox" NAME="FF5_HOBBIES_4" VALUE="1"> Swimming <BR>
<INPUT TYPE="checkbox" NAME="FF5_HOBBIES_5" VALUE="1"> Eating <BR>
i.5.26 RADIO (Free Format Question Type)
Warning:
Because of the complexities of the Free Format Question type, you must completely test your
survey to ensure that it functions properly and that the data for your entire survey are being saved
correctly.
General Information about Free Format Questions
The RADIO variable is used for asking the respondent to select a single option from a set of response
options. Within the Free Format Variable dialog, specify a Free Format Variable with RADIO as the
Variable Type.
Within the HTML editing area, specify an <INPUT> HTML tag with the type set to "radio" for each radio
button in the group. The easiest way to do this is by using the Free Format Tools button. When using
the Free Format Tools button, SSI Web automatically provides default HTML with appropriate variable
naming conventions.
The name attribute for each <INPUT> tag must be the name of the variable (all radio <INPUT> tags
should have the same name attribute). The value attribute for each <INPUT> tag should be set to a
unique value, so you can determine which option the respondent selected (we recommend sequential
integers starting with one).
RADIO Example
(The following parameters represent a Free Format Question with one variable defined.)
Free Format Question Name: FF6
Variable Name: FF6_INCOME
Variable Type: RADIO
Require Response: (checked)
Error Message String: Annual Income
Example HTML Code:
What is your annual income? <BR> <BR>
<INPUT TYPE="radio" NAME="FF6_INCOME" VALUE="1">$0 - $19,999 <BR>
<INPUT TYPE="radio" NAME="FF6_INCOME" VALUE="2">$20,000 - $39,999 <BR>
<INPUT TYPE="radio" NAME="FF6_INCOME" VALUE="3">$40,000 - $59,999 <BR>
<INPUT TYPE="radio" NAME="FF6_INCOME" VALUE="4">$60,000 - $79,999 <BR>
<INPUT TYPE="radio" NAME="FF6_INCOME" VALUE="5">$80,000 or more
i.5.27 Hidden (Free Format Variable)
This is an advanced area for declaring a hidden variable using JavaScript and setting its value.
Example #1: Setting the hidden variable equal to a random number used in the survey.
Suppose you declared a random value earlier in the survey (such as %randnum53) and wanted to store
the value of that random number in your data set.
1. Add a Free Format question to your survey, and a variable (e.g. myvar1_1) of type hidden.
2. Edit the HTML (using the “pencil” icon), and use the Free Format Tools button to select myvar1_1 from
the drop-down control list, inserting the following text into your HTML:
<input name="myvar1_1" type="hidden">
Then, edit that text to assign the value equal to the system variable %randnum53:
<input name="myvar1_1" type="hidden" value="%%randnum53%">
Note that you must enclose the system variable in percent signs (%%randnum53%).
Example #2: Setting the hidden variable equal to the sum of two numeric variables on a previous screen.
Assume the two previous numeric variables were Value1 and Value2.
1. Add a Free Format question to your survey, and a variable (sumvar1_1) of type hidden.
2. Edit the HTML (using the “pencil” icon), and use the Free Format Tools button to insert the following
text into your HTML:
<input name="sumvar1_1" type="hidden">
3. Add additional JavaScript instructions within the HTML. Use the JavaScript icon on the toolbar to add
opening and closing JavaScript tags:
<script language="javascript">
<!--
//-->
</script>
4. Modify the JavaScript tags as follows:
<script language="javascript">
<!--
document.mainform["sumvar1_1"].value = parseFloat(%Value1%) + parseFloat(%Value2%);
//-->
</script>
i.5.28 JavaScript Question
Warning:
Because of the complexities using JavaScript questions, you must completely test your survey to
ensure that it functions properly and that the data for your entire survey are being saved correctly.
General Information about Free Format Questions
There might be times when our default question verification will not accomplish the functionality you want.
The “JavaScript” question type allows you to easily add your own custom JavaScript verification to SSI
Web. With SSI Web, JavaScript “questions” are most often used with the Free Format question type.
Your custom JavaScript verification runs when the respondent clicks the "Submit" button.
This dialog includes two text fields in which you can supply JavaScript. Note that SSI Web provides
default JavaScript (as shown in the dialog) before and after the script you provide. This default script
(along with the script you provide) is written to the html page when the questionnaire runs over a server
(as long as JavaScript is enabled in your questionnaire under Compose | Write Questionnaire | Global
Settings... and also on the respondent’s browser).
If you place a JavaScript question on a page, then when the user clicks "Submit" on that page, the
SSI_CustomJScriptShell() function is called. You can indicate (using the drop-down control) whether
the custom function should be called before or after SSI Web’s default JavaScript verification.
You can write any JavaScript that you want inside the body of the function. You are responsible for
setting bln_custom_result to true or false. If set to true the page will submit. If set to false the page will
not submit to the web server.
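For illustration, the body you supply might look something like the following minimal sketch. The variable
name FF4_CARS is borrowed from the earlier TEXT (Numeric) example, and the form name mainform
follows the hidden-variable example in section i.5.27; adapt both to your own survey:
// Hypothetical rule: reject the page if the number of cars seems too high
if (parseFloat(document.mainform["FF4_CARS"].value) > 10)
{
   alert("Please double-check the number of cars you entered.");
   bln_custom_result = false;   // the page will not submit
}
else
{
   bln_custom_result = true;    // the page submits to the web server
}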
We suggest you only use one JavaScript “question” per page, and that it be placed on the same page as
the question you are verifying. Do not place a JavaScript question on the login page.
i.5.29 Using HTML Elements to Enhance Your Surveys
(This section gives a brief overview of using HTML elements within SSI Web. More HTML help is
included in the HTML Cheat-Sheet.)
HTML (HyperText Markup Language) is the language of the Web. You don’t need to know any HTML
code to use this software. But users who know (or are willing to learn) a few simple HTML tricks can gain
some nice benefits.
You can include HTML instructions to:
Add styles to specific words or letters (like bold, italics, or underlining)
Change font sizes of specific words or letters
Display graphic images
Nearly any HTML instruction can be included in your survey. You simply type HTML elements into the
text entry fields in the SSI Web interface.
Notes: Be careful with the HTML elements you add within your surveys! You should only include HTML
instructions that belong between the <BODY> </BODY> tags of the html script. It would be a mistake to
use <HTML> or <FORM> tags in the HTML code you add to your questionnaires, as they will likely interfere
with the <HTML> and <FORM> section instructions that our software automatically generates when it
runs on the server. Also, you probably wouldn’t want to add a hyperlink that would send respondents out
of the survey, turn off forms mode using </FORM>, or create a submit button or additional input field that
would interfere with the survey’s progress.
Any of a number of books and websites can instruct the interested user on HTML basics. Here is a
partial list of HTML elements that can be useful (and safe to use) in your Internet surveys:
Adding Styles
Italics
to turn on: <I>
to turn off: </I>
Type:
Please be <I>very</I> careful!
to create:
Please be very careful!
Bold
to turn on: <B>
to turn off: </B>
Underline
to turn on: <U>
to turn off: </U>
Adding Hard Returns
Internet browsers typically wrap your text if it doesn’t fit into the allotted space. Sometimes you may want
to control where your phrases break. You can insert hard returns using the <BR> element.
Type:
Please be <BR>very careful!
to create:
Please be
very careful!
Type:
<P>
to create:
hard return plus blank line
Inserting Horizontal Lines
Type:
<HR>
to create:
Horizontal line spanning the current text cell, or spanning the current line.
Note: When using the Free Format question type, you will probably need to know more than just the
“basics” of HTML. The Free Format question type turns all of the question-writing responsibility over to
you, including specifying the <INPUT> tags. We have some very simple examples documented in the
Free Format section.
i.6.1 Including Graphics in Your Questionnaire
You can display graphics almost anywhere within an SSI Web study. In previous versions of SSI Web, you
added graphics by typing an <IMG> tag with the proper specifications in any field in the questionnaire.
However, this sometimes led to “pathing” problems when shifting from SSI Web’s Local Web server to
your remote web server. In this version of SSI Web, we have automated the way that SSI Web manages
graphics for testing (under Local Web) and for running over servers.
(Note: Please see the section on Graphics Management for more information regarding how SSI Web
manages graphics, and how you can customize how graphics are used and stored on the server.)
Adding Graphics
To add a graphic, click the Add Image icon on the toolbar accessed by first clicking the “Pencil” edit icon.
Then, browse to the folder containing your graphic, and double-click the graphics file. When you do this,
the following happens:
1. An HTML <IMG> tag is inserted within your questionnaire:
<IMG SRC="%%graphics_path%yourgraphic.???" BORDER="0">
where “yourgraphic.???” is the name of the graphic file you added, and %%graphics_path% is a
system variable containing the path from the cgi-bin directory to your graphics directory (or from
the login.htm file to your graphics, if the graphic is on the login screen), as specified under File |
Settings | Advanced...
2. The file “yourgraphic.???” is copied into a graphics directory directly off of your study
directory.
You may be tempted to type the <IMG> tag contents yourself for a new graphic, or cut-and-paste the
<IMG> tag contents to another part of your survey and alter the referenced graphic. If you do this, make
sure to copy the new graphic into the graphics directory, or the graphic will not appear when testing your
survey. For this reason, you may discover that it is easier to just use the Add Image icon.
We suggest you use the system variable %%graphics_path% for managing the paths to graphics for your
survey. However, you can edit other aspects of the <IMG> tag, such as providing size parameters or
changing the border width.
Different browsers support different graphics formats. For best results across different browsers, your
image should probably be saved in .GIF or .JPG format.
Internet browsers display images in their original size; they do not automatically size them to fit within
your interview or in relation to your question area. Plan ahead and size your images to look good under
different screen resolutions.
i.6.2 Graphics Management
In earlier versions of SSI Web, we assumed that graphics would be copied into the cgi-bin directory on
the server. However, many ISPs prohibited graphics from residing in the same directory as the Perl
scripts. When users created customized paths to graphics appropriate for their server setups, they then
found that the preview mode and Local Web server could not display the graphics due to incorrect paths
with respect to the local installation on their hard drives.
To resolve these issues, this version of SSI Web assumes a server installation that contains a separate
graphics directory. We use a system variable called %%graphics_path% to specify the path to that
graphics directory. When the survey is run on a server, the text “%%graphics_path%” is replaced by the
paths specified under File | Settings | Advanced... There are two paths related to graphics:
If the graphic is located on the login page, then the system variable %%graphics_path% is
replaced with the text supplied in the Relative Path From login.htm to Graphics field.
If the graphic is located anywhere other than the login page, then the system variable
%%graphics_path% is replaced with the text supplied in the Relative Path From CGI Scripts to
Graphics field.
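For illustration, suppose (hypothetically) that the Relative Path From CGI Scripts to Graphics field
contains ../graphics/. A tag within the questionnaire such as:
<IMG SRC="%%graphics_path%logo.gif" BORDER="0">
would then be written to the page as:
<IMG SRC="../graphics/logo.gif" BORDER="0">
when the survey runs on the server (logo.gif is a hypothetical file name).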
If you use the %%graphics_path% system variable, Local Web Server always looks to the graphics
subdirectory off of your study directory regardless of what you have specified under File | Settings |
Advanced... But, when running on the server, SSI Web uses the paths to graphics specified under File |
Settings | Advanced...
One benefit to using the %%graphics_path% variable is that if you need to change the path definitions for
graphics, you only need to change them in one place (under File | Settings | Advanced...).
If you want to, you can choose not to use the system variable when defining the paths to graphics. You
can provide an <IMG> tag with a relative or absolute path of your choice (where the path is the path from
the Perl scripts to the directory containing the graphic, or the login.htm file to the graphic). However,
customizing the paths will most likely mean that the graphics cannot be found when testing
with Preview or the Local Web Server.
i.7.1 Layout Dialog
There are a number of options for organizing your questions in your survey. You can display each
question on a separate page in your survey. Or, you can organize questions into groups displayed on
separate pages.
You could also put all questions on just one page, but there are a number of reasons you probably
shouldn’t:
1. Too many questions on a single page can make for a very long and clumsy form. The load
time might be excessive for a page including the complete text and graphics of your
questionnaire.
2. Response verification for numeric ranges or combo boxes would only occur at the very end of
the survey. Respondents might have a difficult time reviewing such a long list of questions to
discover their errors.
3. Respondents might look ahead to sections of the questionnaire that you’d rather they didn’t
see until the proper time.
4. Respondents who wanted to quit a survey early and resume at a later time would lose all their
work. If you organize your survey into separate pages, respondents who quit early only lose the
questions they’ve answered since clicking the last Submit button (assuming each respondent
has a unique password with a quota of 1).
5. If using Adaptive Conjoint Analysis (ACA), multiple pages are required as ACA re-calculates
estimates of respondent preferences during the different stages of the survey.
We suggest that you organize your survey questions in logical groups displayed on separate pages. You
might discover that some questions (such as conjoint questions, or questions requiring verification) are
best shown one at a time, even though this means more back-and-forth between the server and the
respondent’s PC. As Internet access speeds improve, this will become less of a concern. Even now,
most users will not notice long pauses between page breaks.
Setting Page Breaks and Previewing Pages
At the right-hand side of the Layout screen is a list of all the questions you’ve created to this point. Page
numbers are also listed. If respondents are actively connecting to the Internet to take your survey, you’ll
probably want the passwords question (login screen) to always stand alone as page 1. If you are using
the delayed email option, you may want to have other questions (or all questions) appear on page 1.
Setting Page Breaks
1. Highlight the question that you want to have head a separate page.
2. Click Set Page Break, and a new page number is created. Any subsequent page numbers are
automatically adjusted.
Clearing Page Breaks
1. Click the first question of the page definition you want to remove.
2. Click Clear Page Break, and the page break is removed. Any subsequent page numbers are
adjusted. (Note: page renumbering is not necessarily carried through correctly to the page
skip area. Please be sure to check any skip logic (branching) and randomized sequences before you
proceed.)
Preview
Click the Preview button to see how the Internet browser displays the current page. The Submit button is
also displayed as part of the complete survey page. When you are previewing a page, clicking Next
Page displays the next page in the survey.
Randomize Questions
Lets you specify a series of questions to be presented in random order when the questionnaire is run over
the Web server.
Page Skips
Lets you specify logical rules for skipping certain pages, based on answers to previous questions.
Cancel
If you accidentally click the Clear All button and don’t want to lose your previous page breaks, click
Cancel.
OK
Click OK to save your changes and return to the Write Questionnaire screen.
i.7.2 Randomize Questions
There are times in which the researcher may wish to display a series of questions in random order. This
is most often done to reduce response bias due to question order. Question order bias occurs when
respondents answer differently to later questions because of the influence of previously answered
questions. If questions are displayed in random order, question order bias still exists at the individual
level. But, because individuals each receive a different question ordering, the bias is nearly eliminated
when viewing aggregate results.
You can randomize the order that questions are asked within the same page. You cannot randomize the
order of questions across pages.
To randomize the order that questions are asked on a page:
1) Click Compose | Write Questionnaire and click the Layout button. From the Layout dialog,
click the Randomize Questions button.
2) Click Add Randomized Sequence. Select the beginning question. Next, a list of other available
questions within that same page is displayed. Click an ending question. All questions from the
beginning to the ending question (including the beginning and ending questions) are displayed in
random order when the questionnaire runs on a Web server.
Randomize Questions Grid
Each row in the Randomize Questions grid reflects a series of two or more questions to be randomized.
Beginning Question
Indicates the first question within a randomized question order sequence.
Ending Question
Indicates the last question within a randomized question order sequence.
Add Randomized Sequence
Lets you add a new randomized sequence.
Delete Randomized Sequence
Lets you delete an existing randomized sequence. To delete a specific row in the grid, click on any cell
within the row, and then click the Delete Randomized Sequence button.
Example
Assume a questionnaire with the following natural question order: Q1, Q2, Q3a, Q3b, Q3c, Q4, Q5. To
randomize the presentation of the Q3 series (Q3a, Q3b, and Q3c):
1. Use “Layout” to make sure the questions you want to randomize are all located on the same page.
2. Click the Add Randomized Sequence button. At the Select Beginning Question screen, choose
the first question in the question series to be randomized, Q3a, and then click OK.
3. The Select Ending Question dialog is displayed. Click the last question in the series to be
randomized, Q3c, and then click OK.
The possible outcomes for respondents are:
Q1, Q2, Q3a, Q3b, Q3c, Q4, Q5.
Q1, Q2, Q3a, Q3c, Q3b, Q4, Q5.
Q1, Q2, Q3b, Q3a, Q3c, Q4, Q5.
Q1, Q2, Q3b, Q3c, Q3a, Q4, Q5.
Q1, Q2, Q3c, Q3a, Q3b, Q4, Q5.
Q1, Q2, Q3c, Q3b, Q3a, Q4, Q5.
Each of these sequences has approximately an equal chance of occurrence.
Note: Adding and deleting questions or setting and removing page breaks can invalidate randomized
sequences that have been set. Because of this, we recommend that you add your randomizations after
you feel confident that the layout of the survey is relatively stable.
i.7.3 Page Skip Logic
Skip patterns let respondents skip over pages of the questionnaire that don’t apply, based on previous
answers. Skip patterns are executed at the end of a page (when the respondent clicks the submit
button). The respondent is skipped to a question at the beginning of another page.
For example, assume that a question called OWNLAPTOP (1=yes, 2=no) is asked on page 5 of a
questionnaire. Let’s say that we want to ask a series of questions (on page 6) about the laptop that the
respondent owns if the answer to OWNLAPTOP is equal to 1. If the respondent doesn’t own a laptop, we
want to skip them to page 7. Follow these steps:
1. Click Compose | Write Questionnaire and click the Layout button. Make sure that you have
created page breaks to isolate the follow-up questions about laptop ownership on page 6.
2. From the Layout dialog, click Page Skips. Click Add. The Add Skip Statement dialog is
displayed. Using the drop-down list, change the From Page field to “5” and change the Skip to
Page field to “7”.
3. You can either type the name of the question to be referenced (“OWNLAPTOP”) or search for
the question in the scroll box at the bottom left. Once you have found the question in the scroll
box, double-click it, and the question name is inserted within the next available input box.
4. Directly to the right of the Question Name OR Literal Value field you just modified, choose the
Operator “=” using the drop-down box. Then, type a “2” in the Question Name OR Literal Value
field directly to the right of that. The table should read as follows:
From Page: 5
Skip to Page: 7
If Condition is True (Question/Value, Operator, Question/Value): OWNLAPTOP = 2
You can add additional AND/OR statements to execute more complex logic.
You can perform skip patterns based on an open-end text response. Type the text string into the
Question/Value field (surrounded by double quotes). Example: Q1="hello". The string compare logic is
not case sensitive. You can also evaluate empty strings (Q1 = "").
Skip patterns only operate when the survey runs on a Web server (not during preview mode).
Unconditional Skips
Sometimes you want to perform a skip that has no conditions (it is always executed). The section below
“Randomly Displaying a Subset of a Block of Questions” is an example of an unconditional skip.
Unconditional skips can also be useful, for example, for displaying a follow-up message if the response is
somehow unacceptable. Consider a numeric question (on page 15) that is used to indicate the
respondent’s age. You can create an Explanatory Text question (no response required) on the next page
(page 16) directly following that numeric question. That text might state:
“You must be 18 to qualify for this survey, please have the individual in your household who is 18
or older and whose birthday is closest to today continue this survey.”
If the numeric value is greater than 17, the respondent is skipped to page 17. But, if the value is 17 or
less, page 16 is displayed with the text asking for an individual 18 or older. An unconditional skip is
created from page 16 back to page 15, as follows:
From Page: 16
Skip to Page: 15
If Condition is True (Question/Value, Operator, Question/Value): <leave all three fields blank>
The survey cannot continue until a valid age is entered. (We aren’t advocating this approach from a
survey research methodology standpoint, but have used this example as an illustration of unconditional
skips.)
Randomly Displaying a Subset of a Block of Questions
You can use a random system variable to display a random subset of a block of questions. For example,
assume four pages of questions (pages 11 through 14) and that we want respondents to see just one
randomly selected page. Random continuous numbers (rectangularly distributed) in the range of 0 to
1 (but not including exactly 1) are available as system variables. You specify %RandnumN, where N is
any integer and N is used as a seed.
From page 10, we would:
skip to page 11 if %Randnum1<.25
skip to page 12 if %Randnum1>=.25 and %Randnum1<.5
skip to page 13 if %Randnum1>=.5 and %Randnum1<.75
skip to page 14 if %Randnum1>=.75
To skip out of the sequence after completing the appropriate page, three separate unconditional skips
(described above) are created to skip to page 15 from each of pages 11 through 13.
Additional Notes: the random numbers are drawn with at least 5 decimal places of precision. Using the
same random number seed (N) results in the same value for the generated random number for a given
respondent, but different values across respondents. You can display the %RandnumN on the screen
for debugging purposes by specifying %%RandnumN%, where N is an integer.
Performing Skips Based on “Select All That Apply” (Multi-Select) Questions
You can perform skip patterns based on “select all that apply” questions. Assume a multiple-response
question named Q1 that has 5 options. Respondents can register up to 5 answers. If item number three
on the list was selected, then that item is referenced using “underscore notation”: Q1_3 = 1. If item three
was not selected, then Q1_3 = 0. If item number 1 was selected, then Q1_1 = 1, etc. (Note that the
“underscore” notation here acts quite differently from “dot notation” for multiple selects as is used in Ci3.
With our web-based interviewing system, you cannot detect which item in a multiple-select was selected
first, second, etc. as you can with Ci3.)
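For example, a skip statement (with hypothetical page numbers) that branches when item three of Q1
was checked might read:
Skip from page 8 to page 10 if Q1_3 = 1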
Randomizing items within a multiple-response list does not affect the operation of the system variable
Question_#. The item number after the underscore always refers to that item’s position within the original
un-randomized list.
Prohibited “Skip Backs”
1. It is prohibited to skip back to page one (the password page).
2. You cannot skip from a page containing a terminating question.
Notes:
SSI Web attempts to change the page numbers within your skip patterns when you make a modification
to the questionnaire that renumbers the pages in the survey, but you should always double-check that the
skip patterns are still specified properly. For this reason, we suggest you implement your skip patterns as
one of the last steps in composing a study.
Skip patterns are executed in reverse when respondents back up through the questionnaire. But, if
respondents back up, change an answer, and proceed down a new “skip pattern” path, some of the data
previously entered (on now invalid pages) may no longer be valid. For example, consider the example
above wherein respondents are asked a series of follow-up questions if they own a laptop PC. Assume a
respondent starts to answer the questions on the page for laptop owners, then, recognizing the mistake,
backs up and changes his response regarding laptop ownership to “no.” The follow-up page for laptop
owners is skipped. But, the responses already provided on that page still remain in the data file. After
exporting the data to ASCII format, you need to clean the data using your own data processing software
to remedy such problems.
You can conduct skip patterns based on the Password or UserName from the passwords screen. Refer
to those in the skip pattern editor as %Password and %UserName (these labels are case sensitive).
You can conduct skip patterns based on JavaScript status. If %JavaScript=1, then the respondent has
JavaScript enabled. If %JavaScript=0, then JavaScript is disabled. For respondents that do not have
JavaScript enabled, you may want to skip them to a page that says “To enable JavaScript in Microsoft
Internet Explorer, go to Tools, Internet Options, Security, Custom Level. Under Scripting, Active
Scripting, choose Enable. To enable JavaScript in Netscape Navigator, go to Edit, Preferences,
Advanced, and then check Enable JavaScript.”
Skip pattern logic is processed in order from the first skip pattern listed in the Page Skip Logic dialog to
the last. When a respondent clicks the submit button, the first applicable (“true”) skip instruction is
executed, and all other skip logic statements involving skips from that same page (even if “true”) are
ignored.
SSI Web does not offer the general ability to control the precedence of AND/OR operations. However,
there are specific situations in which you can control the order of AND/OR operations. For example, the
skips:
Skip from page 4 to page 6 if x=1 AND y=1
Skip from page 4 to page 6 if y=2 AND z=1
Skip from page 4 to page 6 if z=3 AND y=1
are processed as:
Skip from page 4 to page 6 if ((x=1) AND (y=1)) OR ((y=2) AND (z=1)) OR ((z=3) AND (y=1))
i.7.4 Displaying Responses Given to Previous Questions
You can display responses given to previous questions by referring to variable names or system
variables, enclosed by percent symbols. The following are available:
%QuestionName% (returns open-end text, numeric values, or item labels for single-response
select-type questions for the question name indicated) Example: %Q1%.
%QuestionName.label% (returns the label associated with the selected response to a single-response
Select-type question or Select-type question within a Grid.)
%QuestionName.value% (returns the value associated with the selected response to a Select
question or Select question within a Grid. Does not work for Free Format questions)
%QuestionName_#% (returns a 1 (checked) or a 0 (not checked) for a specific list item in a
multiple-response select question. For example, %Q1_3% returns a 1 if item three was checked
in the check-box question Q1, or a 0 if not checked.)
%%UserName% (returns the User Name. This is case sensitive.)
%%Password% (returns the Password. This is case sensitive.)
If the response is not in the data file, a blank is returned.
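To illustrate the .label and .value forms, suppose (hypothetically) a single-response select question
named Q7 includes the option “Green” stored with a value of 2, and the respondent chooses it. Then
%Q7% and %Q7.label% both display Green, while %Q7.value% displays 2.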
Let’s assume that you wanted to display the numeric value that the respondent provided earlier in the
questionnaire in a question called “housesize.” For example, in the Header 1 text section of a select-type
question called “numminors”, you might type:
You previously said that you have %housesize% people in your household. How many of
those individuals are minors (17 or younger)?
When you view the question using the Preview function or using a local browser, “%housesize%” will
appear. But, when the questionnaire runs over the server, the numeric value that was provided in the
question named “housesize” is displayed instead of “%housesize%”.
You can also change the color or font size of the displayed variable’s text:
You previously said that you have <font size=5 color="#ff0000">%housesize%</font>
people in your household. How many of those individuals are minors (17 or younger)?
When using ACA/Web, you can display conjoint utilities and importances for that respondent using the
following notation:
%ANI% (attribute importance, where "N" is the attribute number)
%ANLN% (attribute level utility, where "N" is the respective attribute and level number)
This feature is only available for ACA utilities. (For an example of presenting a report of utilities and
importances to respondents, please see the HTML file acarpt.htm in the "resources" directory of your
SSI Web installation.) The utilities and importances are only available after the respondent has completed
at least the importance question for the attribute number you are referencing.
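For example, assuming your study's second attribute has at least three levels (the attribute and level
numbers here are hypothetical), a Text/HTML Filler question might display:
The importance of attribute 2 for you was: %A2I%
Your utility for level 3 of attribute 2 was: %A2L3%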
i.8.1 Testing/Running the Survey Locally
Once you have composed some questions and specified passwords, you can test/run your questionnaire
locally on your PC using the Local Web Server that is installed along with the SSI Web system. (Please
see the message below regarding Windows 95 systems.) Click Web Server | Local Web Server and
the Local Web Server dialog is displayed.
Prior to fielding your study using a remote server, it is important that you also test the survey carefully on
your remote server, to make sure that everything has been implemented properly for data collection.
Server Subdirectory Name
By default, SSI Web creates a subdirectory with the name matching your four-character study
name. You may change this name to upload to different folders. If the folder does not exist SSI
Web creates it.
Upload Survey
Click this button to "upload" your questionnaire and related study files/graphics to the directory
specified in the Server Subdirectory Name field, located inside LocalWeb\htdocs in your SSI Web
installation directory. SSI Web automatically copies (uploads) the appropriate files to the correct directory
(we follow this process to remind you that when you eventually post the survey over a Web
server, you'll be uploading files to that server). Every time you make a modification to your
questionnaire, passwords, or graphics, you should click Upload Survey to update your previous
version of the study (just as you would update the files that had changed if you were updating a
study on a Web server). If you have uploaded the survey before, you are asked if you would like
to overwrite the previous instance of your study. Choose OK to proceed, and the old study is
overwritten, resetting the study, and deleting all the previous data.
Notes: SSI Web assumes that all of the graphics used in your questionnaire are located in the
graphics directory. SSI Web doesn’t automatically “clean up” the htdocs subdirectory after you
are finished with a study. You may want to clean the htdocs subdirectory from time to time.
Caution: If you are using the Local Web Server to collect real respondent data, take care
not to upload a new instance of your study, as the process will delete your existing data!
To help avoid this, you might change the folder name used under Server Subdirectory
Name to run the actual data collection in a different folder from the default name. Also,
please see the Acknowledgment and Warning section below.
If you are using the demo version of the software, you are limited to a total of 5 completed
records (either qualified or disqualified). The demo software is not to be used for professional
research studies.
Run Survey
Click this to test/run your survey using the Local Web Server.
Hint: If interviewing using the Local Web Server, you may want to provide a link from the final
page of your survey back to the login page. The easiest way to do this is to create a Text/HTML
Filler question at the end of your survey (on its own page) that has a hyperlink such as <A
HREF="../XXXXlogn.htm">Return to the Beginning of the Survey</A> where "XXXX" is the name
of your study. Please see the caution at the end of this section regarding the Local Web Server
software.
View Admin
Click this to run your online data management module, to view, accumulate or tabulate results for
your study running on your Local Web Server.
Settings...
Accesses the Local Web Settings dialog, which lets you define whether the Apache server
should start automatically (and run in the background) when SSI Web is opened and shut down
when SSI Web is closed, or whether you control manually when the Local Web Server is started
and stopped.
Close
Closes the Local Web Server dialog.
Acknowledgment and Warning
The Local Web Server included in SSI Web includes software developed by the Apache Group for use in
the Apache HTTP server project (http://www.apache.org/).
The Apache Group gives the following disclaimer:
“The Win32 release of Apache should still be considered beta quality code. It does not meet the
normal standards of stability and security that Unix releases do. There are numerous known
bugs and inconsistencies. There is also a much greater chance of security holes being present in
the Win32 version of Apache.”
We at Sawtooth Software have spent many hours testing Apache Server code with SSI Web and have
seen good stability and performance. Using the Local Web Server within SSI Web is very convenient for
testing your survey prior to uploading to a Web server. But, because of this disclaimer, if you plan to
use the Local Web Server for actual data collection, you must test your study thoroughly to make
sure it is executed as you expect and that the data are being stored properly.
Notes for Windows 95 Installations
It may be necessary to install (by double-clicking the .EXE file) the W95ws2setup.exe file located in the
Resources subdirectory of SSI Web to make the local Web Server function. If you would like to turn off
the automatic start of Local Web go to Web Server | Local Web Server | Settings and change the
setting to Manually Start and Stop Apache.
Troubleshooting
If the survey is not working (the login page cannot be found), you may need to adjust the browser settings.
In Internet Explorer, select Tools | Internet Options | Connections | LAN Settings. If you are using a
proxy server, make sure the option to bypass the proxy server for local addresses is checked.
Some corporate LANs have network configurations that do not allow a local web server to work. Contact
your IT department for help.
i.8.2 Local Web Settings
By default, the Apache (Local Web) server is invoked and running in the background whenever SSI Web is
running (and shut down whenever SSI Web is shut down). You may wish to change this behavior. For
example, you may have your own installation of Apache and not want SSI Web’s installation of Apache to
interfere. Or, you may wish to close SSI Web but have Apache continue working (perhaps for the
purpose of data collection).
This dialog lets you specify whether Apache server should automatically start up when SSI Web is
running or whether you wish to manually start and stop Apache. If you choose to manually start and stop
Apache, the Start and Stop buttons are activated.
i.9.1 How Respondents Access Your Survey
Once you’ve uploaded your survey to your Internet server and verified that it is working properly, you
need to decide how to invite respondents to complete it. There are two main approaches: Active and
Delayed Web connections.
Active Web Connection
An active Web connection means that the respondent actively connects to the Web to take your survey.
1. Typed URL. Respondents connect to the Internet and type the URL into their browser (or cut
and paste it from an Email message) to access your survey (the STUDYNAMElogn.htm page).
2. Hyperlink. Respondents browsing a site on the Internet (or after receiving an email
containing a hyperlink) click a hyperlink (either text-based or clickable graphic) that connects
them to your survey. You can also include the password information in the hyperlink. This
provides “one-click” access to the survey with password verification.
3. Submit button. Respondents click a submit button on an HTML page and are automatically
connected to your survey. You can also “pass through” password information as hidden variables
using this technique. This provides “one-click” access to the survey with password verification,
and can be used to transfer a respondent from a different page or survey into your SSI Web
survey.
Delayed Web Connection
The delayed Web connection uses an Email message, or an attached link or HTML file within the Email
message that, once received, resides locally in respondents’ mailboxes. Once respondents click the link
or the submit button after opening the attached survey page, their computer automatically connects to
the Internet and your survey site.
1. Email URL links. Most (but not all) Email packages are “Web-enabled,” meaning that they
recognize URL addresses such as http://www.sawtoothsoftware.com as Internet sites. Whenever
the Email package encounters an Internet address in the course of a message, it changes the
text string to a hyperlink (underlines the text, changes its color, and makes it a clickable area).
When Email recipients click hyperlinks, their computer automatically connects to the Web and
their browser points them to your survey site referenced in the hyperlink. You can also include
the password information in the hyperlink for “one-click” access to the survey with password
verification.
Some Web browsers let you send Email messages and attach a URL. This performs just like the
text-based URL link, but looks like a file icon.
2. Email with attached STUDYNAMElogn.htm file. You can attach (embed) the
STUDYNAMElogn.htm file within an Email message. When respondents double-click the
attached file, their browser is launched and the passwords screen is displayed (this can happen
locally, without the respondent being connected to the Internet). Respondents type their password
(optional) and click the submit button. Their computer automatically connects to the Internet,
their browser accesses your survey site, and the password information is submitted for
verification. If the password is valid, the survey proceeds with the respondent actively connected
to the Internet until finishing the survey.
To use this option, you must have supplied a URL in the URL Pathname for Email Surveys field
under File | Settings.
3. Email with attached partial or entire survey. This is very similar to option #2, except that
more than just the passwords (login) part of the survey is included in the STUDYNAMElogn.htm
file. A section of the survey, or even the entire survey, could be included. The survey appears as a single
form with one submit button at the end. Under this option, respondents could complete the
survey without being actively connected to the Internet.
When respondents click the submit button, their computer automatically connects to the Internet, their
browser accesses your survey site, and the password information and all other survey data are submitted
for verification. If the password and all other responses are valid, the data are accepted and stored; if
not, an error message directs respondents to remedy the problem. If the survey is not yet complete, the
next page is displayed. If the survey is complete, the final sign-off screen appears.
To use this option, you must have supplied a URL in the URL Pathname for Email Surveys field under
File | Settings, and formatted your page breaks to include additional (or all) survey questions on the first
page of your questionnaire (under Compose | Write Questionnaire | Layout). (If you do this, you
should still include a “thank you” screen on a final, separate page.)
Note that you cannot execute skip logic, randomizations, CBC randomized tasks, or Adaptive Conjoint
Analysis questionnaires with an attached entire survey. Also, some respondents may be reluctant to
open up attached HTML files, given virus concerns. Some Email systems may have security controls that
prohibit users from receiving attached files.
i.9.2 “One-Click” Respondent Access to Survey
Your survey is just “one click” away for respondents if you email them a hyperlink. The hyperlink includes
the address for your survey, and optionally “passes along” information about the user name and/or
password. Email packages automatically create a hyperlink when you type a text string that they recognize
as a potential web address, such as "http://".
We suggest you use one of the many bulk email packages available on the market. These permit you to
automatically process hundreds or thousands of email messages at a time. When the respondent
receives the email, they do not have access to the email addresses of other respondents; they only have
the return path email address to your account that sent the message. An automatic bulk email package
can also import customized text strings from a database into the email message. Those text strings might
include the respondent’s user name and/or password generated by SSI Web’s Password
Program/Generator.
Simple URL Hyperlink
If you type the following into an email message, it becomes a clickable “hyperlink” (underlined clickable
text string, that sends respondents automatically to a web page):
http://www.sawtoothsoftware.com/gonzo/STUDYNAMElogn.htm
Note that the “study root directory” on the server is called “gonzo” and the link sends the respondent to
the opening login (password) page, called “STUDYNAMElogn.htm”.
URL Hyperlink with Passwords
If you want to automatically “pass through” the user name and/or password, you should link respondents
directly to the Perl script located in the cgi-bin directory. This lets qualified respondents start the survey
without having to type in passwords or see the login page. The following text included in an email
message starts a survey with a studyname "tep1" located in the "gonzo" study root directory, with a valid
username of "michael" and a password of "A6Z99":
http://www.sawtoothsoftware.com/gonzo/cgi-bin/ciwweb.pl?hid_bypass=tep1,michael,A6Z99
The order of comma-delimited parameters after “hid_bypass=” is:
studyname (case sensitive)
username
password
Here is an example:
http://www.sawtoothsoftware.com/gonzo/cgi-bin/ciwweb.pl?hid_bypass=tep1,michael,A6Z99
In this next example, only a username is used:
http://www.sawtoothsoftware.com/gonzo/cgi-bin/ciwweb.pl?hid_bypass=tep1,michael
(Note that to specify a later parameter, all parameters prior to it must be specified. For example, if
password only is used, we still must provide a “blank” username parameter, reflected by the two commas
in a row.)
You can also pass additional variables into the data set, as long as those are defined as questions on the
bypassed page 1 (the login screen):
http://www.sawtoothsoftware.com/gonzo/cgi-bin/ciwweb.pl?hid_bypass=tep1,,A6Z99,Q1=1,Q2=5
In this example tep1 is the study name, there is no username, and A6Z99 is the password. Q1 and Q2 are
additional variables to be passed into the data set, and questions with these same names need to be
placed in the questionnaire on page 1 (the login page), that is bypassed. You can pass in as many
additional variables as you want, as long as questions with those same names are placed on page 1.
When respondents start the survey by clicking the hyperlink, if the passwords are valid (and the quota is
not complete), the respondent is sent to the page following the password screen. If the respondent is
restarting an interview, the page where he/she left off is displayed.
Warning: If you are “passing in” a variable to an ACA/Web or CBC/Web study that you plan to use
as a case identifier for later merging the conjoint data with other survey variables, you should
only use numeric values. SSI Web will not permit the creation of conjoint data files for
summarizing or computing utilities that use non-numeric case identifiers.
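For example (a hypothetical sketch that assumes a question named RespID has been placed on the
bypassed page 1), the following hyperlink passes a purely numeric identifier of 1001 along with the
password:
http://www.sawtoothsoftware.com/gonzo/cgi-bin/ciwweb.pl?hid_bypass=tep1,,A6Z99,RespID=1001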
Important Note:
If the text string for the URL and passwords becomes very long, your email package may automatically
"wrap" the text in the middle of the hyperlink by inserting a hard return. This breaks the hyperlink so that it
no longer works. Some email packages by default wrap after 80 characters. Some email packages
let you control how many characters are displayed before automatic wrapping occurs. For example, if
using Microsoft’s Outlook Express, select Tools | Options | Send and under the Mail Sending Format
click the Settings... button. There is a field to specify "Automatically wrap text at ___ characters, when
sending".
Make sure to test that the hyperlink works properly by sending test messages to yourself and other
individuals (preferably outside your company’s own internal email system) prior to emailing invitations to
respondents.
i.9.3 Linking to Sawtooth Software’s Web Survey from another
Survey Program
Some users may find the need to link to Sawtooth Software’s web survey software from another HTML
form or Web-based survey. One of the most critical aspects is that you pass through an identifier that can
be saved as the user name or password with the Sawtooth Software data set. That way, respondents
from the two separate data sets can be matched during data processing.
Note: Linking multiple SSI Web surveys that result in more CiW questions than your license
conveys is a violation of your license agreement.
URL Hyperlink to Pass Respondents Directly into an SSI Web Survey
If you want to automatically pass respondents from another Web survey page to an SSI Web survey and
automatically “pass through” the user name and/or password, you should link respondents directly to the
SSI Web Perl script located in the cgi-bin directory. The following hyperlink, included in the referring
page, starts a survey with a studyname "tep1" located in the "gonzo" study root directory, with a valid
username of "michael" and a password of "A6Z99":
http://www.sawtoothsoftware.com/gonzo/cgi-bin/ciwweb.pl?hid_bypass=tep1,michael,A6Z99
The order of comma-delimited parameters after “hid_bypass=” is:
studyname (case sensitive)
username
password
Here is an example:
http://www.sawtoothsoftware.com/gonzo/cgi-bin/ciwweb.pl?hid_bypass=tep1,michael,A6Z99
In this next example, only a username is used:
http://www.sawtoothsoftware.com/gonzo/cgi-bin/ciwweb.pl?hid_bypass=tep1,michael
(Note that to specify a later parameter, all parameters prior to it must be specified. For example, if
password only is used, we still must provide a “blank” username parameter, reflected by the two commas
in a row.)
You can also pass additional variables into the data set, as long as those are defined as questions on the
bypassed page 1 (the login screen):
http://www.sawtoothsoftware.com/gonzo/cgi-bin/ciwweb.pl?hid_bypass=tep1,,A6Z99,0,Q1=1,Q2=5
In this example tep1 is the study name, there is no username, and A6Z99 is the password. The "0" in the
above example turns JavaScript verification off (a "1" turns it on). Q1 and Q2 are additional variables to
be passed into the data set, and questions with these same names need to be placed in the
questionnaire on page 1 (the login page), that is bypassed. You can pass in as many additional variables
as you want, as long as questions with those same names are placed on page 1.
When respondents start the survey by clicking the hyperlink, if the passwords are valid (and the quota is
not complete), the respondent is sent to the page following the password screen. If the respondent is
restarting an interview, the page where he/she left off is displayed.
Form Submit Button
You can create a “submit button” that sends respondents to the address of a Sawtooth Software survey
and “passes through” a variable to be saved as the user name and/or password. Note that your survey
program must be able to customize the HTML text on the fly with respect to the values of the variables
passed through.
Here's some sample HTML:
<HTML>
<BODY>
Click the button below to continue the survey.
<!-- Post the hidden identifiers to the SSI Web Perl script (ciwweb.pl) -->
<FORM METHOD="POST" ACTION="http://www.website.com/survey/cgi-bin/ciwweb.pl">
<!-- resp_name carries the user name, password the password, and hid_studyname the study name -->
<INPUT TYPE="HIDDEN" NAME="resp_name" VALUE="michael">
<INPUT TYPE="HIDDEN" NAME="password" VALUE="A6Z99">
<INPUT TYPE="HIDDEN" NAME="hid_studyname" VALUE="tep1">
<INPUT TYPE="SUBMIT" VALUE="Continue">
</FORM>
</BODY>
</HTML>
Linking from SSI Web to Another System
You can link from SSI Web to another system by creating an HTML hyperlink within an HTML/Filler
question (a terminating question) within an SSI Web survey. This hyperlink should include the URL
address of the new survey to be linked to. You can dynamically insert respondent identifiers (such as
user name and password) or even responses to questions within the SSI Web survey within the HTML of
that hyperlink.
For more information about dynamically inserting respondent name, password, or answers to previous
questions within the hyperlink, please see the section on Displaying Responses to Previous Questions.
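For example (a minimal sketch; the destination URL and its "id" parameter are assumptions about the
receiving system), the HTML within the terminating Text/HTML Filler question might read:
Click <A HREF="http://www.othersurvey.com/start.pl?id=%UserName%">HERE</A> to continue with the next survey.
When the page is served, %UserName% is replaced with the respondent's user name, so the receiving
system can match the two data sets during data processing.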
i.9.4 What Happens at the End of the Survey?
You should plan what the respondent does at the end of your survey. Your survey can “dead end”
(default) or you can provide a hyperlink for respondents to click to send them to a specific web site.
Typically, you should place a Text/HTML Filler question (marked as a “terminating question”) as the last
page of your survey. If it is not marked as a terminating question, the Sawtooth Software logo is
automatically displayed.
Sending Respondents to a Specific Web Page at the End of the Survey
If you don’t want your survey to “dead end” on the last page, you can provide a hyperlink (clickable area)
that sends respondents to a specific web page or site.
You should specify a single Text/HTML Filler question (marked as a “terminating question”) as the last
question on the final page of your survey. (Do not put a question that requires a respondent input on the
last page of your survey.) You can embed a hyperlink (following HTML conventions) or button in the
header, body or footer section of the text question.
As an example, the following text creates a hyperlink to send a respondent back to Sawtooth Software’s
home page:
Click <A HREF="http://www.sawtoothsoftware.com">HERE</A> to finish the survey and go to
Sawtooth Software's home page.
(You can add hyperlinks when using SSI Web’s text editor by clicking the “anchor” icon, and customizing
the address for the HTML that is automatically inserted.)
You can also embed a clickable graphic that sends respondents to a specific web site. Assume that you
created an attractive graphic named “finish.gif” that says “Finish Survey” with your logo in the
background. If you wanted respondents to click on the graphic to go to your web site, you would first add
a hyperlink using the "anchor" icon. Then, within the anchor HTML, you would add the graphic by
clicking the Add Image icon, and selecting the graphic to add. The resulting HTML might look like:
<A HREF="http://www.yourwebsite.com"><IMG SRC="%%graphics_path%finish.gif" BORDER="0"></A>
If you are using this software to conduct repeated central-site, or mall-intercept surveys, you may want
the survey to return to the Password (login) Screen. In that case, specify “STUDYNAMElogn.htm” as the
destination in your hyperlink. You are not limited to one hyperlink; multiple "exit" options are possible.
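For example (the relative path may need adjusting, such as "../STUDYNAMElogn.htm", depending on
where the final page is served from):
Click <A HREF="STUDYNAMElogn.htm">HERE</A> to begin a new interview.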
Do not embed a hyperlink in the middle of your survey as this would lead to incomplete interviews.
i.10.1 Uploading Your Survey to Your Internet Server
Creating Directories and Uploading Files with FTP Software
When you are ready to field your Internet questionnaire, you upload the files listed below to your server.
The process by which files are uploaded to the web server is called "FTP" (File Transfer Protocol). Use
a product like WS_FTP or Internet Explorer (by typing ftp://yourwebsite) to create directories on your server
and to upload files into these directories. Your ISP (Internet Service Provider) or network administrator
should be able to help you with these steps if you have trouble. (Warning: FrontPage software from
Microsoft typically does not give you the control needed to perform these steps correctly.)
The instructions below were written with generic Unix servers in mind. There may be special considerations
for your particular server configuration, especially Microsoft servers.
First, decide where you want your survey to be located on the server. We’ll refer to that as the
yourstudyroot directory. Create that directory and four subdirectories below the yourstudyroot
directory:
\admin
\cgi-bin
\data
\graphics
If you plan to run multiple studies concurrently, you must create a separate \admin and \data directory for
each study.
Many of the files listed below are located in the \server subdirectory of the directory in which you installed
SSI Web on your hard drive. STUDYNAME is the name (four characters or less) that you gave your
study. The others are found in your working directory, where you saved your study (make sure NOT to
take the files from the local web server area, /localweb/htdocs).
The structure of the directories and files you need to upload to the server is as follows:
yourstudyroot
  STUDYNAMEadmn.htm   (from study directory)
  STUDYNAMElogn.htm   (from study directory)
(The STUDYNAMElogn.htm file is the page where respondents start the survey and type their passwords.
You can rename this file if you like. The STUDYNAMEadmn.htm is where survey administrators access
the Online Data Management Module. You can also rename this file if you like.)
We strongly suggest you turn off directory browsing so that people cannot “browse” your directories. To
turn off directory browsing in Unix, upload the file called .htaccess from the \resources directory on your
hard drive to each of your directories. For IIS, locate the setting in IIS for turning off directory browsing.
\admin
  STUDYNAMEpath.cgi   (from study directory)
  STUDYNAMEpidx.cgi   (from study directory)
  STUDYNAMEp001.cgi   (from study directory; also ...p002.cgi, ...p003.cgi, etc. if they exist)
  STUDYNAMEqst.cgi    (from study directory)
  STUDYNAMEpacc.cgi   (from study directory)
  STUDYNAMEdsgn.cgi   (from study directory, only if CBC study)
  STUDYNAMEctxt.cgi   (from study directory, only if CBC study with conditional relationships)
\cgi-bin
  *acalib.pl    (from server directory)
  admin.pl      (from server directory)
  authlib.pl    (from server directory)
  **cbclib.pl   (from server directory)
  lite.pm       (from server directory)
  ciwlib.pl     (from server directory)
  ciwweb.pl     (from server directory)
\graphics
  ssilogo.gif   (from /graphics subdirectory of your study)
  [PLUS ANY OTHER GRAPHICS YOU USE IN YOUR SURVEY]

*acalib.pl only needed if running an ACA (Adaptive Conjoint Analysis) study
**cbclib.pl only needed if running a CBC (Choice-Based Conjoint) study

\data
  (This directory remains empty until respondents begin taking the survey.)
Note: Make sure to upload all files in binary mode, and to pay special attention that the file names
are named correctly, including case.
The software assumes you are running Perl version 5.003 or later on your server, and that the path on
the server to the Perl interpreter is /usr/bin/perl.
If the Perl interpreter is at a different location, you must change the first line of the ciwweb.pl and admin.pl
files to reflect this change. (These are the only modifications to the Perl files permitted under your license
agreement). If you modify these files, make sure your editor does not add carriage returns to them. (We
have included a DOS utility in your \server directory to strip carriage returns out of Perl files in case you
need it. The syntax is: stripcr oldfilename newfilename.)
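For example, if the Perl interpreter on your server were located at /usr/local/bin/perl (a hypothetical
location), you would change the first line of ciwweb.pl and admin.pl from:
#!/usr/bin/perl
to:
#!/usr/local/bin/perl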
We are also assuming that you have “rights” to write files on your server into your \admin and \data
directories, and to delete files from the \admin and \data directories. If your ISP doesn’t give you these
rights, we can suggest one that does.
When respondents create survey records, the data are saved in the \data directory; one file per
respondent. You should delete incomplete records and accumulate respondent data to the \admin
directory regularly using the On-Line Data Management Module. The accumulated file is named
STUDYNAMEdat.cgi, and is located in your \admin directory. Eventually you must download this file to
your local study directory where you can export the data to an ASCII file and/or perform analysis on the
conjoint data.
You should point survey authors (or clients, if you wish) to the STUDYNAMEadmn.htm file, which is the
main menu for the On-Line Data Management Module. (You can rename that file to make it more
intuitive for your clients.) User name and passwords for the on-line administrative module are set in the
File | Settings menu.
Setting Permissions for Directories and Files
After you have created the study directories and uploaded the necessary files to your server, you are
ready to set permissions on the files and directories using a product like WS_FTP or Internet Explorer by
typing ftp://yourwebsite. Setting permissions secures your web survey and permits the programs to
function properly.
To set permissions on directories or files with WS_FTP
1. Open WS_FTP and connect to your site.
2. Highlight the directory or file for which you want to set permissions by left-clicking with the mouse.
3. Click the right mouse button to activate the pop up menu.
4. Select chmod (UNIX). The Remote file permissions dialog appears.
5. Select the combination of Read/Write/Execute permissions for each directory and file listed in the table
below. (If using telnet, simply set the permissions code as in the far right column.)
To set permissions on directories or files with Internet Explorer
1. Once you have connected to and are browsing the folders on your web site, highlight the directory or
file for which you want to set permissions by left-clicking with the mouse.
2. Click the right mouse button to activate the pop up menu.
3. Select Properties. Under Permissions, select the combination of Read/Write/Execute permissions for
each directory and file listed in the table below.
Note: On some servers, the respondent data files within the \data directory are accessible (read access)
by anyone connected to the Web. Without the questionnaire (that is in a secure area), individual data
files by themselves are likely of limited value (with the exception of open-end responses). To reduce the
risk of making a large amount of data available to outsiders, we suggest you accumulate the surveys
often using the administrative module. (We also suggest you make frequent backups of the
STUDYNAMEdat.cgi files.) Accumulating the data moves qualified completed records from the \data
directory to a master file (STUDYNAMEdat.cgi) in the \admin directory, which cannot be accessed by
outsiders.
Sometimes you may find that your ISP doesn’t let you set up directories the way we’ve outlined. Or, you
may simply have other preferences regarding your study’s architecture on your server. You can change
the names and locations of the directories using the Advanced Settings dialog button under File |
Settings.
Important Note: After you have posted your survey to the website, you should test your survey
carefully to ensure that passwords are correctly being verified, the questions are as you expect
them, any skip patterns and randomizations are functioning, and that the data are being properly
stored and reported when you download the data and export them to a data file. Only by careful
testing can you make sure that your study will be executed properly.
i.10.2 Deleting Practice Data Prior to Taking the Study “Live”
You should collect some practice data and check that everything is turning out as you expect prior to
taking a study “live.” After collecting the practice data, you will probably want to delete these records prior
to starting actual data collection.
There are two methods for deleting the practice respondent data and resetting the survey:
1) Using the Online Data Management Module
Access the Online Data Management Module by connecting to the Web and browsing to your
STUDYNAMEadmn.htm page. Use the Administrative Read-Modify username and password specified
under File | Settings. Once at the main menu, choose Reset Web Survey and click Continue.
2) Using FTP Software
To delete all the respondent data on the web server associated with a study (to reset the study):
1. Delete all the files in the data directory.
2. Delete these files from the admin directory: STUDYNAMEdat.cgi, STUDYNAMEcnt.cgi.
3. Upload your password files (STUDYNAMEp001.cgi, STUDYNAMEp002.cgi, etc.), overwriting
the existing files, to reset them so all the quotas on passwords are set to their original open state.
4. Restore all permissions on the directories as specified in the SSI Web uploading instructions.
Note: If you find the file STUDYNAMElog.cgi (that contains possible error messages) in your \admin
directory, you might consider deleting this file to reset the error list. (Please send the log.cgi file to
Sawtooth Software so we can help you interpret what types of errors have occurred.)
i.11.1 Logging onto the Online Data Management Module
You or your client can access the Online Data Management Module by directing your browser to the
STUDYNAMEadmn.htm file located in your study’s root directory. You can rename the
STUDYNAMEadmn.htm file if you’d like before uploading it to the server.
When you access the STUDYNAMEadmn.htm page with your browser, you are asked to supply a Name
and Password. When you initially created your study, random six-character passwords were created for
administrative access. You may have kept those randomly-generated passwords or changed those
passwords for your needs under File | Settings. If you cannot remember your password, you can go to
File | Settings to review the passwords.
You must type the correct name and password to access the administration module. The User Name and
Password are case-sensitive. (If the User Name is AdminJudy, then adminjudy and ADMINJUDY are
not valid.)
Type the Name and Password, and click the Login button.
Main Menu
After the system verifies a correct Name and Password, the main menu is displayed, with the following
options:
*Study Summary
*Password Reports
*Accumulate Qualified/Complete Surveys
*Accumulate Disqualified and Incomplete Surveys (read/modify access only)
Delete Disqualified and Incomplete Surveys (read/modify access only)
View Survey Data
Download Accumulated Surveys (read/modify access only)
*Marginals
Reset Web Survey (read/modify access only)
To choose an option, select the item and then click the Continue button.
*If your study has many respondents, these options place significant load on the server, which could
interfere with respondents currently taking surveys. These operations are best done when you expect
relatively “light” traffic.
i.11.2 Study Summary
When you choose Study Summary from the main menu, a report is displayed on the screen. The report
consists of three tables:
Study Summary: SODA

Total Surveys
  Qualified/Complete (Status = 5):                    128
  Disqualified (Status = 4):                           57
  Incomplete (Status = 2):                             73

Accumulated Data Area (Admin Directory)
  Qualified/Complete (Status = 5):                     91
  Disqualified (Status = 4):                           21
  Incomplete (Status = 2):                             53

Unaccumulated Data Area (\Data Directory)
  Qualified/Complete (Status = 5):                     37
  Disqualified (Status = 4):                           36
  Incomplete/In Progress or Abandoned (Status = 2):    20
This report gives you an overall summary of activity for your survey, the details of which are described
below:
Study Summary
Qualified/Complete
The total number of surveys that have been completed and terminated with a qualified/complete
status (status=5). It should be equal to the total number of qualified completes in both the
unaccumulated data area (the \data directory) and in the accumulated data area (the \admin
directory), unless you have deleted some complete surveys with your other file management
software. A survey is complete when respondents reach a “terminating” question marked as
qualified/complete.
Disqualified
The total number of surveys that have been terminated as disqualified (status=4). It should be
equal to the total number of disqualified records in both the unaccumulated data area (the \data
directory) and in the accumulated data area (the \admin directory), unless you have deleted
some disqualified surveys with your other file management software. A survey is disqualified
when respondents reach a “terminating” question marked as disqualified.
Accumulated Data Area (Admin Directory)
When you accumulate surveys (described later), respondent records are moved from the \data directory
to the \admin directory. The categories for the summary report are the same as described above.
Unaccumulated Data Area (\Data Directory)
When respondents start the survey, an individual record is stored in the \data directory. Incomplete
records in the \data directory are either abandoned or still in progress. The categories for the summary
report are the same as described earlier. Any records that are incomplete or in progress (status=2) may
be restarted, as long as the passwords for each respondent are unique with a quota of 1 per password.
If you click the Continue button, you are returned to the main menu.
Note: Be careful to properly reset the survey after collecting practice data, so that the administrative
module reports correctly and that practice data are not combined with real data.
i.11.3 Passwords Table
You can view or modify your passwords table while a study is running. You can monitor how many
surveys have been completed in each password row (quota cell), or modify the number of completed
records that are permitted for each quota cell. If you have read/modify privileges, you can “close” a quota
cell by setting the maximum number of completes equal to the number that have already been collected.
You can also “open” a closed quota cell by increasing the maximum number of completes.
Password Summary

  Closed Password Rows:       0
  Open Password Rows:         1
  Open Surveys:            9999
  Restartable Surveys:        0
Closed Password Rows
Each Password File can have from 1 to 8,192 rows, each representing a valid password/user
name combination. Each row has a quota defining the maximum number of completed surveys
permitted (either qualified or disqualified). If that quota has been met, the password row is
considered closed and full. (You can "open" a closed password cell while a study is running by
increasing the quota. This is described in the Passwords Table section below.)
Open Password Rows
The number of password rows that are still valid and open (the quota for completed interviews
has not yet been filled). The number of open plus closed password rows equals the total number
of rows in the table. (You can “close” a password cell while a study is running by decreasing its
quota. This is described in the Passwords Table section below.)
Open Surveys
The number of additional surveys that could be completed, taking into account the number
already completed and the maximum number of completed interviews permitted across all rows
of the password table. It is equal to the total potential interviews less completed surveys.
Restartable Surveys
The number of surveys that are in progress or abandoned that can be restarted. Surveys can
only be restarted if the respondent has a unique password and a quota of 1 for that password.
If you choose View Password Tables, the next menu shows:
Enter the starting row number to view: ______
Enter the ending row number to view: ______
Select the Password File [studynamep???.cgi]
Each passwords file can have up to 8,192 rows. Specify the beginning and/or ending rows that you want
to view, and then click the Continue button. That portion of the table is displayed. (Note: the passwords
file is displayed in sorted "binary tree" order, not necessarily the order in which you first specified the
passwords within the table.)
If you have read/modify access, you can specify which row to modify and click Continue, or click Return
to the Main Menu.
Modifying the Number of Maximum Respondents per Password Row
(This option is only available with read-modify access.)
1. After listing password rows from the Password Tables menu, specify the row to modify, and
click Continue.
2. The specified record is displayed, followed by:
Enter the new maximum number of respondents: ______
3. Specify the new maximum number of respondents, and click Continue. The table is updated,
and you are returned to the previous screen.
i.11.4 Downloading Passwords Report
You can download a list of User Names and Passwords for respondents with different states of
completion for the survey. This can be helpful if you want to send reminder notices to respondents who
have not yet completed the survey, or if you want to identify those who have already completed the
survey, etc.
While in the Online Data Management Module, from the main menu, click Password Reports. At the
next dialog, click Download Password Status Reports.
The first reporting options listed are only used if each respondent has a unique username-password
combination (the maximum number of respondents permitted per username-password is 1).
Password reports for unique username-password combinations only (maximum
respondents set to one).
(Select one or more of the following categories to include in the report):
[ ] Qualified / Complete Usernames and Passwords (status 5)
[ ] Disqualified Usernames and Passwords (status 4)
[ ] Incomplete (In Progress / Abandoned) Usernames and Passwords (status 2 – unaccumulated)
[ ] Usernames and Passwords for respondents that have not started the survey yet
Following those reporting options are other selection criteria that apply either to unique username-password
combinations per respondent, or to username-passwords that allow multiple respondents per combination.
Additional password reports
(Select one or more of the following categories to include in the report):
[ ] Open Usernames and Passwords
[ ] Closed Usernames and Passwords
“Open” indicates any usernames and passwords that are still valid (haven’t reached maximum
respondents permitted for that combination). “Closed” refers to any usernames and passwords that are
no longer valid (maximum respondents permitted for that combination has been reached).
After you check all categories to include in your report, click Continue. Finally, you click a download link
that saves an ASCII text file to your hard drive named STUDYNAMEPRT.txt that contains the report.
i.11.5 Accumulate Qualified/Complete Surveys
When respondents generate records (either complete or incomplete), each is saved as a separate file in
the \data directory on your server. Each data file starts with the four-character Study Name followed by
an alpha-numeric code. For example, the first three respondent records for a study with a study name of
soda will be named:
sodaa.cgi
sodab.cgi
sodac.cgi
Before running marginals or downloading data, you will usually want to accumulate qualified complete
records. The Accumulate Qualified/Complete Surveys process transfers qualified completed records from
the \data directory to a single master data file in the \admin directory. More specifically, it appends all
records with a status of 5 from the \data directory to the STUDYNAMEdat.cgi file in your \admin directory,
and then deletes those same completed records from the \data directory.
Both read and read/modify access passwords convey permission to accumulate qualified/completed
records. It’s a good idea to accumulate surveys often, as the \admin directory provides a higher level of
security for your data.
Note: It is possible that respondents have answered questions in the survey more than once (due to
backing up). When data are accumulated, only the last recorded response for each question is retained.
Earlier responses are deleted.
Notes: This procedure causes traffic on the server and could potentially slow performance for your
respondents. It is best done at "low" traffic times. It is also a good practice to make regular backups of the
downloaded STUDYNAMEdat.dat file. (If using FTP software, the file to be backed up is called
STUDYNAMEdat.cgi and is located in the \admin directory.)
i.11.6 Accumulate Disqualified and Incomplete Surveys
(Read/Modify Access Only)
In the previous section, we explained how records are saved and how qualified completed records are
accumulated to the master data file in the \admin directory. Any surveys that are incomplete or
disqualified remain in the \data directory until you choose to either accumulate or delete them.
Important Note: When you accumulate or delete incomplete surveys, they cannot be restarted.
When you choose Accumulate Disqualified and Incomplete Surveys from the main menu, the following
options are displayed:
Accumulate disqualified and incomplete surveys older than ______ days
You can accumulate disqualified and incomplete surveys by the length of time elapsed since they were
started, expressed in days. You can also specify decimal values, with 1.5 days meaning 36
hours, etc. If you want to accumulate all disqualified and incomplete records, regardless of age, specify 0
days. You cannot delete disqualified or incomplete records once they have been accumulated to the
\admin directory. You are not permitted to delete qualified complete surveys using this program.
i.11.7 Delete Disqualified and Incomplete Surveys (Read/Modify
Access Only)
In the previous section, we explained how records are saved and how completed records are
accumulated to the master data file in the \admin directory. Any surveys that are incomplete or
disqualified remain in the \data directory until you choose to either accumulate or delete them.
When you choose Delete Disqualified and Incomplete Surveys from the main menu, the following options
are displayed:
Delete disqualified and incomplete surveys older than ______ days
You can delete disqualified and incomplete surveys by the length of time elapsed since they were started,
expressed in days. You can also specify decimal values, with 1.5 days meaning 36 hours,
etc. If you want to delete all disqualified and incomplete records, regardless of age, specify 0 days. You
cannot delete disqualified or incomplete records once they have been accumulated to the \admin
directory. You are not permitted to delete qualified complete surveys using this program.
i.11.8 View Survey Data
This option lets you view responses for individual records. You can view records that have already been
accumulated (in the \admin directory) or unaccumulated records (in the \data directory).
Order of Record Display:
\admin: sequential in order of accumulation
\data: in order appearing in directory. Not necessarily in order of creation.
i.11.9 Download Accumulated Surveys (Read-Modify Access Only)
The download option retrieves data only from the \admin (accumulated data) directory. You should make
sure to accumulate any wanted records from the \data directory to the \admin directory before
downloading.
When you choose Download Accumulated Surveys, your browser asks you to supply a destination drive
and directory. In most cases, you will probably specify the study directory on your hard drive. The
accumulated data file, named STUDYNAMEdat.dat, is copied to your hard drive (note that this is a
renamed copy of the STUDYNAMEdat.cgi file, which is the master data file that stays in your \admin
directory).
If you prefer, you can download the data file directly using your FTP software: just make sure to rename
the data file from .cgi to .dat and place it in your study directory on your hard drive.
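For example, with a command-line FTP client (assuming a study named SODA and the directory
structure described in the uploading instructions), the download and rename can be done in one step:
ftp> cd yourstudyroot/admin
ftp> binary
ftp> get SODAdat.cgi SODAdat.dat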
i.11.10 Tabulating Data On-Line
The marginals program lets you run simple frequency tabulations of the data in real-time on the Internet
server. You can analyze complete and/or incomplete records, from either the \admin directory
(accumulated data file) or the \data directory (unaccumulated records).
Select the Question
This combo box lets you select which question to tabulate. You can only tabulate
results for one question at a time.
Respondent Filter
Here you specify whether you want to tabulate results for All Respondents,
Qualified/Completes Only, or Disqualified and Incompletes Only.
Data Source
Choose the data area to use. You can tabulate results from the accumulated
data file (in the \admin directory), or those not yet accumulated (in the \data
directory).
Click Continue to run the marginal report, or click Return to Main Menu. You can print the results of any
marginals run by clicking the Print icon in your browser, or choosing File | Print.
Notes:
Decimal Answers: The Marginals program within the Online Data Management Module truncates
any decimal answers to numeric questions, treating them as integers, prior to computing
frequencies, min, max, and means.
Grid Questions: Multiple-response check-box questions within grids are available for tabulation.
The variable for a row-based question is listed using the convention grid_r1_c* where the
asterisk means that checkboxes across all columns for the question in row1 will be tabulated.
The convention for column-based questions is grid_r*_c1 where the asterisk means that
checkboxes across all rows for the question in column1 will be tabulated.
i.11.11 Reset Web Survey (Read/Modify Access Only)
This option resets your survey data. It deletes all respondent data and resets password files to reflect no
completed surveys. Resetting your survey doesn’t delete the questionnaire, just the respondent data.
This is a convenient way to delete practice data prior to taking a study live.
Warning: Resetting your study deletes all existing respondent data!
i.12.1 Export All Data
(Export File Type)
(Before you can export data, you must have accumulated the data and downloaded the
STUDYNAMEdat.dat file to the study directory on your hard drive using the Online Data Management
Module.)
Click File | Export Data | All Data and the Export All Data dialog is displayed. This dialog offers three
tabs:
Export File Type (described here)
Respondent Filter
Data to Export
You can export the data from your Internet questionnaire to one of the following files:
.CSV File (Microsoft Excel)
This file can be directly opened with Microsoft Excel(TM). It is a comma-delimited text file, where
the first line includes variable labels.
.SAV File (SPSS)
This file can be directly opened with SPSS. It is a binary file.
.TAB and .LAY Files (Text/ASCII Format)
The .TAB file is a fixed-column text file. This file can be read by almost any statistical processing or
cross-tab package. The layout for the .TAB file is included in the .LAY file, which is also a text
file. The .LAY file specifies the variable names and in which column in the .TAB file each variable
begins and ends.
A default name and path are provided in the Output Filename: field. You can directly edit the name. By
clicking the browse button, you can specify the folder in which you wish to save the file.
i.12.2 Export All Data
(Respondent Filter Tab)
Include
All Respondents
Choose this option to export all respondents: both complete and incomplete. Data for questions that were
not completed are left blank.
Qualified/Completes Only
Choose this option to export only qualified complete data records. A survey is considered qualified
complete when respondents reach a “terminating” question marked with qualified status.
Disqualified and Incompletes Only
Choose this option to export only disqualified and incomplete data records. A survey is considered
disqualified when respondents reach a “terminating” question marked with disqualified status. A survey is
considered incomplete if respondents did not reach a “terminating” question.
i.12.3 Export All Data
(Data to Export)
This dialog lets you specify the data you wish to export to the output file. All items in the Export List on
the right are exported when you click the Export button. You can move any items from the left window
(list of available data) to the Export list using the arrow buttons.
The > button moves just the highlighted items. The >> button moves all items. You can highlight a
range of items by holding down the Shift key and clicking with the mouse. You can highlight multiple
items selectively by holding down the Ctrl key and clicking each one with the mouse.
Items marked with an asterisk (*) are considered “system variables.” These include respondent number,
passwords, interview timing information, and respondent status.
User Name is one of the fields from the passwords module.
Password is also one of the fields from the passwords module.
Sequential Respondent #s are integers 1 through n assigned to each respondent record that is
exported.
Internal Interview Numbers are those that the system assigns when respondents complete the
survey over the Web. There can be gaps in the sequence if some respondents started a survey
but didn’t finish. The internal respondent numbers are alphanumeric, and are converted to
numeric.
Start Date/Time is the time (as recorded on the server, often GMT-Greenwich Mean Time) that
respondents entered the survey.
End Date/Time is the time (as recorded on the server, often GMT-Greenwich Mean Time) that
respondents last submitted data.
Respondent Status (2 = incomplete; 5 = complete)
Notes Regarding Exporting Multiple-Response Data:
With multiple-response select questions, more than one variable is stored for the question. For example:
Q1. Which of the following colors do you like?
(Select all that apply)
Red
Blue
Yellow
Green
Violet
None of the above
The data for this question are saved as six separate variables: Q1_1, Q1_2, Q1_3, Q1_4, Q1_5 and
Q1_6. If respondents chose the item, the value is 1; if not, it is set to 0. The variable positions and
names for multiple-select (and all other questions) are indicated in the STUDYNAME.lay file.
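For example, a respondent who checked only Red and Yellow in the question above would be stored as
Q1_1=1, Q1_2=0, Q1_3=1, Q1_4=0, Q1_5=0 and Q1_6=0.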
Grid questions can also save multiple responses. Grid data use the convention gridname_r#_c#, where
the r# refers to the row number and c# refers to the column number. For example, q11_r3_c5 stores
information for the grid named q11 and the response contained in row 3 column 5.
Some Notes Regarding Missing and Blank Data
There are different ways to achieve blanks in data positions, and it is important to understand the
distinction when analyzing the data.
1) Respondent never saw the question. The data are blank for questions on the pages the respondent
never saw. Or, the respondent failed to click the submit button on a page with questions. Skip patterns
can create blank data.
2) Respondent saw the question, but did not respond. For each CiW question-type, you can specify
whether the question can be skipped by respondents. If you need to distinguish between whether none
of the categories of a multiple-response question truly apply for a respondent versus whether the
respondent simply skipped the question, you should probably include a “None Apply” response category
and then require a response for that question. (Note: currently SSI Web permits respondents to check
both “None Apply” options together with other options within the list. Such nonsensical responses must
be cleaned later using your own data processing software.)
i.12.4 Exporting Open-End Data
The Export All Data option creates a data file (STUDYNAME.tab) intended for reading by a statistical or
cross-tabulation package. If you want to export the open-ends in their entirety in a format that is intended
for use with word processors or coding programs, you should choose File | Export Data | Open-ends
Only. The Export Open-end Data screen is displayed.
Respondent Numbering
You can choose from five options for assigning respondent numbering. This lets you specify how the
respondent numbers (case IDs) are to be generated. User Name is one of the passwords from the
passwords module. You should only choose this if these values were purely numeric (no letters).
Password is also one of the passwords from the passwords module. You should also only use this if the
passwords were purely numeric (no letters). Internal Interview Numbers are those that the system
assigns when respondents complete the survey over the Web. There can be gaps in the sequence if
some respondents started a survey but didn’t finish. Sequentially Assigned Numbers (default) are
integers 1 through n. Other lets you choose another numeric variable to use for respondent numbering.
This last option is useful if you are linking surveys from another source and need to merge the data sets
by a “passed-in” variable.
By default the open-ends are saved in the STUDYNAME.txt file, but you can change this name if you
want. The data are grouped by question. Within each question, responses are displayed by respondent
number.
Here is a sample open-end file for two respondents and three open-end responses:
Why_like
196   I have tried other soft drinks, but I like Sawtooth Spritz best.
234   Don’t know, I just like its zing.
First_Name
196   John
234   Mary
Last_Name
196   Jones
234   Burns
Any carriage returns respondents type are converted to spaces in the STUDYNAME.txt file.
i.12.5 Moving Data from SSI Web to SMRT for Analysis
If conducting conjoint analysis studies (ACA and CBC) with SSI Web, you’ll need to move the data from
SSI Web into the SMRT software package for analysis (market simulations, counting analysis, logit, etc.).
Sometimes data are moved directly into SMRT, and other times data are moved from SSI Web into a
preliminary analysis program such as hierarchical Bayes, and then afterward to SMRT.
CBC (Choice-Based Conjoint) Data
When you download SSI Web data from the web server, your data reside in your SSI Web project folder
in a text-only format STUDYNAMEdat.dat file. The steps for moving data into SMRT are:
1. Export the CBC data (just the data directly involving the choice tasks) to a studyname.CHO
(text-only format) file. From the SSI Web menu, click File | Export Data | Prepare CBC Data
files (*.cho and *.att)….
2. Start the SMRT software (a different program from SSI Web, which ships on the same
installation CD as SSI Web), by clicking Start | Program Files | Sawtooth Software | Sawtooth
Software SMRT.
3. Within SMRT, create a new study for analyzing the results (select File | New and choose a
folder and a studyname). You can choose any folder or studyname that you want, as this new
study functions independently of your original SSI Web study.
4. Import the .CHO file into SMRT, by clicking (from the SMRT software menu) File | Import…,
browsing to your .CHO file, and clicking Import. The raw choice data (respondent answers and
experimental design) are now within the SMRT study. You can proceed with counting analysis or
logit within SMRT.
Note: If you use CBC/HB or Latent Class software to estimate part worth utilities, use the .CHO and .ATT
files with those systems (see their respective manuals for details) to produce a data file containing
respondent part worths (a .HBU file if HB analysis, or .LCU and .P0x files if Latent Class analysis). Then,
import the resulting utility file into your SMRT study by clicking (from the SMRT software menu) Analysis
| Run Manager | Import… and following the prompts to browse to and select your .HBU or Latent Class
probability file (.P0x). Once you import your utilities, you can select Analysis | Market Simulator and
begin simulations.
ACA (Adaptive Conjoint Analysis) Data
When you download SSI Web data from the web server, your data reside in your SSI Web project folder
in a text-only format STUDYNAMEdat.dat file. The steps for moving data into SMRT are:
1. If using the default OLS (Ordinary Least Squares) part worth utility estimation method, from
the SSI Web menu, click Analysis | Calculate ACA Utilities…. This process creates a text-only
format file called STUDYNAME.utl.
2. Start the SMRT software (a different program from SSI Web, which ships on the same
installation CD as SSI Web), by clicking Start | Program Files | Sawtooth Software | Sawtooth
Software SMRT.
3. Within SMRT, create a new study for analyzing the results (select File | New and choose a
folder and a studyname). You can choose any folder or studyname that you want, as this new
study functions independently of your original SSI Web study.
4. Import the STUDYNAME.utl file (file of respondent part worth utilities) into your SMRT study
by clicking (from the SMRT software menu) Analysis | Run Manager | Import… and following
the prompts for importing ACA utilities (selecting ACA v4.x Utilities *.utl as the file type, and
browsing to your .utl file). Once you import your utilities, you can select Analysis | Market
Simulator and begin simulations.
Note: If you use ACA/HB software to estimate part worth utilities, from the SSI Web menu, select File |
Export Data | Prepare Conjoint Data for HB… to export the ACA information to a STUDYNAME.acd
format into the same folder containing your ACA/HB system. See the ACA/HB manual for details of
producing a data file containing respondent part worths (a .HBU file). Then, import the resulting utility file
into your SMRT study by clicking (from the SMRT software menu) Analysis | Run Manager | Import…
and following the prompts to browse to and select your .hbu file. Once you import your utilities, you can
select Analysis | Market Simulator and begin simulations.
Moving Segmentation (Generic Survey) Data from SSI Web to SMRT
We assume you have first followed the procedures above for estimating part worth utilities and
establishing those as a utility run in a new project within SMRT.
1. From the SSI Web system, export the data you wish to merge into SMRT. From the SSI Web
menu, select File | Export Data | All Data…, select the respondent numbering method (the
numbering method must match the method you chose when exporting the conjoint data or
estimating utilities), and export the data you wish to merge into SMRT (only numeric data may be
merged). Select either a .CSV file (comma-delimited text-only file) or a fixed text-only file (.TAB).
Note: if you select .CSV format, you must delete the first line of the file (the line of variable labels)
using a text editor such as Notepad or WordPad, or a spreadsheet program such as Excel(TM), prior to
importing within SMRT (one scripted approach is sketched after these steps).
2. Within the SMRT software, open the project previously established that contains the conjoint
data/utility run. Use File | Merge Variables | Add… to merge the data into your SMRT project
(see the SMRT manual or online help within SMRT for details). After merging the data, you’ll
additionally need to specify Custom Segmentation Variables (Under Analysis | Custom
Segments) if you wish to use the new variables as banner points or respondent filters during
simulations.
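As referenced in step 1, a Perl one-liner run from a command prompt is one quick way to strip the label
line from the .CSV export (a convenience sketch only; the output file name is arbitrary):

    perl -ne "print unless $. == 1" STUDYNAME.csv > STUDYNAME2.csv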
i.13.1 How Data Are Stored
When respondents generate records (either complete or incomplete), each is saved as a separate file in
the \data directory on your server. Each data file starts with the four-character Study Name followed by
an alpha-numeric code. For example, the first three respondent records for a study with a study name of
soda might be named:
sodaa.cgi
sodab.cgi
sodac.cgi
The data files are in comma-delimited ASCII format. The data files include all information provided by the
respondent. This can include multiple answers for questions, if respondents backed up in the survey and
went forward again. When data are accumulated, only the last recorded response for each question is
retained. Earlier responses are deleted.
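Conceptually, this “last response wins” rule works as if the accumulation step kept only the final value
written for each question label. The Perl fragment below is only an illustration of that idea (it is not the
actual accumulation code, and it assumes one label,answer pair per line):

    use strict;
    use warnings;

    # "Last response wins": a later duplicate label overwrites the earlier value.
    my %answer;
    while (my $line = <>) {
        chomp $line;
        my ($label, $value) = split /,/, $line, 2;
        $answer{$label} = $value;
    }
    print "$_,$answer{$_}\n" for sort keys %answer;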
When you use the On-Line Data Management module to accumulate the data, each of the separate data
files in the Data directory is appended in a master data file (STUDYNAMEdat.cgi) in the \Admin directory
(again comma-delimited ASCII format). Only the last response provided for questions is placed in the
final data file. When data are accumulated, each data record that has been accumulated into the master
data file is deleted from the \data directory.
Accumulating data is final: you cannot restart incomplete interviews that have been accumulated.
i.13.2 Files Created by SSI Web
SSI Web is a suite of Web-based market research software from Sawtooth Software, including CiW,
ACA/Web and CBC/Web. The files these packages create when running studies are listed below:
Files created when creating surveys on the local PC
Name                Description
STUDYNAME.bak       Backup of .SSI file
STUDYNAME.ssi       Contains short study name, server path, CBC, and Export settings
STUDYNAMEadmn.htm   HTML login file for study administrators
STUDYNAMEcrt.rel    Conditional relationship information
STUDYNAMEctxt.cgi   Conditional relationship text for web server
STUDYNAMElogn.htm   HTML login file for respondents
STUDYNAMEp???.cgi   Contains respondent passwords and quota settings
STUDYNAMEpacc.cgi   Administrative passwords
STUDYNAMEpath.cgi   Server path information and study name
STUDYNAMEpidx.cgi   Index for password ranges in p00? files
STUDYNAMEqst.bak    Backup of questionnaire file
STUDYNAMEqst.cgi    Questionnaire file, including any conjoint attributes
STUDYNAMEtest.txt   CBC design efficiency report
TempWeb.htm         Preview HTML page for current question
TempWeb??.htm       Preview HTML pages for complete survey
STUDYNAMEdsgn.cgi   Design file for CBC/Web
STUDYNAMEatto.dsg   Attribute order file for CBC/Web
ssilogo.gif         Sawtooth Software graphic file
Files created during data collection on the Web server (you should not delete these files manually during
fielding; if you click the Reset Survey option in the Online Administrative Module, these files and folders
are deleted):
Name                   Description
STUDYNAMEcnt.cgi       Counter file (in \admin directory) for assigning internal respondent numbers
STUDYNAME????.cgi      Individual respondent data record (in \data directory)
STUDYNAMEdat.cgi       Accumulated data file (in \admin directory)
STUDYNAMEdat.dat       Accumulated data file when downloaded to local drive
STUDYNAMElog.cgi       Text file containing error messages indicating inconsistent data files. Please review the contents and contact Sawtooth Software if this file exists in your \admin directory.
\STUDYNAMEpass         Created within the \admin directory to keep track of password quotas
UserNamePassword.cgi   If respondents have unique User Name/Password combinations, this file is created when a respondent begins the survey. Contains the internal respondent number.
UserNamePassword.don   If respondents have unique User Name/Password combinations, this file is created when a respondent finishes the survey.
\UsernamePassword      Created inside the \STUDYNAMEpass directory, only if multiple respondents can use the same User Names and/or Passwords. Used to keep track of password quotas.
RESPNUM.don            Created inside the \UsernamePassword directory (only in the case of multiple respondents using the same User Name and/or Passwords), when a respondent has completed a survey.
Files created during analysis of data on the local PC
Name            Description
STUDYNAME.aca   Contains attribute level text to accompany .utl for ACA
STUDYNAME.acd   ACA audit trail file (export to HB)
STUDYNAME.att   CBC attribute label file (created by export)
STUDYNAME.cho   CBC choice data file (created by export)
STUDYNAME.csv   Exported data file (comma-separated values)
STUDYNAME.lay   Data layout file for .tab file
STUDYNAME.sav   Exported SPSS(TM) data file
STUDYNAME.tab   Exported ASCII data file
STUDYNAME.txt   Text file of open-end responses (created by export)
STUDYNAME.utl   ACA conjoint utility file (created by computing utilities)
i.13.3 Error Messages (from Perl scripts)
1-4
Error in QST file. Try re-saving the QST file and uploading again.
5-7
General Sawtooth Software Perl error--Call Sawtooth Software.
8
Can’t restart respondent. The data for this respondent indicate the file is not incomplete or restartable
(status is not 2).
9
Bad respondent data file. A comma doesn’t separate the question label from the answer. Possibly the
number of answers does not match the number expected.
10
Missing ACA data--possibly did not include needed ACA question for utility calculations. Please review
your ACA questions to make sure your Rating and Importance questions are included and have been
answered by this respondent (review your skip logic also).
11
General Sawtooth Software Perl error--Call Sawtooth Software.
12-13
Error in ACA algorithm--Call Sawtooth Software.
14
Too many constraints in ACA. Reduce the number of prohibited pairs.
15-16
General Sawtooth Software Perl error--Call Sawtooth Software.
17
Old version of questionnaire. The version of the Perl scripts is later than the questionnaire version.
Please update your questionnaire by importing the questionnaire, resaving, and uploading again.
18
Mismatch in files. The name of the study as indicated by the STUDYNAMEpath.cgi file does not match
the studyname in the hidden field within the STUDYNAMElogn.htm file.
19
JavaScript parameter doesn’t exist. Call Sawtooth Software.
20
Studyname doesn’t exist. The studyname in the link is incorrect or has been broken. Hid_studyname is
not defined and so the studynamepath.cgi cannot be found.
i.13.4 Trouble-Shooting Server-Side Problems
If you are running on a Microsoft Web Server, please view alternate setup instructions.
Below is a list of suggestions you might find helpful in trouble-shooting web problems:
1. Make sure that the path on the first line of the scripts (ciwweb.pl and admin.pl) is the correct path to
your Perl executable. We have set the path to the most common directory for the Perl interpreter
(sometimes called the Perl executable), which is /usr/bin/perl. Another common location is
/usr/local/bin/perl. If you edit the first line, make sure you do not add a carriage return. If you
inadvertently do, the script will not run and you must strip the carriage returns using the stripcr utility that
we have provided in the \resources directory. (Note: only run the stripcr program on the .pl files.) Consult
the Web server administrator if you don’t know the correct path.
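For example, the first line of ciwweb.pl and admin.pl typically reads:

    #!/usr/bin/perl

or, on servers where the interpreter is installed elsewhere:

    #!/usr/local/bin/perl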
2. When uploading your files, make sure you upload them in "binary" mode so that the number of bytes in
the files does not differ between your machine and the server.
3. Unix servers are case sensitive when it comes to filenames. Make sure that the case on your
filenames matches. We’d suggest using all lower case names. It is a good idea to make your study
names all lower case.
4. Make sure you set the permissions correctly on the files. Some servers won’t run Perl scripts with
permissions of 777 (all permissions). The admin and data directories need "write" permissions so we can
store data and update the password tables. When troubleshooting, you can set permissions to 777 to
make sure it is not a permissions problem.
5. Beware of Microsoft’s FrontPage. Many users have discovered that files and paths within the files get
mysteriously changed by this program, which tries to "outguess" the user; usually the program guesses
wrong.
6. Some ISPs or web servers require a special subdirectory where Perl scripts can run. Find out from
your ISP what their rules are for running Perl scripts. Sometimes subdirectories for Perl scripts are called
cgi-local or cgi-pvt. Restrictions on directory names are a good thing since there is usually added security
on directories of fixed names such as cgi-bin. Also, some servers provide added protection for files with
an extension of .cgi. This is why many of our files have this extension even though not all of them are CGI
scripts. If you use a directory other than cgi-bin, make sure you update this information in your server
directory paths under Advanced Settings.
7. Some servers that do not use conventional permissions as we have outlined have a utility that should
allow you to "set" an "executable bit" on the .pl files (ciwweb and admin). This is required for running the
scripts.
8. Some servers, including many NT servers, require alternate setups that are different from our default
procedure. In these cases, you might need to change the paths in Advanced Settings. Under File |
Settings | Advanced, you can modify the system paths.
When changing the paths under Advanced Settings, the five paths are as follows:
Administrative Directory:  Path from where the Perl files are running to the /admin directory
CGI Scripts Directory:  Path from STUDYNAMElogn.htm and STUDYNAMEadmn.htm to the directory that contains ciwweb.pl and admin.pl
Data Directory:  Path from where the Perl files are running to the /data directory
Relative Path from CGI Scripts to Graphics:  Path from where the Perl scripts are being executed to the /graphics directory
Relative Path from logn.htm to Graphics:  Path from STUDYNAMElogn.htm to the /graphics directory
9. If you have changed the default directory structure, you may need to move the file
STUDYNAMEpath.cgi to your web root so the scripts can find it. The scripts search for
STUDYNAMEpath.cgi in the following locations, in this order, relative to the directory from which the
Perl scripts are running:
../admin/
current directory
studyname/admin/
../studyname/admin/
10. If you want to delete all the respondent data up on the web that is associated with a study, go to the
admin.htm and choose Reset Survey, OR, delete all the files in the data directory and the following files in
the admin directory: STUDYNAMEdat.cgi, STUDYNAMEcnt.cgi. Then, upload your password files again
(pidx, p001, p002, etc.) to reset them so all the quota cells are open. If you discover the file
STUDYNAMElog.cgi (which contains possible error messages) in your \admin directory, contact
Sawtooth Software to report it.
11. When you use the stripcr utility, make sure you only run it on the Perl scripts and NOT on the study
files.
CiW/ACA Perl test scripts
If you are experiencing difficulty in getting your questionnaire to run on the web, it may be helpful to
trouble-shoot the problem by using very simple Perl scripts. You can find these scripts in your
\Resources directory in your SSI Web installation. Once these three test .pl files run successfully, the
standard CiW and ACA Web scripts have a very good chance of also running successfully.
The following Perl scripts should be renamed ciwweb.pl one at a time and placed in the cgi-bin directory
for testing purposes. They should be "called" via the STUDYNAMElogn.htm page that you have
uploaded to your base study directory on the web.
TEST1.PL - simply tests Perl linkage. A debug test message is written (“Debug Test #1
Successful”) if Perl is installed correctly and we are able to call it from our STUDYNAMElogn.htm
file.
Failure here would mean that the Perl interpreter is not being called and executed. You can check for the
correct path to the interpreter on the first line of the script and that Perl has been installed on the
machine. You can also check the permissions on the script to make sure it has "executable" permission.
Call your ISP and explain that you need help running a simple “hello world” script.
TEST2.PL - tests linkage of a simple Perl program with an industry-standard library, lite.pm. If the
test is successful, a debug test message is written (“Debug Test #2 Successful”). A list of form
and environment variables is passed back and listed on the screen. (lite.pm must reside in your
scripts directory.)
Failure here can mean that the Perl libraries are not in the expected directory or that ciwweb.pl cannot
find any of its “included” files. You may need to modify the path for the "include" statements at the
beginning of the script.
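The exact include statements vary by script version, so treat the following as a generic Perl illustration
only; the directory path is a placeholder, not a real default:

    # Generic illustration -- the actual lines in ciwweb.pl may differ.
    use lib '/home/yoursite/cgi-bin';   # placeholder: the directory that contains lite.pm
    require 'lite.pm';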
TEST5.PL - tests to see if we can write a test file "debug.txt" to the admin directory. If the test is
successful, a debug test message is written (“Debug Test #5 Successful”). Requires that
TEST2.PL has already run successfully. (lite.pm must reside in your scripts directory).
Failure here can mean that you do not have "write" permissions to the admin directory.
TEST6.PL - helps you determine what Perl’s current working directory is.
i.13.5 Web Server Setup for Microsoft IIS Servers
If you are hosting your surveys on a dedicated IIS server and you can create your own virtual directories,
you might consider setting up a virtual directory for each survey. For documentation on how to set up a
dedicated IIS server, please refer to http://www.sawtoothsoftware.com/support/ssiweb/310/IISsetup.html
i.13.6 Networked Environments Where Users Cannot Access the
Program Files Folder
Software users may wish to install SSI Web on networked computers to allow multiple users access to the
program. SSI Web was not designed for use in this type of networked environment, so users will need to
make a few adjustments to use some software features.
In some systems, users without administrative rights are prohibited from accessing, modifying or storing
files in the “Program Files” area of the hard drive. Unfortunately, this is the location where SSI Web stores
the tutorial files, saves files for the Local Web feature of the program, and is also the default Study
directory.
The following procedures can help you use SSI Web when individuals do not have access to the Program
Files directory:
1. Individual users will need to save study files in directories to which they have access.
2. If tutorial files are needed, they should be copied by the administrator into a directory that will be
accessible to the individual user or users.
3. Local Web will not work using the automatic upload feature. To test a survey using Local Web, users
will need to manually establish the required file structure.
a) Turn off Apache by selecting Web Server | Local Web Server | Settings and clicking Stop
Apache. Then select Manually Start and Stop Apache.
b) Copy the \Local Web folder from the installation directory to a writable directory on your hard
drive (this must be done by a user with administrative rights).
c) Open the \htdocs folder inside the copy of the \Local Web folder. Set up your survey files as
if you were setting them up on a remote server. You will need to create the admin, data, cgi-bin, and
graphics directories. Copy the required study files from your study directory, and place the
ciwweb.exe and admin.exe files in the cgi-bin in place of the *.pl files.
d) Modify the STUDYNAMElogn.htm file to go to ciwweb.exe instead of ciwweb.pl, as follows:
<FORM NAME="mainform" METHOD="POST" ACTION="cgi-bin/ciwweb.pl"> should be changed
to: <FORM NAME="mainform" METHOD="POST" ACTION="cgi-bin/ciwweb.exe">
e) Repeat step (d) for STUDYNAMEadmn.htm
f) Inside the Local Web folder, double-click runweb.exe. This starts Apache and configures it to
run in its new location.
g) Open your web browser, and type “http://localhost/STUDYNAMElogn.htm” in the address bar.
You should then be able to run your survey, including execution of any skip patterns, JavaScript,
etc. The Online Administration Module is accessed at “http://localhost/STUDYNAMEadmn.htm”.
i.14.1 HTML Cheat-Sheet
Text Tags
<pre></pre>   Creates preformatted text
<h1></h1>   Creates the largest headline
<h6></h6>   Creates the smallest headline
<b></b>   Creates bold text
<u></u>   Creates underlined text
<i></i>   Creates italic text
<font size=?></font>   Sets size of font, from 1 to 7
<font color=?></font>   Sets font color, using name or hex value
Links
<a href="URL"></a>   Creates a hyperlink
<a href="mailto:EMAIL"></a>   Creates a mailto link
<a name="NAME"></a>   Creates a target location within a document
<a href="#NAME"></a>   Links to that target location from elsewhere in the document
Formatting
<p></p>   Creates a new paragraph
<p align=?>   Aligns a paragraph to the left, right, or center
<br>   Inserts a line break
<blockquote></blockquote>   Indents text from both sides
<dl></dl>   Creates a definition list
<dt>   Precedes each definition term
<dd>   Precedes each definition
<ol></ol>   Creates a numbered list
<li></li>   Precedes each list item, and adds a number
<ul></ul>   Creates a bulleted list
Graphical Elements
<img src="name">   Adds an image
<img src="name" align=?>   Aligns an image: left, right, center; bottom, top, middle
<img src="name" border=?>   Sets size of border around an image
<hr>   Inserts a horizontal rule
<hr size=?>   Sets size (height) of rule
<hr width=?>   Sets width of rule, in percentage or absolute value
<hr noshade>   Creates a rule without a shadow
Tables
<table></table>   Creates a table
<tr></tr>   Sets off each row in a table
<td></td>   Sets off each cell in a row
Table Attributes
<table border=#>   Sets width of border around table cells
<table cellspacing=#>   Sets amount of space between table cells
<table cellpadding=#>   Sets amount of space between a cell's border and its contents
<table width=# or %>   Sets width of table, in pixels or as a percentage of document width
<tr align=?> or <td align=?>   Sets alignment for cell(s) (left, center, or right)
<tr valign=?> or <td valign=?>   Sets vertical alignment for cell(s) (top, middle, or bottom)
<td colspan=#>   Sets number of columns a cell should span
<td rowspan=#>   Sets number of rows a cell should span (default=1)
<td nowrap>   Prevents the lines within a cell from being broken to fit
Forms
<select multiple name="NAME" size=?></select>   Creates a scrolling menu; size sets the number of menu items visible before you need to scroll
<option>   Sets off each menu item
<select name="NAME"></select>   Creates a pulldown menu
<option>   Sets off each menu item
<textarea name="NAME" cols=40 rows=8></textarea>   Creates a text box area; columns set the width, rows set the height
<input type="checkbox" name="NAME">   Creates a checkbox; text follows tag
<input type="radio" name="NAME" value="x">   Creates a radio button; text follows tag
<input type=text name="foo" size=20>   Creates a one-line text area; size sets length, in characters
<input type="submit" value="NAME">   Creates a Submit button
<input type="image" border=0 name="NAME" src="name.gif">   Creates a Submit button using an image
<input type="reset">   Creates a Reset button
i.14.2 HTML Editor (Pencil Icon)
SSI Web includes a simple editor with a useful toolbar for writing text and inserting HTML syntax for your
survey. The buttons on the toolbar are useful for making questionnaire editing more efficient. For a
description of each button, hold your mouse over each icon for a few moments and a tooltip message
appears describing the button. If you are still uncertain regarding the functionality, click the button to add
HTML elements or modify your existing text within the editor and click the Preview button to observe the
effect.
HTML supports many fine formatting options, and we suggest you become familiar with the ways to make
your surveys look even more professional. Please see the HTML Cheat-Sheet for additional help.
The first few buttons have to do with adding style (e.g. bold, italics, underline) and other buttons deal
with formatting (insert blank lines, horizontal rules, text sizing, centering, block quotes, and color). When
you highlight text and click these items, HTML instructions are added to your text. Of course, if you prefer
to type the HTML tags yourself, you do not need to use the buttons on the toolbar.
When you use the Add Image icon to add graphics to your study, your <IMG> tags then contain the
variable %%graphics_path% in their paths. %%graphics_path% will be replaced on the web server with
the path that you provided here in the Advanced Settings area. For example if you set your graphics path
to "../MyGraphics/" and you included <IMG SRC="%%graphics_path%picture.jpg" BORDER="0"> in your
questionnaire, then on the web server that path would dynamically change to <IMG
SRC="../MyGraphics/picture.jpg" BORDER="0">.
You can add hyperlinks when using SSI Web’s text editor by clicking the “anchor” icon, and customizing
the address for the HTML that is automatically inserted.
A “table” button automatically inserts skeleton HTML for formatting tables.
When editing a Free Format question, the Free Format Controls icon helps you insert appropriate HTML
instructions to ask the particular variable type within your Free Format question. This tool further ensures
that your variable names match the names you previously declared.
i.15.1 System Variables
The following system variables are available for displaying on the screen (after enclosing in % symbols),
or for use in skip patterns and in hidden variables (Free Format). To return the value of a system
variable, enclose it within % symbols (e.g. %%A12I%).
%ANI             Returns Importance score, where N is the attribute (1-30)
%ANLN            Returns Utility score, where the first N is the attribute and the second N is the level
%debug           Returns debug info (displays all saved data to this point and system variables)
%graphics_path   Returns the graphics path
%JavaScript      Returns 1 if they have JavaScript, 0 if they don't. Typically used in skip logic
%Password        Returns password (case-sensitive)
%progress_bar    Returns the progress bar
%RandnumN        Displays a random number with N as the seed
%RespNum         Returns the internal respondent number (not letter)
%tab_width       Returns the global tab width
%UserName        Returns username (case-sensitive)
Some of the system variables can be used in error text messages, and they do not need to be enclosed
in % symbols.
%field          Returns question number on page. Only used in error messages.
%questionname   Returns the name of the question. Only used in error messages.
Some system variables may be used with CBC studies.
%%CBC_NONE%      Displays the None text and radio button in the header or footer of a CBC question (Shelf-Facing Display only)
%%CBC_Version%   (The variable is case-sensitive.) If placed in the header or footer of a CBC task, it displays the current questionnaire version # and task name. This is useful for paper-and-pencil surveys, for aid in field management and data processing.
i.16.1 CAPI Interviewing with “SSI Web CAPI”
SSI Web generally is used for interviewing over the Internet. But, SSI Web studies can also be used for
interviewing in “CAPI” mode (Computer Aided Personal Interview), which generally refers to data
collection using a laptop or a PC not connected to the internet. We recommend Windows 98 or later for
the PC, laptop or server used for collecting data.
You do not need to buy an additional module or software package other than the general SSI Web
system to conduct CAPI interviewing (the additional software can be downloaded at no charge from our
web site). The main steps are as follows:
1. Develop your study as you would for Web interviewing, including setting up passwords, etc.
2. Test the study to ensure it works properly, including checking skip patterns, examining the
resulting data, conducting a pre-test, etc.
3. Install the “SSI Web CAPI” software on laptops or PCs used for interviewing (if using a networked
environment for interviewing, SSI Web CAPI only needs to be installed on the server). This is
freely downloaded software at
www.sawtoothsoftware.com/download/ssiweb/capi/ssiwebcapi.htm. Or, you can burn this
installation program onto a CD-ROM to deliver to the data collection facility. (The CAPI
installation program is also included in the “CAPI” subdirectory of the SSI Web program.)
4. Use Field | Create CAPI Package to create a “CAPI package,” which is a single file (.CAPI
extension) that contains all the information necessary to run the study. If multiple PCs (not
networked) are used in interviewing, you may want to create multiple .CAPI files, one for each
interviewing PC, for better field data management.
5. Email or otherwise deliver the .CAPI file(s) to the data collection facility. They load the study and
begin interviewing. They can give you periodic reports regarding the number of complete,
incomplete and disqualified records.
6. The data collection facility sends you a data file for each .CAPI file that was loaded and used for
interviewing, which represents all data records collected to that point.
7. Place each data file within a folder. Select Field | Accumulate CAPI Data File(s)… and select the
folder that contains the data files. That folder is searched for .dat files with the format
STUDYNAME_#.dat, where # is a number.
Create CAPI Package Files Dialog
Each .CAPI file you create contains all the files and information needed to run the study (the
questionnaire, passwords, graphics, Perl scripts, etc.). Including the Perl scripts in each .CAPI file
ensures that each project is fully self-contained, and that the data collection site can manage multiple
studies without regard to which version of SSI Web was used to create the studies.
Beginning CAPI Package Index Number
Select a beginning number for the .CAPI package(s) to be created in the current session. If you
select “1” as the starting point (with multiple packages to create), the .CAPI packages are named
starting with STUDYNAME_1.CAPI. The next .CAPI packages in the series follow as
STUDYNAME_2.CAPI, STUDYNAME_3.CAPI, etc. Creating multiple .CAPI packages and
indexing them is a useful aid for field data management, however, there isn’t any requirement that
you create multiple packages. In reality, multiple .CAPI files from the same study are identical in
every way (assuming you didn’t change anything else about your study between .CAPI file
creation sessions, such as to add new passwords or modify the questionnaire).
You can create .CAPI packages in multiple sessions. For example, you may discover after a few
days of interviewing that you want to create a few additional .CAPI packages. If you earlier had
created .CAPI packages 1-10, you might now create .CAPI packages 11-20. (If for some reason
you wanted to start naming the files in the second session starting with #83, it wouldn’t matter.
Again, the numbers attached to files are merely to help you organize and manage files coming
from potentially multiple sources in the field.)
Some SSI Web users who are familiar with our other interviewing systems (such as Ci3 or SMRT)
may wonder how the beginning respondent numbers are specified for each .CAPI file, or how
many respondents are allowed per .CAPI installation. Internally, we use a random seed based on
the system clock to assign each record an internal respondent number (this is used as a seed for
question/item randomizations and assignment to CBC designs). If you interview many
respondents, it is possible to get duplicates (there are 475,254 possible unique internal
respondent IDs). However, we generally recommend that you use a different variable (such as
User Name or Passwords) as respondent IDs, or that you simply export the data with the cases
numbered sequentially from 1 through n. We don’t suggest you use SSI Web’s internal
respondent number in analysis. Each .CAPI installation can collect up to 475,254 possible
respondents.
Number of CAPI Package Files to Create
Select the number of .CAPI files to create within the current session. If the PCs or laptops used
for interviewing operate independently (are not set up to use a single networked version of SSI
Web CAPI), then you may want to create one file per interviewing PC, for ease of field data
management. (If you and your field agency would not be bothered by different batches of
respondents having the same file name, then you are welcome to create a single .CAPI package
and distribute this same file to all data collection sites and PCs.)
The name of the .CAPI files follows directly from your study name. If the study is named ABCD
and you create 10 CAPI packages, starting with an index number of 1, the files are named
ABCD_1.CAPI, ABCD_2.CAPI . . . ABCD_10.CAPI.
Warning: Only one .CAPI file with the same study name should be used by each computer.
If (using the SSI Web CAPI program) the interviewer loads a second .CAPI file (say
XXXX_2.CAPI) onto a PC that has already been interviewing using a XXXX_1.CAPI
installation, all the data files for the XXXX_1.CAPI installation will be overwritten (though
we always store vital data in a sub-directory of the /Archive folder within a SSI Web CAPI
installation prior to deleting any data). We provide a warning dialog within the CAPI
module prior to any step that would delete the data.
Directory to Create CAPI Package Files:
This indicates the folder in which to save the .CAPI files. You cannot change this folder.
Include Administrative Module
If you want the individual(s) conducting data collection to have access to SSI Web’s Online
Administrative Module, check this. We generally suggest not including the module, unless the
individual(s) performing data collection have a good understanding of the functionality of the
Administrative Module.
Even without including the Administrative Module, the SSI Web CAPI program lets the data
collection facility accumulate data for each .CAPI file into a single data file, and gives them a tally
of the number of completes, incompletes, and disqualified records in each data file.
Interview “Recycle”
When running in CAPI mode, you may want the questionnaire to automatically recycle from the last page
of the survey back to the first, so that it doesn’t drop to the main SSI Web CAPI menu each time.
The easiest way to do this is to create a Text/HTML Filler question at the end of your survey (on its own
page) that has a hyperlink such as <A HREF="../XXXXlogn.htm">Return to the Beginning of the
Survey</A> where "XXXX" is the name of your study.
i.16.2 Accumulate CAPI Data Files
This dialog lets you select the folder that contains the .dat files that were delivered to you (for example,
from the data collection facility). Any and all .dat files are accumulated into your project from that folder
and all its sub-folders.
For example, let’s assume your main SSI Web study (short study name 0893) is located in a folder
named Auto0893. Further let’s assume you have created three CAPI packages for interviewing:
0893_1.CAPI, 0893_2.CAPI, and 0893_3.CAPI.
The data collection site has interviewed using three PCs and delivered three separate .dat files to you via
email: 0893_1.dat, 0893_2.dat, and 0893_3.dat. You place these three files in a folder (perhaps a
subfolder of your main Auto0893 folder) called Auto0893 CAPI Data:
Auto0893
    (Contains SSI Web study files, including your final 0893dat.dat file, which is the one used in all
    export and data analysis functions in SSI Web.)
    Auto0893 CAPI Data
        0893_1.dat
        0893_2.dat
        0893_3.dat
During the Accumulate CAPI Data Files process, you’ll select the Auto0893 CAPI Data folder, and all
data files in that folder named 0893_?.dat will be accumulated and either appended to or used to
overwrite (based on your selection when prompted) the 0893dat.dat file in your Auto0893 folder.
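For illustration only, the fragment below shows conceptually how files matching the 0893_?.dat pattern
could be located in Perl. SSI Web performs this search for you (including sub-folders, which this sketch
does not scan); the folder and study names follow the example above.

    use strict;
    use warnings;

    # Locate the CAPI data files delivered by the field site (top-level folder only).
    my @dat_files = glob('"Auto0893 CAPI Data"/0893_*.dat');
    print "$_\n" for @dat_files;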
Please note that the data files delivered to you from Web CAPI interviewing are cumulative data files,
including all interviews conducted to date. Therefore, we suggest when you accumulate CAPI data files,
that you accumulate all data over all CAPI files, overwriting any data in the current project file.
i.16.3 Deleting Sensitive Data from CAPI Interviewing PCs
We have designed SSI Web CAPI to make it very difficult to lose important data. Prior to any step in
which data might be deleted, we warn the interviewer that important data are about to be deleted. Even if
the interviewer chooses to perform an operation within the SSI Web CAPI interface that deletes data, we
still save a copy of vital data files within a sub-directory of the /Archive folder within the SSI Web CAPI
installation. This keeps interviewers from accidentally deleting critical data. However, this also makes it
more difficult to clean sensitive data completely from a PC at the end of a project (e.g. prior to returning
rental PCs).
Even if you uninstall the SSI Web CAPI program, data files created during project administration are still
maintained on the hard drive.
To completely delete study data from an interviewing PC (running under SSI Web CAPI):
1. From the SSI Web CAPI menu, highlight the study to be deleted on the list of Available Studies. Click
Delete Study button. This deletes the data for the project from the project folder. However, for safety
precautions, we place critical data files in a sub-folder of the /Archive directory as a result of this deletion
step.
2. Browse to the ../Program Files/Sawtooth Software/SSI Web CAPI Module/Archive directory. You
may find sub-folders within that directory with the study name, plus a time-stamp reflecting the exact time
and date the data were archived. Delete any of these folders containing sensitive data.
3. Empty the Recycle Bin. (Even though you deleted the folders in Step 2, Windows retains these in the
Recycle Bin until you empty it.)
ii.1.1 Getting Started with ACA/Web
ACA/Web is software for conducting Adaptive Conjoint Analysis studies over the Internet, over an
intranet, or using PC (CAPI-based) data collection. ACA/Web is part of the SSI Web software suite.
ACA/Web does not include printed hard-copy documentation. We hope that the electronic documentation
provided here is sufficient to answer your questions. For more general information about ACA, we
suggest that you read the “ACA Technical Paper” available for downloading from our home page at
www.sawtoothsoftware.com.
Your license entitles you to free technical support via phone, email or fax. We encourage you to contact
us (360/681-2300) if you cannot find the answer to your problem or question in this documentation. We
are available Mon-Fri, 8:30 am to 5:00 pm Pacific time. (Note: we provide full technical support for
composing and testing your Internet survey on your local PC, and provide written instructions on how to
install your questionnaire on your server. We cannot take responsibility for or be expected to provide
technical support to debug any customized HTML or JavaScript you may add to your survey. You are
also responsible to make sure that Perl is running and that you can run Perl (CGI) scripts on your web
server. We can provide some assistance related to SSI Web’s use with your server once the fundamental
Perl functionality is in place.)
A sample ACA/Web questionnaire (tut2.ssi) and supporting files are available in the Tutorials folder within
your SSI Web installation. You can easily access this example study by clicking Help | Tutorial Studies.
This questionnaire lets you see an example of an ACA/Web survey and investigate how it is set up. To
see how the questionnaire looks and adapts when it is running “live” over the Web, you can log onto
www.sawtoothsoftware.com/acanet/login.htm.
If you know a little HTML (which is not necessary to use ACA/Web), you can employ some “tricks” that will
help you customize the look of your survey beyond the standard options provided by the software. By
opening the sample study and looking at the settings and text we’ve used, you can become familiar with
some of the possibilities.
Using ACA/Web requires the following skills and knowledge base:
The analyst managing the survey should know the basics of conjoint analysis, including defining
attributes and levels properly, correctly phrasing the questionnaire text and response scales,
interpreting the conjoint part worth utilities, and running simulations. It is not necessary to be a
statistician to use our software effectively, but knowledge of basic statistics for market research is
highly recommended.
The questionnaire is first composed locally on a PC running under Windows 95 or later with
Internet Explorer, Netscape or another Web browser installed. The individual inputting/formatting
the survey should have working knowledge of standard Windows applications such as word
processing, Web browsers and/or spreadsheets. Though it is not necessary, some knowledge of
HTML is valuable.
To upload the survey to the Web, one must have access to an Internet/intranet server running
Perl 5.003 or later. This can be your company’s own Web server, or your ISP’s (Internet Service
Provider’s) Web server. The user must know how to use FTP software to create directories,
upload files to those directories, and set permissions on the files and directories. Click here to
receive detailed instructions on the directories, files and permissions.
To get started, we suggest you view other areas within this on-line help library, especially:
What is Adaptive Conjoint Analysis?
Basic Overview of ACA/Web
Sections and Flow of an ACA Questionnaire
ii.1.2 What Is Adaptive Conjoint Analysis?
Adaptive Conjoint Analysis (ACA) was first introduced by Sawtooth Software in 1985 and went on to
become the most widely used conjoint software and conjoint analysis technique in the world during the
1990s (Wittink, Huber, and Vriens 1997), (Green 2000). Most recently, Choice-Based Conjoint
techniques have become more popular than ACA, but ACA still finds a great deal of use and has unique
benefits for many situations.
Conjoint (tradeoff) analysis is a technique frequently used for market research applications. Such
applications are usually concerned with consumer preference and they attempt to assess the impact of
specific product features on overall preference. Products or services are thought of as possessing
specific levels of defined attributes, and a respondent’s “liking” for a product is modeled as the sum of the
respondent’s “part worths” (sometimes called utilities) for each of its attribute levels. For example,
attributes for a car could include: Price, Type, Gas Mileage, and Color. Levels of the Gas Mileage
attribute might include 15 MPG, 20 MPG, and 25 MPG; and levels for the Type attribute might include:
Convertible, Minivan, Compact, and Full Size.
The strength of conjoint analysis is its ability to ask realistic questions that mimic the tradeoffs that
respondents make in the real world. Respondents evaluate product alternatives (concepts) described by
various attributes and indicate which products they prefer. By analyzing the answers, conjoint analysis
can estimate the weights and preferences respondents must have placed on the various features in order
to result in the observed product preferences. In contrast to direct questioning methods that simply ask
how important each attribute is or how desirable each level is, conjoint analysis forces respondents to make
difficult tradeoffs like the ones they encounter in the real world. As in real purchase decisions, buyers
cannot get all the best features at the lowest prices.
It would be time consuming and difficult for respondents to evaluate all possible product combinations in
order to provide information on their values for the various product features. Conjoint analysis offers the
researcher a more efficient way to obtain such information: only a carefully chosen set of hypothetical
product concepts is presented to respondents for evaluation. For example, a respondent might be asked
to choose between the following two concepts:
The answers to this and successive questions are used to determine the respondent’s part worths for
each of the attribute levels. Once part worths have been determined, the respondent’s overall utility
(preference) for a given product can be estimated by summing the part worths for each attribute level that
describes that product.
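To make the arithmetic concrete, here is a minimal Perl sketch (with made-up part worth values for the
car example above) showing that a product’s overall utility is simply the sum of the part worths of its
levels:

    use strict;
    use warnings;

    # Made-up part worths for illustration only.
    my %part_worth = (
        'Type:Convertible'   => 1.2,
        'Gas Mileage:25 MPG' => 0.9,
        'Color:Red'          => 0.3,
        'Price:$18,000'      => 1.5,
    );

    # A product is described by one level per attribute; its estimated utility
    # is the sum of the part worths of those levels.
    my @product = ('Type:Convertible', 'Gas Mileage:25 MPG', 'Color:Red', 'Price:$18,000');
    my $utility = 0;
    $utility += $part_worth{$_} for @product;
    print "Estimated utility: $utility\n";   # prints 3.9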
Perhaps the most serious practical problem for users of conjoint analysis is that of dealing with large
numbers of attributes. Clients often have a detailed understanding of their product category, and
previous research has usually identified many issues of interest. The respondent, on the other hand,
usually has less interest and enthusiasm, and is rarely willing to submit to a long interview. The
researcher is often in conflict. Should there be a long interview, risking unhappy respondents and data of
questionable quality? Or should the researcher insist on a narrower focus in the interview, providing the
client with less breadth than desired? ACA resolves this conflict by eliminating the tradeoff altogether.
For example, a traditional (non-ACA) full-profile conjoint study with 12 attributes and four levels per
attribute would require estimation of 37 utilities (the number of attributes x (the number of levels per
attribute -1) +1). If the questions presented to the respondent have been well chosen, they will be difficult,
perhaps requiring much thought. In addition, under traditional full-profile conjoint analysis, each product
concept is described using all 12 attributes, requiring much reading on the part of the respondent. At the
very minimum, the respondent would have to provide 37 answers; if there is any random component to
the responses we would need more observations. An accepted rule of thumb is to require three times as
many observations as parameters being estimated. This would require the respondent to answer 111
questions, each presenting a concept described on 12 attributes. Adaptive Conjoint Analysis solves this
problem by customizing the interview for each respondent: the respondent is asked in detail only about
those attributes and levels of greatest relevance, and the task is made more manageable due to partial
rather than full profiles (only showing a subset (usually 2 to 5) of the attributes in any one question). The
term "adaptive" refers to the fact that the computer-administered interview is customized for each
respondent; at each step, previous answers are used to decide which question to ask next, to obtain the
most information about the respondent's preferences.
Once the data are collected, ACA lets the researcher simulate respondent preferences for new or
modified products. The Market Simulator can be used to explore "what if” scenarios, such as changes in
product formulation. The researcher describes a group of hypothetical products by specifying each
product's level on each attribute. Respondent part worth utilities are used to estimate strengths of
preference or buying likelihoods for each product, and results are accumulated over respondents to
provide shares of preference among competing products or average estimated buying likelihoods for
each product.
ii.1.3 Basic Overview of ACA/Web
ACA for the Web lets the researcher design a computer-interactive conjoint interview for data collection
over the Web, an intranet, or using CAPI-based (stand-alone PC) interviewing. The interview can
consider many attributes and levels, paying special attention to those the respondent considers most
important. Questioning is done in an "intelligent" way; the respondent's part worths are continually
re-estimated as the interview progresses, and each question is chosen to provide the most additional
information, given what is already known about the respondent's values. In addition to the conjoint-related ACA questions, ACA/Web comes with a CiW/10 system that can include up to 10 “standard”
survey questions for capturing additional information for use in market segmentation. These 10 questions
can include randomizations and skip patterns. (More than 10 questions can be fielded in conjunction with
ACA/Web through larger CiW licenses. Both CiW and ACA/Web are products within the SSI Web suite of
Web interviewing products.) Respondent data are saved to a database (ASCII file) on the Web server (or
on separate PCs, if using CAPI-based data collection), and are available in real-time upon completion of
the interview.
ACA/Web was developed using established industry-standard technology (Perl and HTML) available on
most Web servers. The interviews can be completed on nearly any Web browser. The steps for
designing and conducting an ACA/Web study are typically as follows:
1. Using a Windows-based tool running locally on a PC (without being connected to the Web), the
researcher designs the ACA study: types in the attributes and levels, adds additional survey text,
and chooses from a menu of options that control the overall flow of the interview. The researcher
formats questions using the point-and-click interface and views the questions one page at a time
using his/her browser (i.e. Internet Explorer or Netscape). Additionally, the researcher creates a
table of passwords to assign to respondents and quotas for each password combination.
2. Once the researcher is satisfied with the look, flow and content of the interview, the interview is
uploaded onto a Web (or intranet) server (or, optionally, to multiple PCs for CAPI-based
interviewing). The online help includes detailed instructions regarding the directories, files and
permissions that must be included in the installation.
3. The researcher tests the ACA/Web survey on-line by accessing the interview over the Web (or
intranet) using a Web browser, making sure the interview is administered as expected, and
checking the resulting data using the Online Data Management module to ensure that the data
are being stored correctly. A pretest is often conducted during this stage to evaluate the
suitability of the survey and to make sure the resulting utility data have face validity.
4. Respondents are invited to participate in the survey (usually via email) and are given the URL
address and password. While the interview is in progress, the researcher and/or client can view
or download the data at any time without disturbing the data collection or interviews that may be
in progress.
5. When interviewing is complete, the data are downloaded using ACA/Web’s Online Data
Management module, and the respondent utilities are estimated using ACA/Web’s utility
calculator locally on the researcher’s PC (or using ACA/HB software for hierarchical Bayes
estimation). The researcher then uses the Market Simulator software included with the ACA/Web
software to analyze the results and conduct market simulations.
ii.2.1 Sections and Flow of an ACA Questionnaire
Suppose the attributes being studied describe automobiles, with these levels:
BODY TYPE
Two-Door Sedan
Four-Door Sedan
Hatchback
Minivan
Convertible
DRIVE CONFIGURATION
Rear-Wheel Drive
Front-Wheel Drive
All-Wheel Drive
ORIGIN
Made in USA
Made in Japan
Made in Europe
COLOR
Red
Blue
Green
Yellow
White
Black
PRICE
$18,000
$20,000
$22,000
$24,000
$26,000
The ACA/Web interview has several sections, each with a specific purpose. We’ve provided examples of
the questions in each section, together with brief explanations of their purposes.
1) Preference for Levels (Required Section)
(The “ACA Rating” question type.)
First, the respondent rates the levels for preference. This question is usually omitted for attributes (such
as price or quality) for which the respondent's preferences should be obvious. (When you input the
attributes and levels, you can specify that the order of preference for levels is “best to worst” or “worst to
best” and the ratings question is skipped for such attributes.) The screen may look like the following:
(The rating scale can be defined from 2 to 9 points. We suggest using at least 5 scale points. In any
case, it is probably not wise to use fewer scale points than the number of levels in any one attribute for
which the Rating question is asked.)
The respondent is required to check one radio button per attribute level. If a level is skipped, ACA/Web
will prompt the respondent to complete the question prior to moving to the next page in the interview.
2) Attribute Importance (Required Section)
(The “ACA Importance” question type.)
Having learned preferences for the levels within each attribute, we next determine the relative importance
of each attribute to this respondent. This information is useful in two ways. First, it may allow us to
eliminate some attributes from further evaluation if the interview would otherwise be too long. Second, it
provides information upon which to base initial estimates of this respondent's utilities.
As a matter of fundamental philosophy, we do not ask about attribute importance with questions such as
"How important is price?" The importance of an attribute is clearly dependent on magnitudes of
differences among the levels being considered. For example, if all airline tickets from City A to City B
were to cost between $100 and $101, then price couldn't be important in deciding which airline to select.
However, if cost varied from $10 to $1000, then price would probably be seen as very important.
Our questioning is based on differences between those levels the respondent would like best and least,
as illustrated:
(The rating scale can be defined from 2 to 9 points. We suggest using at least 4 scale points.)
It is worth mentioning that there are different ways to phrase the “importance” question, and indeed
referring to attribute “importance” is sometimes difficult for some respondents to comprehend. How the
wording of this question should be phrased depends on the audience. Other possibilities with the
ACA/Web software include:
The “Regrets” Format:
The “Unexpected Windfall” Format:
At this point we have learned which attributes are most important for this respondent and which levels are
preferred. From now on, the interview is focused on those most important attributes and combinations of
the levels that imply the most difficult trade-offs.
3) Paired-Comparison Trade-Off Questions
(The “ACA Pairs” question.)
Next, a series of customized paired-comparison trade-off questions is presented. Up to this point in the
interview, we have collected “prior” information; no conjoint analysis has been involved. The Pairs
section elicits the conjoint tradeoffs. In each case the respondent is shown two product concepts. The
respondent is asked which is preferred, and also to indicate strength of preference. The example below
presents concepts differing on only two attributes. Although concepts may be specified on up to five
attributes, simple concepts like these present an easy task for the respondent, and are a useful way to
begin this section of the interview.
(The rating scale can be defined from 2 to 9 points. We suggest using at least 7 scale points.)
The number of attributes appearing in each concept is specified by the author, and can be varied during
the interview. Concepts described on more attributes have the advantage of seeming more realistic. It is
also true that statistical estimation is somewhat more efficient with more attributes.
However, with more attributes, the respondent must process more information and the task is more
difficult. Experimental evidence indicates that a tradeoff occurs: as the number of attributes in the
concepts is increased, respondents are more likely to become confused. It appears best to start with only
two attributes. Most respondents can handle three attributes after they've become familiar with the task.
Preliminary evidence suggests that beyond three attributes, gains in efficiency are usually offset by
respondent confusion due to task difficulty.
The computer starts with a crude set of estimates for the respondent's part worths, and updates them
following each submitted page (you can specify how many pairs questions will be completed per page).
The crude estimates are constructed from the respondent's preference ranking or rating for levels, and
ratings of importance of attributes. Each pairs question is chosen by the computer to provide the most
incremental information, taking into account what is already known about this respondent's part worths.
The interview continues in this manner until a termination criterion (specified by the author) is satisfied.
Every time the respondent completes a page of pairs question(s), the estimate of the respondent’s part
worths is updated. Updating the part worths improves the quality of subsequent pairs questions. We
strongly encourage you to place page breaks often throughout the pairs questions. If you can count on
fast load times (respondents will be using fast web connections), we suggest page breaks after each
pairs question.
4) Calibrating Concepts (Optional Section)
(The “ACA Calibration” question type.)
Finally, the computer composes a series of "calibrating concepts" using those attributes determined to be
most important. These concepts are chosen to occupy the entire range from very unattractive to very
attractive for the respondent. The respondent is asked a "likelihood of buying" question about each.
We first present the concept we expect the respondent to like least among all possible concepts, and the
second is the one we expect to be liked best. Those two concepts establish a frame of reference. The
remaining concepts are selected to have intermediate levels of attractiveness.
This information can be used to calibrate the part worth utilities obtained in the earlier part of the interview
for use in Purchase Likelihood simulations during analysis. Conjoint part worths are normally determined
only to within an arbitrary linear transformation; one can add any constant to all the values for any
attribute and multiply all part worths by any positive constant. The purpose of this section is to scale the
part worths non-arbitrarily, so that sums of part worths for these concepts are approximately equal to logit
transforms of the respondent's likelihood percentages.
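(For example, a stated purchase likelihood of 80% corresponds to a logit of ln(0.80/0.20), or about 1.39.)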
The screen format is:
If you plan only to conduct share of preference simulations, and particularly if you are using the ACA/HB
system for hierarchical Bayes estimation, there is no need to include calibration concepts in your ACA
survey. See Estimating ACA/Web Utilities for more information on the use of calibrating concepts.
Setting Page Breaks in ACA/Web Surveys
ACA/Web lets you include multiple questions per page. If the page cannot fit on the respondent’s screen,
a scroll bar is provided for respondents to scroll down the page. At the bottom of each page is a “Submit”
button that respondents click when they have completed the current page. Page breaks must occur
between each of the major ACA/Web sections:
Rating
Importance
Pairs
Calibration Concepts
You may include additional page breaks within each section if you think it is more convenient for your
respondents and provides a more desirable overall layout. We suggest you include page breaks within
the Pairs section so that the utilities can be updated and the subsequent pairs have an opportunity to
become more customized and better balanced. The number of pairs questions to include per page
represents a tradeoff between the benefits of updating versus the time required to process and receive a
new page of pairs questions over the respondent’s Web connection.
ii.3.1 Setting ACA Control Parameters
You control the way the ACA questionnaire is implemented using the Specify ACA Interview Settings
dialog, accessed by clicking Compose | Conjoint Settings | ACA Settings.
This dialog provides the following options:
Maximum Attributes for Pairs Section
This controls how many attributes will be carried forward to the pairs questions. If you specify as many
attributes as are in your study, all attributes will be used within the pairs questions. If you specify a
number n which is less than your total number of attributes, only the n most important attributes (as rated
by the respondent in the Importance question) will be used in the Pairs section. You can take as many as
20 attributes forward to the Pairs section. We generally recommend taking all attributes (up to 20)
forward to pairs questions.
Number of Pairs Questions (0-50)
This number controls how many total conjoint pairs are to be asked. We suggest that you ask a number
of pairs equal to 3(N - n - 1) - N, where:
N = total number of levels taken into the Pairs section
n = total number of attributes taken into the Pairs section
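For example, if six attributes with four levels each are taken into the Pairs section, then N = 24 and
n = 6, and the suggested number of pairs is 3(24 - 6 - 1) - 24 = 51 - 24 = 27.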
There are many times when the number of pairs suggested by the above formula is more than
respondents can manage. (Most ACA/Web surveys will have between 15 and 30 pairs questions.) We
suggest you determine how many pairs questions respondents can reliably answer, and not try to ask
more pairs than is prudent.
There are also instances with very small ACA designs in which the above formula leads to very few
suggested pairs (sometimes six or fewer). In that case, we recommend that you increase the number
of pairs questions to a number that your respondents can reasonably answer.
Number of Pairs Questions at Each Stage
This value controls how many pairs questions are asked before the next degree of complexity is
implemented. For example, if you want the first 10 pairs questions to show two attributes at a time, and
the next 10 pairs to show three at a time, you should specify “10.”
Number of Attributes in Each Pairs Question in the First Stage (2-5)
This value specifies how many attributes will be presented at a time throughout the first stage. We
recommend showing two attributes at a time in the first few pairs.
Number of Attributes in Each Pairs Question in the Last Stage (2-5)
Controls the maximum complexity (number of attributes shown) in conjoint pairs questions. We generally
recommend that three be the maximum number of attributes in the last pairs. However, if attribute text is
particularly short and you feel that respondents would not become overly confused or fatigued, you may
wish to display four or five attributes in the last pairs.
Number of Calibrating Concepts (0,3-8)
Controls how many calibrating concept questions will be shown. If you want to use the Purchase
Likelihood model during analysis, you should include at least three Calibrating Concepts. We
recommend using at least five if you plan to include this section. If you plan to use ACA/HB to compute
part worths (hierarchical Bayes) and if you do not need to run the purchase likelihood model, you should
probably not include a Calibration Concepts section.
The total number of calibration concepts cannot exceed the number of attributes (or the number of
attributes in each concept--see below) plus one.
Number of Attributes in Each Concept (2-8)
Specifies how many attributes are to appear within each Calibration Concept question. Only the most
important attributes are included (as rated in the Importances section). We recommend including no
more than about six attributes.
If you have specified prohibited pairs, there are situations in which fewer attributes than the number you
specify are actually displayed within the Calibration Concept section.
ACA Alternating Colors
If your questionnaire includes ACA (Adaptive Conjoint Analysis) questions, you can specify alternating
colors that can make the ACA questions easier to read for the respondent. With ACA questions that
show multiple levels within the same attribute, the background for each level text area is painted using the
alternating colors you specify. With conjoint pairs and calibration concepts, the background of the
product concept areas takes on alternating colors. If you don’t want these colors to alternate, choose the
same color for the two alternating colors.
ii.4.1 Is ACA the Appropriate Technique?
Many methods are available for collecting and analyzing conjoint data, and the researcher contemplating
a conjoint study must choose among them. We at Sawtooth Software have had many years of direct
experience with these methods, as well as the benefit of many conversations with users of our own and
other software. Based on that experience, we offer the following suggestions:
The Full Profile Method was the original conjoint method introduced to the marketing research
community, and it remains a standard. Green and Srinivasan (1990) recommend use of the full profile
method when the number of attributes is six or fewer. We agree that six is an appropriate cutoff number.
We think respondents are likely to become overloaded and confused when confronted by large numbers
of lengthy profiles. Our experience is that, when there are more than six attributes, and pricing research
is not the goal, ACA works better. We also think the weight of evidence shows that ACA works at least as
well as full profile when there are fewer than six attributes (for example, see Huber et al., 1993) and
pricing research is not the goal, although with few attributes ACA has no compelling advantage.
The ACA System was developed specifically for situations where there are many attributes and levels.
Most of ACA's questions present only small subsets of attributes, so questions do not necessarily become
more complex when there are many attributes in the study. With more than six attributes, we think ACA is
likely to be the more appropriate method when pricing research isn’t the goal.
Like most full profile applications, ACA is a "main effects only" model, and assumes there are no
interactions among attributes. Many conjoint practitioners agree that one must remain alert for the
possibility of interactions, but that it is usually possible to choose attributes so that interactions will not
present severe problems. Like other conjoint methods, ACA can deal with interactions in a limited way by
defining composite variables. For example, we could deal with an interaction between car color and body
style by cross-classifying the levels:
Red Convertible
Black Convertible
Red Sedan
Black Sedan
However, if the attributes in question have many levels, or if an attribute (such as price, for example) is
suspected of having interactions with many others, then composite attributes will not be enough. In that
case too many parameters must be estimated to permit analysis at the individual level, and the most
common solution is to evaluate interactions by pooling data from many respondents. ACA has been
shown to have weaknesses in pricing research, where it often underestimates the importance of price.
We generally recommend that either CVA or CBC (described below) be used if pricing research is the
main purpose of your study. Some researchers include price as an attribute in ACA, but adjust the price
part worths using information gained from a secondary full-profile conjoint exercise or series of holdout
tasks.
The CVA System is conjoint software first introduced by Sawtooth Software in 1990 for traditional full-profile conjoint analysis. It is a good technique when the number of attributes is about six or fewer. It
often does a better job than ACA in pricing research. CVA uses a paired-comparison interview that can
be administered either by computer (PC-based if using Ci3), over the Web (the CVA Internet Module) or
with paper and pencil.
The CBC System is conjoint software first introduced by Sawtooth Software in 1993 to administer and
analyze "Choice based Conjoint" studies. CBC conducts a paper- or computer-administered interview in
which the respondent sees a series of choice tasks. Each task displays several concepts and asks which
the respondent would choose from that set. Optionally, a "would choose none" option may be offered.
Attribute levels in each concept are varied in such a way that values similar to conjoint part worths can be
estimated for each attribute level. Analysis can be done at the group level with multinomial logit analysis,
which is included with the base CBC system. Additionally, latent segment-based part worths can be
generated using Latent Class. Individual-level part worths can be estimated from choice data using ICE
or hierarchical Bayes (CBC/HB).
We think CBC provides three potential advantages over other conjoint methods:
1. It presents tasks that may be more "realistic" than other conjoint methods. In the real world,
buyers express their preferences by choosing one product or another, rather than by rating or
ranking them.
2. By including the opportunity for the respondent to choose "None of these," CBC may be able
to deal more directly with questions relating to volume (rather than just share). By contrast, ACA
models volume using "Likelihood of Purchase" simulations, based on responses to Calibrating
Concepts.
3. Because CBC analysis can be done for groups rather than for individual respondents,
sufficient information is available to measure interactions as well as main effects.
However, CBC has the disadvantage of being an inefficient way of collecting data. The respondent must
read and process several full profile concepts before giving each answer. To keep the respondent from
becoming overloaded and confused, we suggest using no more than about six attributes with CBC’s
standard full-profile questionnaires. CBC should be considered when there are few attributes and when
interactions are likely to occur, both of which are often true of pricing studies. With the newer “partial-profile” CBC questionnaires (supported by the CBC/Web Advanced Design Module), CBC researchers
have found a way to study more attributes without overloading respondents. However, the resulting part
worths usually have less stability at the individual level than with ACA.
ii.5.1 Specify Attributes and Levels
Depending on your conjoint analysis license, you will be able to enter up to either 10 or 30 attributes in
your study. Up to 15 levels can be entered per attribute (unless you own the CBC/Web Advanced Design
Module, in which case up to 100 levels can be specified for an attribute).
An attribute is a characteristic of a product (e.g. color) which can take on various levels (e.g., red, yellow,
blue). Every attribute must have at least two levels. The underlying theory of conjoint analysis holds that a
buyer places a certain part worth (or utility value) on each attribute level, and that the overall utility of any
product is obtained by summing the part worth values of its specific attribute levels.
In conjoint experiments, we show respondents product concepts that are described by different
combinations of attribute levels and ask them to somehow express their preferences for those concepts.
One way of thinking about conjoint analysis is that we are conducting a designed experiment for each
respondent to see how his/her preferences vary as we change attribute levels.
By observing how respondents evaluate products in response to changes in the underlying attribute
levels, we can estimate the impact (part worth) each attribute level has upon overall product preference.
Once we learn respondents’ preferences for the various attribute levels, we can predict how buyers might
respond to any potential combination of levels in our study, whether or not that actual product was ever
displayed during the interview.
Ranking (Applies only to ACA-type conjoint studies)
For each attribute, you can specify whether it has known (a priori) order. If you know ahead of time that
respondents prefer low prices to high prices (all else equal) or fast speeds to slow speeds, you can avoid
having to ask what might seem to be an obvious question (the ACA Rating question for this attribute).
We strongly encourage you to specify an order for attributes that have known rational order, like price and
speed. When you specify a ranking order “worst to best” or “best to worst,” you must be certain that the
rational respondent would agree with you, and that you have specified the correct order.
For example, the levels:
Level 1: $5
Level 2: $10
Level 3: $15
are arranged from “Best to Worst,” meaning, the best level is first and the worst level is last. It would be a
severe mistake to accidentally put “Worst to Best” for this attribute, as it will make the utility estimation
incorrect for this attribute. If you make this mistake, you cannot simply recover by reversing the order of
the utilities. The priors information will conflict with the pairs information, thus dampening and usually
reversing the true effect of price.
Guidelines
In generating attributes and levels, consider the following guidelines:
1. Attributes should be independent. This is especially important for ACA studies. Non-independent
attributes can be a problem with partial-profile conjoint and self-explicated models. In ACA, which
involves both, attributes that have overlap in meaning can get “double counted,” resulting in too much
inferred influence on product choice. It is therefore important to economize; including attributes with
overlapping meanings is wasteful and can lead to incorrect answers.
Furthermore, levels for related attributes may not combine naturally with one another. Though it can lead
to more realistic interviews, it is usually detrimental (and sometimes fatal) to prohibit levels from occurring
with others. (There are some intriguing exceptions to this that we will not discuss in this documentation.)
2. Levels within each attribute should be mutually exclusive. This point becomes clear when you specify
products using the market simulator (during the analysis phase) and are forced to choose only a single
level from each attribute.
Consider the following attributes for a study on optional automobile features:
Optional Features:
Sunroof
GPS (Global Positioning System)
Extended warranty
This formulation doesn’t permit simulating preference for a car that has both a Sunroof and a GPS.
Similarly, we could not simulate preference for an automobile that had none of these features. There are
two ways to resolve this quandary:
a) Create an attribute with all potential combinations of these features. This results in an eight-level attribute, if you include the option that none of these features is available.
b) Formulate three separate attributes each with two levels: (No Sunroof, Sunroof), (No GPS,
GPS), (No Warranty, Warranty).
Option (a) adds seven parameters to the model and forces the measurement of an explicit three-way
interaction. With the more complex model definition, we can investigate whether there are diminishing
returns by bundling the features. By splitting the options into three distinct binary attributes (Option b),
only three parameters are added to the model; however, interactions are not measured.
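To make the arithmetic concrete, here is a minimal Python sketch (purely illustrative; it is not part of
SSI Web) that enumerates the combinations behind option (a):

    from itertools import product

    # Option (a): one composite attribute built from all combinations of the
    # three binary features, including "none of these features."
    features = ["Sunroof", "GPS", "Extended warranty"]
    levels = []
    for combo in product([False, True], repeat=len(features)):
        present = [f for f, has in zip(features, combo) if has]
        levels.append(", ".join(present) if present else "None of these features")

    print(len(levels))    # 8 levels, contributing 8 - 1 = 7 parameters
    print(len(features))  # option (b): three binary attributes, 3 parameters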
3. Attribute levels should cover the full range of possibilities for existing products as well as products that
may not yet exist, but that you want to investigate. Although the market simulator allows you to
extrapolate and interpolate, only linear interpolation and extrapolation are possible. While
interpolation is likely to produce acceptable results, extrapolation is prone to error and should be avoided.
One way to ensure that you are including the appropriate levels and ranges is to ask your client to specify
ahead of time the market simulations to be run during the analysis phase of your study. That exercise
can often reveal weaknesses in your attribute specifications.
4. Prohibitions, if at all possible, should be avoided. Specifying unnecessary or excessive prohibitions is
one of the most common mistakes. The problem usually begins when either the analyst (or the analyst’s
client) notices that some product combinations displayed during the interview are not realistic, given what
currently exists in the market. Sometimes a product is shown with all the best features at the lowest
price; or two attribute levels that would not naturally occur in the real world are paired together. The
inclination is simply to prohibit such combinations. We urge you to exercise restraint when considering
prohibiting pairs.
Too many prohibitions, in the best case, can lead to imprecise utility estimation and, in the worst case,
unresolvable (confounded) effects and the complete inability to calculate stable utilities. It is better to
prompt respondents that they will see combinations during the interview that are not yet available in the
market or that seem unlikely. You can urge respondents to answer as if these products were actually
available today.
There are other strategies for dealing with prohibitions. Consider the example below with brands of soda
and package types:
Brand:
Sawtooth Spritz
Kong Kola
Splut
Package Type:
2-liter bottle
6-pack of 12-oz cans
Suppose that Splut was only available in 6-packs of cans. Furthermore, you are displaying actual
pictures of the products, and thus can only display actual (not potential) products. Rather than define a
prohibition between Splut and the 2-liter bottle, it would make more sense to combine these two attributes
as a single attribute with five levels:
Sawtooth Spritz in a 2-liter bottle
Sawtooth Spritz in a 6-pack of 12-oz cans
Kong Kola in a 2-liter bottle
Kong Kola in a 6-pack of 12-oz cans
Splut in a 6-pack of 12-oz cans
Under this strategy, no prohibitions are required.
CBC is even more sensitive than ACA to excessive attribute prohibitions. ACA tends to be more robust in
the face of attribute prohibitions given that a significant portion of the information used to estimate part
worth utilities comes from the priors section, which is not affected by attribute level prohibitions.
5. The number of levels you choose to define an attribute can have a significant bearing on the results.
The first concern has been called the “Number-of-Levels Effect.” All else equal, attributes defined on
more levels tend to get more importance. Thankfully, the Number-of-Levels Effect is less problematic
in ACA than full-profile conjoint methods. Even so, we suggest you at least approximately balance the
number of levels across attributes.
The second concern is that you limit the number of levels on which quantitative attributes are described.
We suggest not including more than about five levels to describe attributes such as price or speed. It’s
usually better to have more data at each price point than to have thinner measurements at more price
points. Measuring too many points along a quantitative function can result in troublesome reversals. If
you cover the entire range of interest with fewer levels, you can interpolate between levels within the
market simulator to get finer granularity if needed.
There are additional considerations for CBC studies. If you plan to model quantitative attributes with
linear terms, which is an advanced option in our Latent Class and CBC/HB add-on modules for CBC
studies, you may decide to include more than five levels. With CBC, bear in mind that the interaction
between two 9-level attributes involves 64 parameters to be estimated, but the interaction between two 5level attributes requires only 16.
6. Attributes that cannot be adequately described in words should be represented in multimedia. But if
attributes do not require multimedia to adequately communicate their properties, it would probably be a
mistake to make them multimedia. Though the interview might appear more attractive, it might bias the
results in favor of multimedia attributes.
7. In our experience, respondents have a difficult time dealing with more than about six attributes in full-profile conjoint methods like CBC. When faced with too much information, respondents often resort to
simplification strategies to deal with the difficulty of the task. Unless respondents employ the same sort of
simplification strategies when making real-world decisions, CBC results may place too much emphasis on
the few most important features.
Traditional CBC will work well for relatively small conjoint designs, and we generally suggest other
methods (such as ACA) for larger problems. Even if you decide to use ACA, you may still find a role for
CBC as the second half of a “dual conjoint” approach within the same questionnaire. ACA can be used to
cover the full breadth of features, whereas CBC can be used to study a subset of the same attributes,
such as brand, price and a key performance variable or bundle.
You may also consider the use of partial-profile designs using CBC/Web’s Advanced Design Module.
Like ACA, Partial-Profile CBC designs show only a subset of the total attribute set in any one tradeoff
task. This is a relatively new technique, but a body of research is building which suggests it can be
effective in certain situations.
ii.5.2 Attribute Text
This dialog lets you input attribute text for conjoint attributes. An example of attribute text is “brand,”
where the level text is “Coke,” “Pepsi” and “Sprite.” The default text for an ACA/Web study displays the
attribute text within the Ratings question, using the system variable “%attribute”.
You can include just text, or HTML elements (to display graphics, provide hyperlinks, etc.) as attribute
text.
ii.5.3 Level Text
This dialog lets you input level text for conjoint attributes. An example of attribute text is “brand”, where
the level text is “Coke,” “Pepsi” and “Sprite.” Level text is used in all ACA questions: Ratings,
Importances, Pairs and Calibration Concepts.
The text for the most preferred and least preferred levels can be displayed dynamically within an
importance question using the “%best” and “%worst” system variables.
You can include just text, or HTML elements (to display graphics, provide hyperlinks, etc.) as level text.
For more information about specifying attribute levels, see Defining Attributes and Levels.
ii.5.4 Specify Prohibited Pairs
This dialog lets you specify which combinations of attribute levels (or attributes altogether) may not be
shown together in pairs and calibration concept questions.
A Warning
Prohibitions, if at all possible, should be avoided. Specifying unnecessary or excessive prohibitions is
one of the most common mistakes. The problem usually begins when either the analyst (or the analyst’s
client) notices that some product combinations displayed during the interview are not realistic, given what
currently exists in the market. Sometimes a product is shown with all the best features at the lowest
price; or two attribute levels that would not naturally occur in the real world are paired together. The
inclination is simply to prohibit such combinations. We urge you to exercise restraint when considering
prohibiting pairs.
Too many prohibitions can lead to imprecise part worth estimation. It is better to prompt respondents that
they will see combinations during the interview that are not yet available in the market or that seem
unlikely. You can urge respondents to answer as if these products were actually available today.
How Many Prohibitions Can I Safely Add?
A common question that users ask is “how many prohibitions can I specify without seriously damaging the
results of my study?” This cannot be answered without more information, because it is not the sheer
number of prohibitions but their specific pattern that determines the degree of correlation among the
attribute levels in the design matrix, and thus the design efficiency.
Assume that the researcher wants to specify 3 prohibitions between a 2-level attribute and a 3-level
attribute. There are just 6 possible combinations that can occur when combining those two attributes. If
the researcher prohibits 3 of those combinations from occurring, this eliminates half of the possible
combinations between those attributes. There would probably be less damage to the efficiency of the
design if three total prohibitions were allocated across three separate attribute combinations (one per pair
of attributes).
As a side note, the researcher in this instance may decide to specify the three non-prohibited
combinations of those two attributes as a single attribute, rather than as two separate (and not
independent) attributes. The main drawback of this approach is that after combining these two attributes,
the researcher will not be able to compute the attribute importance or the relative part worths of each
attribute independently.
ii.6.1 ACA Rating Questions
(Question Text Tab)
For each attribute that has an unknown preference order (no a priori ranking), you must ask a ratings
question. These are labeled ACARAT1 through ACARATn, where the integer corresponds with the
attribute number.
This question is usually omitted for attributes (such as price or quality) for which the respondent's
preferences should be obvious. (When you specify the attributes and levels, you can specify that the
order of preference for levels is “best to worst” or “worst to best” and the ratings question is skipped for
such attributes.)
For all other attributes (for which we do not know ahead of time the rational ranking), the screen may look
like the following:
(The rating scale can be defined from 2 to 9 points. We suggest using at least 5 scale points. In any
case, it is probably not wise to use fewer scale points than the number of levels in any one attribute for
which the Rating question is asked.)
The respondent is required to check one radio button per attribute level. If a level is skipped, SSI Web
will prompt the respondent to complete the question prior to moving to the next page in the interview.
Alternating Colors to Improve Readability
Every other attribute level and the row dedicated to it can be given an alternating background color. You
do this by specifying ACA Alternating Colors under the ACA Settings dialog.
Customizing the Text for each ACARAT question
The text and other settings must remain the same across all ACA Ratings questions, with the exception of
the attribute label that can be dynamically included using the %attribute variable. When you run the
survey over the Web, the text you specified for the attribute under the Specify Attributes and Levels
screen will be substituted for %attribute. This means that if you want to customize the text placed at the
beginning of the ACA Ratings question, you have two options:
1) Type the full desired text to be displayed in the Scale Header as the attribute label within the Specify
Attributes and Levels dialog. Use the %attribute variable to dynamically display that text for each
ACARAT question.
2) Do not display a Scale Header at all, but insert a Text/HTML Filler question before each ACARAT
question.
The ACARAT?_? System Variable
You can refer to a specific rating of an attribute level later on in the questionnaire if you like, using
“underscore notation.” For example, %ACARAT3_4% contains the rating that the respondent provided to
the fourth level of the third attribute.
ii.6.2 ACA Rating Questions
(Question Format Tab)
This dialog lets you control the layout of the ACA Rating question.
Pixels: Many of the settings use “pixels” as a measure of width or height. A pixel is a tiny unit, whose
relative size depends on the resolution of the respondent’s monitor. A common monitor setting is 800 x
600 pixels, which means that the screen area is 800 pixels wide and 600 pixels tall. Thus, with an 800 x
600 resolution setting, a pixel is about 1/800th the width of the screen.
Question Format
Question Width (pixels): This determines the width of the entire question. We suggest you
choose a question width that will work well on most respondents’ screens, so that they will not
have to scroll to the right to see the full question. Most respondents use a resolution of at least
800 x 600. We have chosen 635 as the default so that even those using a 640 x 480 resolution
can see the entire question (assuming their browser window is maximized).
Width of Labels Column (pixels): The Labels Column is the area in the left-most part of the
question where the attribute level text is displayed.
Space Between Cells (pixels): This is the “cellspacing” attribute in HTML. SSI Web questions are
made up of many cells within tables. Generally, questions look best with no or very little space
between those cells. An interesting effect may be accomplished by specifying just a 1- or 2-pixel
border, as the small separation between the cells takes on the background color for your survey,
or if specified, the background color for this question.
Space Between Border and Cell Contents (pixels): This is the “cellpadding” attribute in HTML.
Generally, a cellpadding between 0 and 15 pixels works well.
Question Alignment: This determines the relative position of the question on the screen. You
may choose Left, Center, Right, or Tabbed. If you choose Tabbed, the question is indented the
number of pixels specified under Global Settings... on the Format Tab in the General Indent
field.
Background Color
The background color is used in the header, footer, and anchor text areas at the top of the
question, and between cells if you specify Space Between Cells greater than zero pixels.
ii.6.3 ACA Rating Anchor Text
This dialog lets you specify the number of scale points and the anchor text to be used for ACARAT
questions. The Anchor Text is the wording (or the graphic) placed directly above each radio button on the
scale. Be careful not to include too much text or a graphic that is too large, or the buttons may not appear
equally spaced across the width of the question.
The rating scale can be defined from 2 to 9 points. We suggest using at least 5 scale points. In any
case, it is probably not wise to use fewer scale points than the number of levels in any one attribute for
which the Rating question is asked.
ii.6.4 ACA Importance Questions
(Question Text Tab)
Having learned preferences for the levels within each attribute, we next determine the relative importance
of each attribute to this respondent. These are determined using required questions ACAIMP1 through
ACAIMPn, where n is the number of attributes in your study. The importance question is a required
question type for every attribute in your ACA study.
This information is useful in two ways. First, it may allow us to eliminate some attributes from further
evaluation if the interview would otherwise be too long. Second, it provides information upon which to
base initial estimates of this respondent's part worths.
As a matter of fundamental philosophy, we do not ask about attribute importance with questions such as
"How important is price?" The importance of an attribute is clearly dependent on magnitudes of
differences among the levels being considered. For example, if all airline tickets from City A to City B
were to cost between $100 and $101, then price couldn't be important in deciding which airline to select.
However, if cost varied from $10 to $1000, then price would probably be seen as very important.
Our questioning is based on differences between those levels the respondent would like best and least,
as illustrated:
(The rating scale can be defined from 2 to 9 points. We suggest using at least 4 scale points.)
In every Importance question, the best and worst levels (determined from the ACA ratings question that
preceded this section) are contrasted. The text is dynamically inserted into the question when the
survey runs “live” over the Web server using the system variables %best and %worst.
It is worth mentioning that there are different ways to phrase the “importance” question, and indeed
referring to attribute “importance” is sometimes difficult for some respondents to comprehend. How the
wording of this question should be phrased depends on the audience. Other possibilities with the
ACA/Web software include:
The “Regrets” Format:
The “Unexpected Windfall” Format:
At this point we have learned which attributes are most important for this respondent and which levels are
preferred. From now on, the interview is focused on those most important attributes and combinations of
the levels that imply the most difficult trade-offs.
Customizing the Text for each ACAIMP question
The text and other settings must remain the same across all ACA Importance questions, with the
exception of the attribute label that can be dynamically included using the %attribute variable and the best
and worst levels, which are dynamically included using the %best and %worst variables. When you run
the survey over the Web, the text you specified for the attribute and its levels under the Specify Attributes
and Levels screen will be substituted for %attribute, %best and %worst. This means that if you want to customize the
text placed at the beginning of the ACA Importance question, you have two options:
1) Type the full desired text to be displayed in the Scale Header as the attribute label within the Specify
Attributes and Levels dialog. Use the %attribute variable to dynamically display that text for each
ACAIMP question.
2) Do not display a Scale Header at all, but insert a Text/HTML Filler question before each ACAIMP
question.
Alternating Colors to Improve Readability
Every other attribute level and the row dedicated to it can be given an alternating background color. You
do this by specifying ACA Alternating Colors under the ACA Settings dialog.
The %best, %worst and ACAIMP_? System Variables
You can refer to a specific importance score later on in the questionnaire if you like, using “underscore
notation.” For example, %ACAIMP_4% contains the importance score that the respondent provided for
the fourth attribute.
Within an importance question, the text for the previously rated best and worst levels can be returned
using the variables %best and %worst.
ii.6.5 ACA Importance Questions
(Question Format Tab)
This dialog lets you control the layout of the ACA Importance question.
Pixels: Many of the settings use “pixels” as a measure of width or height. A pixel is a tiny unit, whose
relative size depends on the resolution of the respondent’s monitor. A common monitor setting is 800 x
600 pixels, which means that the screen area is 800 pixels wide and 600 pixels tall. Thus, with an 800 x
600 resolution setting, a pixel is about 1/800th the width of the screen.
Question Format
Question Width (pixels): This determines the width of the entire question. We suggest you
choose a question width that will work well on most respondents’ screens, so that they will not
have to scroll to the right to see the full question. Most respondents use a resolution of at least
800 x 600. We have chosen 635 as the default so that even those using a 640 x 480 resolution
can see the entire question (assuming their browser window is maximized).
Width of Labels Column (pixels): The Labels Column is the area in the left-most part of the
question where the attribute level text is displayed.
Space Between Cells (pixels): This is the “cellspacing” attribute in HTML. SSI Web questions are
made up of many cells within tables. Generally, questions look best with no or very little space
between those cells. An interesting effect may be accomplished by specifying just a 1- or 2-pixel
border, as the small separation between the cells takes on the background color for your survey,
or if specified, the background color for this question.
Space Between Border and Cell Contents (pixels): This is the “cellpadding” attribute in HTML.
Generally, a cellpadding between 0 and 15 pixels works well.
Question Alignment: This determines the relative position of the question on the screen. You
may choose Left, Center, Right, or Tabbed. If you choose Tabbed, the question is indented the
number of pixels specified under Global Settings... on the Format Tab in the General Indent
field.
Background Color
The background color is used in the header, footer, and anchor text areas at the top of the
question, and between cells if you specify Space Between Cells greater than zero pixels.
ii.6.6 ACA Importance Anchor Text
This dialog lets you specify the number of scale points and the anchor text to be used for ACAIMP
questions. The Anchor Text is the wording (or the graphic) placed directly above each radio button on the
scale. Be careful not to include too much text or a graphic that is too large, or the buttons may not appear
equally spaced across the width of the question.
The rating scale can be defined from 2 to 9 points. We suggest using at least 4 scale points.
ii.6.7 ACA Pairs Questions
(Question Text Tab)
The Pairs section elicits the conjoint tradeoffs. According to the number of pairs questions (n) you
specified on the Specify ACA Interview Parameters screen, questions ACAPAR1 through ACAPARn
are included in your questionnaire.
In each case the respondent is shown two product concepts. The respondent is asked which is preferred,
and also to indicate strength of preference. The example below presents concepts differing on only two
attributes. Although concepts may be specified on up to five attributes, simple concepts like these
present an easy task for the respondent, and are a useful way to begin this section of the interview.
(The rating scale can be defined from 2 to 9 points. We suggest using at least 7 scale points.)
Or Text: You can add a space between the product concepts that is just blank, or a word (such as “OR”) if
you would like. By default, the text “&nbsp;” is specified, which is the HTML entity that tells your browser
to print a non-breaking space character. You can delete this and type a word such as “OR” instead. If
you delete the text altogether from the “Or Text” field, the separating cell is removed.
Every time the respondent completes a page of pairs question(s), the estimate of the respondent’s part
worths is updated. Updating the part worths improves the quality of subsequent pairs questions. We
strongly encourage you to place page breaks often throughout the pairs questions. If you can count on
fast load times (respondents will be using fast web connections), we suggest page breaks after each
pairs question.
Alternating Colors to Improve Readability
Every other product concept can be given an alternating background color. You do this by specifying
ACA Alternating Colors under the ACA Settings dialog.
ii.6.8 ACA Pairs Questions
(Question Format Tab)
This dialog lets you control the layout of the ACA Pairs question.
Pixels: Many of the settings use “pixels” as a measure of width or height. A pixel is a tiny unit, whose
relative size depends on the resolution of the respondent’s monitor. A common monitor setting is 800 x
600 pixels, which means that the screen area is 800 pixels wide and 600 pixels tall. Thus, with an 800 x
600 resolution setting, a pixel is about 1/800th the width of the screen.
Question Format
Question Width (pixels): This determines the width of the entire question. We suggest you
choose a question width that will work well on most respondents’ screens, so that they will not
have to scroll to the right to see the full question. Most respondents use a resolution of at least
800 x 600. We have chosen 635 as the default so that even those using a 640 x 480 resolution
can see the entire question (assuming their browser window is maximized).
Question Border: This is the “border” attribute for a table in HTML. The border is placed on the
outside of the product concepts and a narrow line separates the attribute levels. If you use a
border, a width of 1 to 7 pixels is common.
Space Between Cells (pixels): This is the “cellspacing” attribute in HTML. SSI Web questions are
made up of many cells within tables. Generally, questions look best with no or very little space
between those cells. An interesting effect may be accomplished by specifying just a 1- or 2-pixel
border, as the small separation between the cells takes on the background color for your survey,
or if specified, the background color for this question.
Space Between Border and Cell Contents (pixels): This is the “cellpadding” attribute in HTML.
Generally, a cellpadding between 0 and 15 pixels works well.
Question Alignment: This determines the relative position of the question on the screen. You
may choose Left, Center, Right, or Tabbed. If you choose Tabbed, the question is indented the
number of pixels specified under Global Settings... on the Format Tab in the General Indent
field.
Show Horizontal Line: This determines whether a horizontal line “<HR>” is used directly above
and below the product concepts.
Background Color
The background color is used in the header, footer, and anchor text areas at the top of the
question, and between cells if you specify Space Between Cells greater than zero pixels.
ii.6.9 ACA Pairs Anchor Text
This dialog lets you specify the number of scale points and the anchor text to be used for ACAPAR
questions. The Anchor Text is the wording (or the graphic) placed directly above each radio button on the
scale. Be careful not to include too much text or a graphic that is too large, or the buttons may not appear
equally spaced across the width of the question.
The rating scale can be defined from 2 to 9 points. We suggest using at least 7 scale points.
ii.6.10 ACA Calibration Questions
(Question Text Tab)
(Optional Questions)
The computer composes a series of "calibrating concepts" using those attributes determined to be most
important. These concepts are chosen to occupy the entire range from very unattractive to very attractive
for the respondent. The respondent is asked a "likelihood of buying" question about each.
According to the number of calibration concept questions (n) you specified on the Specify
ACA Interview Parameters screen, n calibration concepts are included in your questionnaire.
The calibration concepts can be displayed in “vertical” mode as separate questions (one on top of the
other, on a single or across multiple pages) or in “horizontal” mode as a unified series of concepts
arranged horizontally on a single page.
We first present the concept we expect the respondent to like least among all possible concepts, and the
second is the one we expect to be liked best. Those two concepts establish a frame of reference. The
remaining concepts are selected to have intermediate levels of attractiveness.
This information can be used to calibrate the part worth utilities obtained in the earlier part of the interview
for use in Purchase Likelihood simulations during analysis. Conjoint part worths are normally determined
only to within an arbitrary linear transformation; one can add any constant to all the values for any
attribute and multiply all part worths by any positive constant. The purpose of this section is to scale the
part worths non-arbitrarily, so that sums of part worths for these concepts are approximately equal to logit
transforms of the respondent's likelihood percentages.
The horizontal screen format is:
When using the horizontal format, the Calibration Concept questions must be displayed on the same
page. If you choose the vertical format (one concept on top of the other), you can format them on a
single or across multiple pages. You define those formats on the Question Format tab in the Question
Layout field.
If you plan only to conduct share of preference simulations, and particularly if you are using the ACA/HB
system for hierarchical Bayes estimation, you may consider not including calibration concepts in your
ACA survey. See Estimating ACA/Web Utilities for more information on the use of calibrating concepts.
ii.6.11 ACA Calibration Questions
(Question Format)
This dialog lets you control the layout of the ACA Calibration Concepts question.
Pixels: Many of the settings use “pixels” as a measure of width or height. A pixel is a tiny unit, whose
relative size depends on the resolution of the respondent’s monitor. A common monitor setting is 800 x
600 pixels, which means that the screen area is 800 pixels wide and 600 pixels tall. Thus, with an 800 x
600 resolution setting, a pixel is about 1/800th the width of the screen.
Question Format
Question Layout: Lets you specify whether to show the calibration concepts as a unified question
on a single page (Horizontal layout) or one on top of the other (on a single or multiple pages) in
Vertical format.
Question Width (pixels): This determines the width of the entire question. We suggest you
choose a question width that will work well on most respondents’ screens, so that they will not
have to scroll to the right to see the full question. Most respondents use a resolution of at least
800 x 600. We have chosen 635 as the default so that even those using a 640 x 480 resolution
can see the entire question (assuming their browser window is maximized).
Question Border: This is the “border” attribute for a table in HTML. The border is placed on the
outside of the product concepts, and a narrow line separates the attribute levels. If you use a
border, a width of 1 to 7 pixels is common.
Space Between Cells (pixels): This is the “cellspacing” attribute in HTML. SSI Web questions are
made up of many cells within tables. Generally, questions look best with no or very little space
between those cells. An interesting effect may be accomplished by specifying just a 1- or 2-pixel
border, as the small separation between the cells takes on the background color for your survey,
or if specified, the background color for this question.
Space Between Border and Cell Contents (pixels): This is the “cellpadding” attribute in HTML.
Generally, a cellpadding between 0 and 15 pixels works well.
Question Alignment: This determines the relative position of the question on the screen. You
may choose Left, Center, Right, or Tabbed. If you choose Tabbed, the question is indented the
number of pixels specified under Global Settings... on the Format Tab in the General Indent
field.
Include Margins: Places an additional margin (that assumes the background color of the
question) to the left and right of the product concepts.
Background Color
The background color is used in the header, footer, and anchor text areas at the top of the
question, and between cells if you specify Space Between Cells greater than zero pixels.
ii.6.12 ACA Calibration Settings
Numeric Response Range
These fields let you specify the minimum and maximum values to permit for Calibration Concept
responses. Respondents must provide an integer response, with no decimal portion. If you use another
response range (such as 1 to 10), the responses will be mapped back to a 0 to 100 point range for use in
calibration concept computations and in the purchase likelihood simulation model.
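For example, assuming a simple linear mapping (our illustration; see the calibration computations for
the exact treatment), a response r on a 1-to-10 scale would be rescaled as 100 x (r - 1) / (10 - 1), so a
response of 7 would map to about 67 on the 0 to 100 scale.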
ii.7.1 Calculate ACA Utilities
Before computing part worth utilities, you must first have downloaded a data file (STUDYNAMEdat.dat)
from your Web server (or accumulated the data from PCs), and placed it within your study directory,
where all other study-related files are resident. We use Ordinary Least Squares to compute part worth
utilities. The resulting part worths are saved to a STUDYNAME.utl file, which is an ASCII file with a
specific format.
Select Analysis | Calculate ACA Utilities. You are able to specify a number of options on this dialog:
Respondent Numbering Method
You can choose from five options for assigning respondent numbering. This lets you specify how the
respondent numbers (case IDs) are to be generated.
User Name is one of the passwords from the passwords module. You should only choose this if these
values were purely numeric (no letters).
Password is also one of the passwords from the passwords module. You should also only use this if the
passwords were purely numeric (no letters).
Internal Interview Numbers are those that the system assigns when respondents complete the survey
over the Web. There can be gaps in the sequence if some respondents started a survey but didn’t finish.
Sequentially Assigned Numbers (default) are integers 1 through n.
Other lets you choose another numeric variable to use for respondent numbering. This last option is
useful if you are linking surveys from another source and need to merge the data sets by a “passed-in”
variable.
Respondent Filter
All Respondents: Choose this option to export all respondents: both complete and incomplete. Data for
questions that were not completed are left blank.
Qualified/Completes Only: Choose this option to export only qualified complete data records. A survey is
considered qualified complete when respondents click the Submit (Next) button on a terminating question
that is marked with a qualified status.
Disqualified and Incompletes Only: Choose this option to export only disqualified and incomplete data
records. A survey is considered disqualified when respondents click the Submit (Next) button on a
terminating question that is marked with a disqualified status. A survey is considered incomplete if
respondents have not reached a terminating question.
Single Line Per Record
The normal format for the conjoint utility output file (STUDYNAME.utl) features multiple hard returns
within each respondent record. Our SMRT (Market Simulator) program requires that format. However,
for ease of use in reading the STUDYNAME.utl file within programs like Excel or SPSS, you may wish to
check this box, thereby formatting each respondent on a single (rather than multiple) line.
Compute
Click Compute to compute utilities and save the STUDYNAME.utl file to the current folder. A display
shows you the progress of the calculations, and provides a Cancel button if you wish to terminate the
process early.
ii.7.2 Details of ACA/Web Utility Estimation
ACA includes two major sections: self-explicated priors and conjoint pairs. Part worth utility estimates for
these two sections are determined as follows:
Prior Utilities:
If rank orders of preference are asked (not currently offered in ACA/Web v1), we convert them to relative
desirabilities by reversing them. For example, ranks of 1, 2, and 3 would be converted to values 3, 2, and
1, respectively. If desirability ratings are asked (the only method offered in ACA/Web v1), those are retained
"as is."
The average for each attribute is subtracted to center its values at zero. For example, desirability values
3, 2, and 1 would be converted to 1, 0, and -1, respectively.
The values for each attribute are scaled to have a range of unity. For example, desirability values of 1, 0,
and -1 would be converted to .5, 0, and -.5.
The importance ratings for each attribute are scaled to range from 1 to 4, and then used as multipliers for
the unit-range desirability values. Thus, if an attribute has desirabilities of .5, 0, and -.5 and an
importance of 3, we get 1.5, 0, and -1.5.
The resulting values are initial estimates of part worths, with these characteristics:
For each attribute the range of utility values is proportional to stated importance, and attribute
importances differ by at most a factor of 9 (the maximum number of scale points that can be specified for
importance questions).
Within each attribute the values have a mean of zero, and differences between values are proportional to
differences in desirability ratings or rank orders of preference.
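The arithmetic above is simple enough to sketch in a few lines. The following Python fragment is our
illustration, not SSI Web code; it assumes the importance multiplier has already been scaled to the
1-to-4 range and that not all desirability values for an attribute are equal:

    def prior_part_worths(desirabilities, importance):
        # Center the desirability values at zero.
        mean = sum(desirabilities) / len(desirabilities)
        centered = [d - mean for d in desirabilities]
        # Scale the centered values to have a range of unity.
        span = max(centered) - min(centered)
        unit_range = [c / span for c in centered]
        # Multiply by the (1-to-4 scaled) importance rating.
        return [u * importance for u in unit_range]

    # Desirabilities 3, 2, 1 with an importance of 3 reproduce the example:
    print(prior_part_worths([3, 2, 1], 3))  # [1.5, 0.0, -1.5]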
Pairs Utilities:
An independent variable matrix is constructed with as many columns as levels taken forward to the pairs
questions. If a level is displayed within the left concept, it is coded as -1; levels displayed within the right-hand concept are coded as +1. All other values in the independent variable matrix are set to 0.
A column vector is created for the dependent variable as follows: the respondents’ answers are zero-centered, where the most extreme value for the left concept is given a -4, and the most extreme value on
the right +4. Interior ratings are fit proportionally within that range.
Each pairs question contributes a row to both the independent variable matrix and dependent variable
column vector. Additionally an n x n identity matrix is appended to the independent variable matrix,
where n is the total number of levels taken forward to the pairs questions. An additional n values of 0 are
also appended to the dependent variable column vector. The resulting independent variable matrix and
dependent variable column vector each have t + n rows, where t is the number of pairs questions and n is
the total number of levels taken forward to the pairs questions. OLS estimates of the n attribute levels are
computed by regressing the dependent variable column vector on the matrix of independent variables.
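As an illustration of this construction, the following sketch (Python with the numpy library; the names are
our own, not part of the software) builds the augmented matrix and computes the OLS estimates:

    import numpy as np

    # Sketch of the pairs regression described above.
    def pairs_utilities(X_pairs, answers):
        # X_pairs: t x n matrix of -1/0/+1 codes, one row per pairs question
        # answers: t zero-centered responses in the range -4..+4
        t, n = X_pairs.shape
        X = np.vstack([X_pairs, np.eye(n)])          # append the n x n identity
        y = np.concatenate([answers, np.zeros(n)])   # append n zeros
        b, *_ = np.linalg.lstsq(X, y, rcond=None)    # OLS estimates of n levels
        return b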
Combining the Priors and Pairs Utilities:
The priors and pairs part worths are normalized to have equal sums of differences between the best and
worst levels of each attribute across all attributes. (Note that the procedures described above
automatically result in zero-centered part worths within attribute for both sets of part worths.) The prior
part worth utilities for levels also included in the pairs questions are multiplied by n/(n+t), where n is the
total number of levels used in the Pairs section, and t is the number of pairs questions answered by the
respondent. Any element in the priors that was not included in the Pairs section is not modified. The
pairs utilities are multiplied by t/(n+t). The two vectors of part worths (after multiplication by the weights
specified above) are added together. These are the final part worths, prior to calibration.
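In code form, the weighting and addition might look as follows (an illustrative sketch only; the inputs are
assumed to be already normalized as described above):

    # Sketch of combining the (normalized) priors and pairs part worths.
    def combine_priors_and_pairs(priors, pairs, n, t, in_pairs):
        # pairs entries are 0 for levels not taken into the Pairs section;
        # in_pairs flags whether each level was included in that section
        w_prior, w_pairs = n / (n + t), t / (n + t)
        return [p * w_prior + q * w_pairs if used else p
                for p, q, used in zip(priors, pairs, in_pairs)]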
As a final step, the part worth utilities are calibrated. It is widely recognized that the part worths arising
from most conjoint methods are scaled arbitrarily, and that the only real information is contained in the
relative magnitudes of differences among them. So far, that is true of ACA as well.
However, the calibration concepts permit scaling of part worths in a non-arbitrary way. In any product
category, some respondents will be more interested and involved than others. We attempt to measure
each respondent's degree of involvement by asking "likelihood of buying" questions for several concepts
that differ widely in attractiveness. The data obtained from those concepts is useful in three ways:
Correlations between part worths and likelihood responses may be used to identify unmotivated
or confused respondents. Respondents whose likelihood responses are not related to their part
worths should probably not be included in subsequent preference simulations.
The level of likelihood responses may identify respondents who are more or less involved in the
product category. Respondents who give low likelihood responses even to concepts
custom-designed for them should probably be treated as poor prospects in simulations of
purchase behavior.
Variation in likelihood responses may also identify respondents who are "tuned in" to the product
category. A respondent who gives a low likelihood rating to the least attractive concept and a
high rating to the most attractive should be made to respond sensitively in preference simulations,
whereas someone who gives every concept similar likelihood values should be made insensitive
in simulations.
Each respondent is first shown what should be the least attractive possible concept, followed by the most
attractive possible concept, as determined from his or her own answers. Those two concepts establish a
frame of reference. The remaining concepts are of middling attractiveness. We determine an intercept
and one regression coefficient to apply to utilities to best predict logits of likelihood responses. Those
parameters are then used in a final scaling of utilities, which are therefore no longer arbitrarily scaled.
The procedure is as follows:
Let:
p = the predicted likelihood of buying a concept
x1 = the concept's utility based on the final “uncalibrated” utilities
b1 = the coefficient used to weight the utilities
a = an intercept parameter
The actual likelihood response is a single digit on a scale with n points. Responses are trimmed to the
range of 5 to 95. Then, using the logit transformation, we model buying likelihood as a function of the
respondent's utilities as:
ln [ p / (100 - p) ] ~ a + b1x1
If the regression coefficient is less than 0.00001, we assume the estimation is faulty and use a
conservative positive value (0.00001). The r-squared (measure of fit) reported in the .UTL file is set to 0
in such cases. If the calibration concepts section is not included in the interview, the respondent is
assumed to have answered 0 and 100 to the worst and best concepts, respectively, and 50 to the other
concepts.
To calibrate the part worths, each is multiplied by b1. The intercept a is divided by the number of
attributes, and the quotient added to the part worth for every attribute level. The part worths can be
added up and antilogs of their sums are predictions of odds ratios for claimed likelihood of purchase of
any concept, just as though that concept had been included in the calibration section of the questionnaire.
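The following sketch (Python; illustrative names, and it assumes the calibration concepts' utilities are not
all identical) shows the trimming, logit transformation, simple regression, and final scaling just described:

    import math

    # Sketch of the calibration step described above.
    def calibrate(likelihoods, concept_utils, n_attributes, part_worths):
        p = [min(max(v, 5), 95) for v in likelihoods]   # trim to the 5..95 range
        y = [math.log(v / (100 - v)) for v in p]        # logit transformation
        x = concept_utils
        mx, my = sum(x) / len(x), sum(y) / len(y)
        sxx = sum((a - mx) ** 2 for a in x)
        b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sxx
        b1 = max(b1, 0.00001)                           # guard against faulty fits
        a = my - b1 * mx
        # multiply each part worth by b1 and spread the intercept over attributes
        return [b1 * u + a / n_attributes for u in part_worths]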
A Note about Hierarchical Bayes Estimation
OLS has been successfully used in ACA calculations for well over a decade. However, a new technique
called hierarchical Bayes estimation provides a more theoretically satisfying way of combining information
from priors and pairs. The results are also usually better from the practical standpoint of improved
predictions of holdout questions. We recommend that the interested reader investigate ACA/HB by
downloading the technical paper from our Web site (www.sawtoothsoftware.com).
ii.7.3 How Utility Estimation in ACA/Web Differs from Previous
Versions of ACA
In earlier versions of ACA (ACA 3), the final part worth utilities were true least squares estimates, with the
priors and pairs information combined in the same regression tableau. In version 4 those two
components were estimated separately and optimal weights for the two components were chosen to best
fit responses to the calibration concepts.
The utility calculator for ACA/Web offers an ordinary least-squares (OLS) strategy that shares some
characteristics of both methods. It has the benefit of not mixing dependent variables that may have
different variances (as did ACA 3) within the same regression procedure. Combining priors and pairs
within the same tableau has been shown to enhance the Number of Levels Effect (which biases attribute
importance upward for those attributes with more levels). In version 4, if respondents provided
questionable responses to the calibration concepts, the benefits of the optimal weighting could be offset
by unstable estimates of the weights for pairs and priors. ACA/Web’s weighting strategy makes the
weights proportional to the amount of information contributed by each section of the interview. If very few
pairs questions are answered, the majority of the information for final utilities is contributed by the priors.
The more pairs questions are answered, the more influence they will have on the final utilities.
In ACA/Web, OLS estimates are provided for all levels of an attribute, up to 15 levels. In earlier versions
of ACA, only up to five levels of any attribute were measured using OLS, and the remaining levels were
estimated using interpolation or extrapolation. Even though ACA/Web permits estimation of up to 15
levels of an attribute, we strongly encourage you to limit the number of levels per attribute to five or
fewer, especially for quantitative attributes like speed, weight and price.
ii.8.1 How ACA/Web Data are Stored
ACA/Web stores information in its data file detailing the attribute level combinations shown at each stage
of the interview along with the respondents’ answers. You can download the data file from the Web
server using ACA/Web’s Online Data Management module, or accumulate the files from different PCs if
using CAPI-based data collection. Then, using SSI Web you can export the responses to the “standard”
(non-conjoint related) questions to an ASCII fixed-delimited format using File | Export Data | All Data.
You can also export the ACA data to an “audit trail” .ACD file by clicking File | Export Data | Prepare
Conjoint Data for HB. The .ACD file records information about the combinations of levels displayed at
each stage of the interview along with respondent answers. Our ACA/HB software reads the .ACD file
when estimating part worth utilities under HB.
To obtain OLS estimates for the attribute levels, you should run ACA/Web’s utility calculator. The utility
calculator reads the information from the respondent data file (STUDYNAMEdat.dat), computes the part
worths, and saves the results to a STUDYNAME.utl file. This format is identical to the .UTL files used in
ACA version 4 for the PC. Click HERE to view the formats for the .UTL and .ACD files.
ii.8.2 Prepare Conjoint Data for HB
This dialog lets you prepare a STUDYNAME.acd file, which is used within the ACA/HB program for
computing part worths using hierarchical Bayes estimation. The STUDYNAME.acd file is an ASCII file
with a specific format. It is also known as the “Audit Trail File” and contains information about the design
(the combinations of attribute levels the respondent saw at each stage of the survey), along with the
respondent’s answers.
Respondent Numbering
You can choose from five options for assigning respondent numbering. This lets you specify how the
respondent numbers (case IDs) are to be generated. User Name is one of the passwords from the
passwords module. You should only choose this if these values were purely numeric (no letters).
Password is also one of the passwords from the passwords module. You should also only use this if the
passwords were purely numeric (no letters). Internal Interview Numbers are those that the system
assigns when respondents complete the survey over the Web. There can be gaps in the sequence if
some respondents started a survey but didn’t finish. Sequentially Assigned Numbers (default) are
integers 1 through n. Other lets you choose another numeric variable to use for respondent numbering.
This last option is useful if you are linking surveys from another source and need to merge the data sets
by a “passed-in” variable.
Include
All Respondents: Choose this option to export all respondents: both complete and incomplete. Data for
questions that were not completed are left blank. Qualified/Completes Only: Choose this option to export
only qualified complete data records. A survey is considered qualified complete when respondents reach
a terminating question that has been marked with a qualified status.
File Format
The only available format is the Audit Trail File (.ACD Format).
ii.9.1 Updating Utilities During the ACA Interview
Estimates of the respondent's part worth utilities are updated after each page of paired-comparison
responses is submitted.
First consider the general case of how least squares regression coefficients can be updated to include the
effect of an additional observation.
Let X be a matrix of predictor variables with a row for each of n observations and a column for
each variable.
Let y be a vector of responses for the first n observations.
Let z' be a row vector of predictor values for a new observation, appended as a row to X.
Let r be a response for the new observation.
Considering only the first n observations, we have the regression equation:
    Xbn ~ y

where

    bn = (X'X)^-1 (X'y)                                          (1)

is the vector of coefficients that would be obtained by least squares estimation based on the first n
observations.
Now consider adding one observation. The expanded layout is:

    | X  |           | y |
    |    | bn+1  ~   |   |                                       (2)
    | z' |           | r |

where

    bn+1 = (X'X + zz')^-1 (X'y + zr)

is the least squares estimate based on n+1 observations. Suppose we already have bn, X, y, z, and r,
and we want to obtain bn+1. First consider an identity. Let

    v = (X'X)^-1 z                                               (3)

Then it can be shown that

    (X'X + zz')^-1 = (X'X)^-1 - vv' / (1 + v'z)                  (4)

Substituting into equation (2), we get

    bn+1 = bn + v (r - z'bn) / (1 + v'z)                         (5)
Equation (5) gives a formula for updating the estimate of part worths following each response, a relatively
easy computation since the numerator and denominator on the right are both scalars. We must also
update the inverse as in equation (4). That is also fairly easy since the vector v is already available. If we
are dealing with k attribute levels, then an updating cycle requires about 2k (k + 1) multiply and add
operations. This is a significant savings when compared to the cost of re-estimating "from scratch" after
each response, and the final results are identical.
Now consider how this scheme is applied to the specific situation in ACA:
Before the first updating we set X equal to the identity matrix and both bn and y equal to the
initial utility estimates.
The vector z consists of plus and minus 1's and 0's. An element equals 1 if the corresponding
attribute level appeared in the concept on the right of the screen, -1 if in the concept on the left of
the screen, and 0 if that level did not appear in either concept.
The response r is coded so that +4 means "strongly prefer right," -4 means "strongly prefer left,"
and 0 means indifference.
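The following sketch (Python with numpy; names are our own) applies equations (3) through (5) to fold
one pairs response into the running estimates:

    import numpy as np

    # Sketch of the rank-one updating scheme described above.
    def update_estimates(b, XtX_inv, z, r):
        # z: -1/0/+1 codes for the pair shown; r: response from -4 to +4
        v = XtX_inv @ z                                  # equation (3)
        denom = 1.0 + v @ z
        new_inv = XtX_inv - np.outer(v, v) / denom       # equation (4)
        new_b = b + v * (r - z @ b) / denom              # equation (5)
        return new_b, new_inv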
ii.9.2 Choosing the Next Paired-Comparison Question
The first part of an ACA interview is concerned with screening the attribute levels and learning enough
about the respondent's preferences to construct initial part worth estimates. After that is done we begin
the paired-comparison section, in which pairs of concepts are shown and preference questions are
asked. Following each response (submitted page of pairs questions) we update our estimates of part
worths and then decide what pair of concepts to present next.
The number of possible concepts is very large, and we need some reasonably efficient procedure to
choose a pair of them at each stage that will be most beneficial in some way. There are several
principles to keep in mind when thinking about how to choose concepts.
Concepts should be chosen by a method that gives the author as much control as possible over
the interview, in terms of the complexity of the concepts and the number of questions asked.
The design should be as "balanced" as possible. Observations should be spread as evenly as
possible over all attribute levels, and the columns of the design matrix should be as nearly
orthogonal as possible.
We should ask the respondent questions that require careful consideration. There is no point in
asking questions for which we already know the answer, such as "High quality at a low price"
versus "low quality at a high price." We learn more if we choose concepts nearly equal in
attractiveness.
Our procedure addresses these points. The author may specify the number of attributes to appear in
each concept. The range is from two to five. It is possible to start with only two attributes per concept
and, after the respondent has gained experience, to increase their complexity.
The concepts in a pair always have different levels of the same attributes. Our procedure for choosing
those concepts is:
Count the number of times each pair of attributes has appeared together in any concept. Pick a
set of attributes at random from among those whose members have previously appeared
together the fewest times.
For each of the chosen attributes, repeat similar logic to find levels that have been paired least
frequently.
Examine all possible ways of combining these levels into concepts (with just two attributes there
are only two possible ways; with 5 attributes there are 16 ways). Find the pair of concepts most
nearly equal in attractiveness, using the current estimates of the respondent's part worth utilities.
Randomly determine which concept will appear on each side of the screen.
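The last two steps can be sketched as follows (Python; illustrative names only). With two attributes the
loop enumerates the two possible ways noted above, and with five attributes the 16 ways:

    from itertools import product

    # Sketch of choosing the most nearly equal pair of concepts.
    def most_balanced_pair(level_pairs, utility):
        # level_pairs: one (levelA, levelB) tuple per chosen attribute
        # utility: dict mapping level id -> current part worth estimate
        best, best_gap = None, float("inf")
        # fixing the first attribute's side avoids mirror-image duplicates
        for flips in product([0, 1], repeat=len(level_pairs) - 1):
            left = [level_pairs[0][0]] + [p[f] for p, f in zip(level_pairs[1:], flips)]
            right = [level_pairs[0][1]] + [p[1 - f] for p, f in zip(level_pairs[1:], flips)]
            gap = abs(sum(utility[l] for l in left) - sum(utility[r] for r in right))
            if gap < best_gap:
                best, best_gap = (left, right), gap
        return best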
Accordingly, ACA presents pairs of concepts that are as nearly equal as possible in estimated utility. At
the same time, constraints are imposed to ensure that the overall design is nearly orthogonal. Within
concepts, each pair of attributes is presented with equal frequency, and within each attribute, each pair of
levels is presented with equal frequency. In addition, if the paired-comparison questions show only two
attributes at a time, further steps are taken to insure that the overall design is "connected."
Such an approach has these benefits:
1) It gives the respondent the impression that the system is paying attention to his or her
answers, and it seems to be asking increasingly insightful questions.
2) It helps keep the respondent operating within the defined range of a response scale rather than
at its ends.
3) It provides data on "tight" inequalities if estimation is later to be done by nonmetric methods.
ACA lets the author specify certain pairs of attribute levels that must not appear together in the same
concept. The procedure described above is modified slightly to take account of such prohibitions. When
concepts are described on only two attributes, ACA chooses the first few questions in a slightly different
way. (When the concepts are described on only two attributes, it would be possible to blunder into a
design in which the attributes would be divided into subsets in such a way that those in one subset would
never be paired with those in another subset. Such designs would provide no information about the
relative importance of attributes in different subsets, and ACA automatically corrects the design in such a
situation.)
ACA's designs usually have good statistical efficiency, although they are not strictly orthogonal.
Statistical efficiency is increased as more attributes are used in each concept, and it is also possible to
produce concepts more nearly equal in attractiveness when there are more attributes with which to work.
However, using larger numbers of attributes has the unfortunate consequence of making the questions
more complicated, and respondents are more easily confused.
Both anecdotal and experimental evidence has shown that it is usually best to start with only two
attributes per concept and, after a few pairs, to increase the number of attributes to three. Beyond three
attributes, gains in efficiency are usually offset by respondent confusion due to task difficulty.
ii.9.3 Differences between ACA/Web and ACA 4
With ACA/Web, all aspects of the system are managed through a Windows point-and-click interface.
ACA version 4 used a DOS-based menu-driven system.
ACA/Web benefits from the features available to Web browsers, such as font, point size and style control
for text, along with the ability to display graphics and play sounds. ACA version 4 for the PC can only
display text in the default character display mode supported by DOS, or the DOS emulation provided by
Windows.
ACA version 4 permitted respondents to rank-order non a priori ordered attributes. ACA/Web only offers
rating of those levels.
ACA/Web lets you customize the text displayed for each separate Ratings or Importances question, by
placing Text/HTML Filler “questions” prior to each.
ACA version 4 included a “Most Likelies” section in the priors. ACA/Web does not offer that section.
ACA version 4 included an “Unacceptables” section in the priors. ACA/Web does not offer that section.
ACA/Web can include up to 30 attributes with 15 levels each. ACA version 4 was limited to 9 levels per
attribute.
ACA version 4 carried at most five levels of an attribute forward to the pairs and calibration concepts
section. ACA/Web can carry up to 15 levels forward through the pairs and calibration concepts.
ACA/Web can include up to 10 additional “standard” survey questions, supporting skip patterns and
randomizations. For additional standard questions, ACA/Web can be included within a CiW Web survey,
with up to 500 additional questions. ACA version 4 did not permit any questions outside of the conjoint
interview. ACA version 4 could be linked within a Ci3 survey for the PC, with up to 2000 additional
questions.
The OLS estimation of part worth utilities is slightly different for ACA/Web than ACA version 4. Please
see the section entitled Estimating ACA/Web Utilities for details.
ACA/Web surveys permit displaying more than one question per screen (page). In the pairs section, part
worths are updated after each page of responses is submitted. A page may include just one, or many
pairs questions.
ii.10.1 Format of .UTL and .ACD files
The .UTL File
The .UTL file contains the estimated part worths for each respondent. Each respondent’s data are
formatted across multiple lines. An example is as follows:
      104  .935     0    23     7
   9.465   9.746  -9.999  -6.029  -9.999  -2.108  -1.305  -1.275  -0.547  -0.635
  -9.999  -9.999  -9.999  -4.550  -1.665  -1.764  -4.172  -2.473  -4.131  -3.254
  -2.473  -9.999  -9.999
Following is a description of each line:
Line      Columns   Description                Notes

Line 1    1-9       Respondent Number          Number identifies record
          10-15     Correlation                A "-1" indicates Calibration Concepts were not
                                               shown and as a result a correlation was not
                                               calculated; consequently the Purchase
                                               Likelihood model cannot be used for simulations
          16-21     Merged Variables           Always zero when generated by ACA/Web
          22-27     Number of Utility Levels   Number of levels in record
          28-33     Interview Duration         Always a zero in ACA/Web

Line 2+   1-8       First Utility Level        Value of level
          9-16      Second Utility Level       Value of level
          ...       Remainder of Utility       And so on, for all existing utility levels.
                    Levels
There can be up to 18 lines of part worth utility levels; each line has 8 columns per level, up to 10 levels
per line; up to a maximum of 180 levels. Each value has 3 places after the decimal point.
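For readers who want to read the file programmatically, here is a sketch in Python of parsing one
respondent record using the column positions above (illustrative code, not a supported utility):

    # Sketch of reading one .UTL respondent record.
    def read_utl_record(lines):
        header = lines[0]
        record = {
            "respondent": int(header[0:9]),
            "correlation": float(header[9:15]),
            "merged": int(header[15:21]),
            "n_levels": int(header[21:27]),
            "duration": int(header[27:33]),
        }
        values = []
        for line in lines[1:]:
            text = line.rstrip()
            # each part worth occupies an 8-column field, up to 10 per line
            values += [float(text[i:i + 8]) for i in range(0, len(text), 8)]
        record["utilities"] = values[:record["n_levels"]]
        return record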
The STUDYNAME.ACD File (Audit Trail)
The STUDYNAME.ACD file contains an "audit trail" of the questionnaire for each respondent. The file
contains information about which questions were presented and about the responses. In earlier versions
of ACA, such as ACA v4 for DOS, this file also contained three sets of part worths, computed at various
stages of the interview. We expect that most ACA/Web users will use the .ACD file generated by
ACA/Web with ACA/HB (Hierarchical Bayes estimation). ACA/HB does not use the three sets of part
worths available in previous versions of the .ACD file, so we have inserted “0.000” values to maintain
compatibility with the prior .ACD file format.
There is one record for each respondent, in the order respondents appear in the STUDYNAME.UTL file.
Following is an example of one respondent's record in the audit trail file:
104  12   5  21
  5   1   4   1   4   3   2   5   0   0   0   0
  7   6   4   1   3   2   0   0   0   0   0   0
 14   9   2   1   2   5   4   3   6   0   0   0
 16  17   3   2   3   1   0   0   0   0   0   0
 19  23   1   6   5   4   3   2   1   0   0   0
 -1  -7   2   8   0   0   0   0   0   0   7
 -6 -10   7   9   0   0   0   0   0   0   6
 -9 -16  10  17   0   0   0   0   0   0   3
-17 -20  15  18   0   0   0   0   0   0   4
 -1 -20   2  21   0   0   0   0   0   0   8
 -6 -10 -19   8   9  18   0   0   0   0   4
 -1 -16 -19   2  15  17   0   0   0   0   7
 -1 -10 -16   2   9  15   0   0   0   0   7
 -6 -10 -20   7   9  21   0   0   0   0   7
 -8 -17 -21   6  16  17   0   0   0   0   7
 -1  -7 -10 -15   2   8   9  17   0   0   7
 -1  -9 -16 -19   2  10  15  21   0   0   7
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
0.000 0.000 0.000
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
0.000 0.000 0.000
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
0.000 0.000 0.000
  4   5
  1   6   9  15  21   0.000   12
  2   7  10  16  17   0.000   99
  1   7  10  16  17   0.000   24
  2   6   9  14  20   0.000   55
0.000 0.000 0.000 0.000
First line:
The first line contains the respondent number (104), the number of pairs that were shown in the Pairs
section of the interview (12), the number of attributes in the study (5), and the number of attribute levels
in the study (21).
Attribute levels are assigned consecutive numbers. Numbers are assigned beginning with the levels of
the first attribute, then the levels of the second attribute, and so on. For example, the chart below shows
how numbers are assigned to the levels of the attributes in the questionnaire that generated this data file:
Body Type             Configuration           Color         Manufacturer                Price

Two-Door Sedan (1)    Rear-Wheel Drive (6)    Red (9)       US Manufacturer (14)        $16,000 (17)
Four-Door Sedan (2)   Front-Wheel Drive (7)   Blue (10)     Japanese Manufacturer (15)  $18,000 (18)
Hatchback (3)         All-Wheel Drive (8)     Green (11)    European Manufacturer (16)  $20,000 (19)
Minivan (4)                                   Yellow (12)                               $22,000 (20)
Convertible (5)                               White (13)                                $24,000 (21)
Second and next lines, the Importance Ratings:
Next follows one line for each attribute. Each line has 12 columns (unless there are more than 9 levels
for any attribute, in which case the layout is expanded to 18 columns). These 12 columns allow for the
maximum number of levels to be displayed (9), the two attribute levels that were shown, and the
respondent's answer to the Importance question.
The numbers in the first and second columns in each line represent the attribute levels that were shown
in the Importance Rating section. The number in the third column is the respondent's answer to the
Importance question. Then, the remaining 9 columns have numbers for each level that show the
preference rating for each level, in order, for as many levels as exist for the attribute. These numbers are
ratings, where the "1" indicated the least-preferred level. Any remaining columns contain zeros.
Using our example record that has 5 attributes, the lines just described are lines 2-6:
 5   1   4   1   4   3   2   5   0   0   0   0
 7   6   4   1   3   2   0   0   0   0   0   0
14   9   2   1   2   5   4   3   6   0   0   0
16  17   3   2   3   1   0   0   0   0   0   0
19  23   1   6   5   4   3   2   1   0   0   0
The second line represents the second attribute (Drive Configuration); the first number, 7 (Front-Wheel
Drive), and the next number, 6 (Rear-Wheel Drive) represent the attribute levels that were shown in the
Importance Rating Section. The third number, 4, is the respondent's answer to the Importance question.
The next numbers, 1, 3, and 2, indicate that the first level (Rear-Wheel Drive) was least preferred (1), the
second level (Front-Wheel Drive) was most preferred (3), and the preference for the third level (All-Wheel
Drive) was in between (2). The second attribute has only three levels, so the remaining columns on this
line contain zeros.
Next lines, the Paired-Comparisons:
Next follows one line for each of the pairs that were shown in the Pairs section of the interview. Each line
has 11 columns. These 11 columns allow for the maximum number of levels to be displayed in the Pairs
(5 in each of 2 concepts) and one column for the respondent's answer. The numbers represent the levels
that were shown. Numbers with negative signs indicate that the level appeared on the left side of the
screen, and those without negative signs indicate that the level appeared on the right side of the screen.
The final number on the line is the respondent's answer.
Using our example record that has 12 pairs, the lines just described are 7-18:
 -1  -7   2   8   0   0   0   0   0   0   7
 -6 -10   7   9   0   0   0   0   0   0   6
 -9 -16  10  17   0   0   0   0   0   0   3
-17 -20  15  18   0   0   0   0   0   0   4
 -1 -20   2  21   0   0   0   0   0   0   8
 -6 -10 -19   8   9  18   0   0   0   0   4
 -1 -16 -19   2  15  17   0   0   0   0   7
 -1 -10 -16   2   9  15   0   0   0   0   7
 -6 -10 -20   7   9  21   0   0   0   0   7
 -8 -17 -21   6  16  17   0   0   0   0   7
 -1  -7 -10 -15   2   8   9  17   0   0   7
 -1  -9 -16 -19   2  10  15  21   0   0   7
The first of those lines represents the first pair; the first number, -1, indicates that level number 1,
Two-Door Sedan, appeared on the left of the screen. The second number, -7, indicates that level
number 7, Front-Wheel Drive, appeared on the left of the screen. Those levels were paired with the next
two numbers, which appeared on the right of the screen: 2 (Four-Door Sedan) and 8 (All-Wheel Drive).
The final number in the line, 7, indicates the response to this Pairs question.
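A small sketch in Python (illustrative only) shows how such a line can be decoded:

    # Sketch of interpreting one Pairs line from the .ACD record.
    def parse_pairs_line(numbers):
        # numbers: the 11 integers of one Pairs line
        levels, answer = numbers[:10], numbers[10]
        left = [-n for n in levels if n < 0]   # levels shown on the left
        right = [n for n in levels if n > 0]   # levels shown on the right
        return left, right, answer

    # The first pair above: left levels 1 and 7, right levels 2 and 8, answer 7
    print(parse_pairs_line([-1, -7, 2, 8, 0, 0, 0, 0, 0, 0, 7]))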
Next lines, the Utilities Before Calibrations:
In ACA v4, the next lines represent the part worths for each level of each attribute, in order, for this
respondent, prior to calibration. In ACA/Web, we insert “0.000” values as placeholders. Each line has up to
10 part worths.
Using our example record that has 23 attribute levels, the lines just described are 19-21:
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
0.000 0.000 0.000
The values are displayed for each level.
Next lines, the Prior Utilities:
In ACA v4 the next lines represent approximate part worths for each level of each attribute, in order, for
this respondent. These part worths are based only on the information from the interview prior to the Pairs
section. In ACA/Web, we insert 0 values as placeholders. Each line has up to 10 part worths.
Using our example record, the lines just described are 22-24:
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
0.000 0.000 0.000
The values are displayed for each level.
Next lines, the Contribution to the Utilities from the Pairs Section:
In ACA v4 the next lines represent part worths for each level of each attribute, in order, for this
respondent. These are the components of the part worths from the Pairs section. In ACA/Web, we insert
0 values as placeholders. Each line has up to 10 utilities.
Using our example record, the lines just described are 25-27:
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
0.000 0.000 0.000
The values are displayed for each level.
Next line, Number of Calibration Concepts and Number of Attributes:
The next line contains two numbers that report the number of Calibration Concepts shown to the
respondent and the number of attributes in each concept.
In our example record, this is line 28:
  4   5
The first number, 4, indicates that 4 concepts were shown. The second number, 5, indicates that there
were 5 attributes in each concept.
Next lines, Calibration Concepts:
Next, there is one line for each calibration concept. For as many columns as there were attributes in
each concept, numbers indicate the attribute levels included in the concept. Then, the next column
shows the calculated utilities for the concept (in ACA/Web we set those to 0.000). The last column
contains the respondent's purchase likelihood value for the concept.
In our example record, the lines just described are lines 29-32:
  1   6   9  15  21   0.000   12
  2   7  10  16  17   0.000   99
  1   7  10  16  17   0.000   24
  2   6   9  14  20   0.000   55
The previous line, 28, indicated there were 4 concepts, so there are 4 lines (29-32), one for each
concept. Line 28 also indicated there were 5 attributes in each concept, so each of the 4 lines uses the
first 5 columns to show which attribute levels were included in that concept. For example, the first five
numbers in line 29 indicate these levels were shown: 1 (Two-Door Sedan), 6 (Rear-Wheel Drive), 9
(Red), 15 (Japanese Manufacturer), and 21 ($24,000). In ACA v4, the next number indicates the total
utility for the concept. In ACA/Web we put a value of 0.000 there as a placeholder. The last number, 12,
represents the respondent's purchase likelihood value for the concept.
Last line, Calibrating Coefficients:
In ACA v4 the last line contains the calibrating coefficients and measure of fit from the logit regression,
which is used to predict likelihoods of purchase and to weight the priors and pairs components when
estimating "optimal" utilities. In ACA/Web we insert “0.000" as placeholders for those values.
In our example record, the line just described is line 33:
0.000 0.000 0.000 0.000
ii.11.1 Real-Time Feedback of ACA Results to Respondents
There are some situations in which you might want to show respondents their attribute importances
and/or part worth utilities at the completion of an ACA survey. ACA/Web enables this by making the part
worths and attribute importances available as system variables that you can display on the screen.
We suggest the following steps:
1. Generate a Text question after the ACA questions. This question can be in any position
following the last ACA Pairs question. This question will be used to display the ACA utilities
report.
2. There is a template (HTML) text file installed with your system called acarpt.htm. Make a copy
of that file and modify it with a text editor (such as Wordpad or Notepad) to suit your list of
attributes and levels. Follow the instructions and patterns in that template file (it is helpful to know
something about HTML coding when working with this file.) Copy the coding patterns as many
times as needed to match your attribute list. You will note that the template refers to system
variables that contain part worth utility and importance values. The attribute importances are
contained in system variables called %A1I (attribute 1 importance), %A2I (attribute 2 importance),
etc. The level part worths are contained in system variables called %A1L1 (utility of attribute 1,
level 1), %A1L2 (utility of attribute 1, level 2), %A2L1 (utility of attribute 2, level 1), etc. You can
show these variables at any time during an ACA/Web interview (following the priors questions) by
enclosing the system variables within percent symbols, as follows:
%%A1I%
(Returns the value for the importance of attribute 1)
%%A1L1% (Returns the value for the utility of the first level of attribute 1)
We suggest that you open (view) the template file with your browser to check your work as you
modify that file. Once you are happy with the look of your report, you are ready to paste the
HTML text into the Text question you have defined in the ACA/Web questionnaire.
3. Using a Windows editor such as Word, Wordpad or Notepad, highlight the appropriate text of
the renamed template HTML file you’ve been editing and cut-and-paste that text into either the
Header, Footer or Body section of the Text question you generated in Step 1 above. When the
survey runs on the Web, values will be inserted in place of the system variable strings, and the
bars in the bar chart for importances will extend the appropriate lengths to reflect the relative
importance of each attribute.
Notes: Utilities and importances are reported with no decimal places of precision. Utilities are reported
as “zero-centered diffs.” Importances are reported as integers that sum to 100. The utilities are
generated by the efficient updating routine used when the ACA Survey runs on the Web. These utilities
are very similar to, but not exactly the same as, the final utilities generated by the ACA/Web OLS Utility
Calculator.
For each page that includes a report of ACA utilities or importances, the utilities are re-computed “from
scratch”. This can be computationally intensive for the server. If you show these values over many
pages, the computations must be repeated for each page.
If you change the number of attributes or levels in your study, or the text for attributes/levels, make sure
to manually update the HTML report to reflect those changes. Because so many aspects of ACA/Web
are automatic, it is easy to forget this step.
ii.12.1 Analyzing ACA Data with the Market Simulator
You were given a companion piece of software called the Sawtooth Software Market Simulator, which is
also referred to as SMRT. You use this system to analyze your ACA part worth utilities and to run market
simulations (what-if scenarios). Follow the instructions in the Sawtooth Software Market Simulator
manual to import and analyze your ACA data.
To get started:
1. Open the SMRT application. If you have a default installation, you start the application by
choosing Start | Programs | Sawtooth Software | Sawtooth Software SMRT.
2. Choose File | New to define a new study. If asked, specify ACA (Adaptive Conjoint Analysis)
as the type of study.
3. Select Analysis | Run Manager. Click the Import button. Under Files of Type, specify ACA
Utilities (.UTL). Browse for the .UTL file that the utility calculator generated, highlight it, and click
the Open button to import the data.
4. Once the utility data have been imported, choose Analysis | Market Simulator to start the
Market Simulator.
iii.1.1 Getting Started: What is CBC Analysis?
CBC/Web is used for conducting “Choice-Based Conjoint Analysis” studies over the web, in non-web
“CAPI” (Computer-Aided Personal Interview) interviewing mode, or via paper-and-pencil questionnaires.
CBC studies are used for learning about respondents’ preferences for the combinations of features that
make up products or services. CBC analysis can help with (among other things) product design, line
extensions, pricing research, and market segmentation. The market simulators that result from CBC
analysis enable managers to test numerous product formulations and competitive scenarios. Market
acceptance/competitiveness, price sensitivity, and cannibalization are just some of the issues that market
simulators based on CBC data can probe.
The main characteristic distinguishing choice-based conjoint analysis from other types of conjoint analysis
is that the respondent expresses preferences by choosing concepts (products) from sets of concepts,
rather than by rating or ranking them. Over the last few years, choice-based conjoint has become the
most widely used conjoint-related technique.
The Choice Question
A CBC question is often referred to as a “task.” A set of products (concepts) is displayed on the screen,
and the respondent chooses among the concepts. For example:
This is a basic display for illustration. The user has a great deal of control over the fonts, colors, and
layout. Graphics may also be used. Realistic-looking “store shelf displays” with scores of product
graphics can be developed using the Advanced Design Module for CBC/Web.
The attributes that make up each concept are carefully chosen so that the independent effect of each
attribute level upon a product concept’s likelihood of choice can be estimated. CBC/Web software
automatically designs the concept combinations based on a few user-defined inputs.
Two important decisions to make when constructing choice tasks are 1) how many concepts to present
per task, and 2) how many total tasks to ask. These questions are sometimes difficult to answer, and
ongoing research is helping to define guidelines and understand the consequences. A typical study might
include about a dozen or more choice tasks, with about three to five product concepts per task. The
composition of the choice tasks usually will vary between respondents. Each respondent receives one
version drawn from a larger pool of efficient designs. Because respondents are randomly selected to receive a
different version of the overall design, we refer to these designs as “random” designs. However, the term
random can be misleading, as the designs themselves are carefully selected using a computer search
algorithm that ensures that each separate version has level balance and near orthogonality.
Often some “holdout” choice tasks are included in the CBC questionnaire. These are not used for
estimating the preferences (part worth utilities) for the respondents, but are used to check the validity of
the estimated utilities. We refer to these as “holdout” tasks or “fixed” choice tasks. They are “fixed” in the
sense that the product combinations within these tasks are shown in exactly the same way to all
respondents.
The Role of CBC
Choice-based conjoint analysis has attracted much recent interest in the marketing research field. There
are several reasons for its increasing popularity:
Some researchers favor it because the task of choosing a preferred concept is similar to what
buyers actually do in the marketplace. Choosing a preferred product from a group of products is
a simple and natural task that anyone can understand.
Choice-based conjoint analysis lets the researcher include a “None” option for respondents, such
as “I wouldn’t choose any of these.” By selecting that option, respondents who do not like any of
the options can express their lack of interest. Comparing “None” usage across groups of
respondents can reveal segments that are relatively more or less likely to purchase product
concepts.
Most conjoint analysis studies use “main effects only” assumptions. Choice-based conjoint
analysis can be easily analyzed at an aggregate level rather than for respondents individually,
making it feasible to quantify interactions. This capability is enhanced by the random designs
used by the CBC System, which, given a large enough sample, permit study of all interactions,
rather than just those expected to be of interest when the study was designed.
It is possible in choice-based conjoint analysis to have “product-specific” (alternative-specific)
attributes. For example, in studying transportation we might consider walking shoes and bicycles.
The attributes describing shoes are different from those describing bicycles, and yet one might
want to learn how much improvement in walking shoes would be required to switch a respondent
from cycling to walking. CBC/Web’s Advanced Design Module permits alternative-specific
designs.
Choice-based conjoint analysis does have a disadvantage, however: it is an inefficient way to elicit
preferences. Each concept is described using many attributes and each choice set contains several
concepts. Therefore, the respondent has to do a lot of reading and process a lot of information before
giving each answer. Moreover, the response data are sparse. Each choice does not indicate how much
more preferred that option is relative to the remaining options, or the relative preferences for the
not-chosen alternatives.
For those reasons, in the early days of CBC research, choice-based conjoint studies were not used to
estimate the values that individual respondents attach to attribute levels, as has been done with
traditional ratings-based conjoint methods. Instead, data from groups of respondents were aggregated
for analysis. This has been done either by combining all respondents or by studying subsets defined by
specific market segments. “Part worth utility values” could be produced for each group of respondents
that summarize the choices made by those individuals. And, as in other conjoint methods, the part worth
values could be used to simulate and predict respondent reactions to product concepts that may not have
actually appeared in the choice tasks (questions).
Aggregating respondents in CBC analysis assumed respondent homogeneity, which was not always
appropriate or desirable. Developments since the mid-1990s have recognized segment-based or even
respondent-by-respondent differences for CBC analysis. Latent Class analysis (as offered in the CBC
Latent Class Segmentation Module) can simultaneously delineate relatively homogeneous segments and
estimate their unique part worth functions.
Computationally intensive Bayesian estimation (available in Sawtooth Software’s CBC/HB System) has
permitted estimating individual-level part worths from choice data. HB has proven to give better
understanding of market structure and generally leads to more accurate predictions and greater flexibility
during analysis.
If you are relatively new to CBC research, we highly recommend that you visit our Technical Papers
library at www.sawtoothsoftware.com. We recommend you download and read:
The CBC Technical Paper
An Overview and Comparison of Design Strategies for Choice-Based Conjoint Analysis
The Benefits of Accounting for Respondent Heterogeneity in Choice Modeling
Special Features of CBC Software for Packaged Goods and Beverage Research
iii.1.2 Getting Started: Overview of CBC/Web Software
CBC/Web is a component within the SSI Web suite of Internet Interviewing products. CBC/Web is used
for conducting Choice-Based Conjoint (CBC) studies over the web, in non-web “CAPI” interviewing mode,
or via paper-and-pencil questionnaires. CBC questions are usually integrated within a longer SSI Web
questionnaire that includes other standard survey questions. The number of additional questions that can
be added depends on your license. A CBC/Web study can include at least 10 additional standard survey
questions. Up to 1000 additional questions can be used if the largest CiW interviewing license is
acquired.
CBC studies are used for learning about respondents’ preferences for the combinations of features that
make up products or services. CBC analysis can help with (among other things) product design, line
extensions, pricing research, and market segmentation.
Your license entitles you to free technical support via phone, email or fax. We encourage you to contact
us (360/681-2300) if you cannot find the answer to your problem or question in this documentation. We
are available Mon-Fri, 8:30 am to 5:00 pm Pacific time. (Note: we provide full technical support for
composing and testing your Internet survey on your local PC, and provide written instructions on how to
install your questionnaire on your server. We cannot take responsibility for or be expected to provide
technical support to debug any customized HTML or JavaScript you may add to your survey. You are
also responsible for making sure that Perl is running and that you can run Perl (CGI) scripts on your web
server. We can provide some assistance related to SSI Web’s use with your server once the fundamental
Perl functionality is in place.)
A sample CBC/Web questionnaire (tut3.ssi) and supporting files are available in the Tutorials folder within
your SSI Web installation. You can easily access this example study by clicking Help | Tutorial Studies.
This questionnaire lets you see an example of a CBC/Web survey and investigate how it is set up.
If you know a little HTML (which is not necessary to use CBC/Web), you can employ some “tricks” to help
you customize the look of your survey beyond the standard options provided by the software. By opening
the sample study and looking at the settings and text we’ve used, you can become familiar with some of
the possibilities.
Using CBC/Web requires the following skills and knowledge base:
The analyst managing the survey should know the basics of conjoint analysis, including defining
attributes and levels properly, correctly phrasing the questionnaire text and response scales,
interpreting the part worth utilities, and running simulations. It is not necessary to be a statistician to
use our software effectively, but knowledge of basic statistics for market research is highly
recommended.
The questionnaire is first composed locally on a PC running under Windows 95 or later with Internet
Explorer, Netscape or another Web browser installed. The individual inputting/formatting the survey
should have working knowledge of standard Windows applications such as word processing, Web
browsers and/or spreadsheets. Though it is not necessary, some knowledge of HTML is valuable.
For web-based data collection, one must have access to an Internet/intranet server running Perl
5.003 or later. This can be on your company’s own Web server, or your ISP’s (Internet Service
Provider’s) Web server. The user must know how to use FTP software to create directories, upload
files to those directories, and set permissions on the files and directories. Click here to receive
detailed instructions on the directories, files and permissions. (Sawtooth Software also provides web
hosting services for SSI Web studies.)
Relationship between CBC/Web and SMRT
CBC/Web is used for designing, creating, and fielding CBC/Web studies (either web-based, non-web
“CAPI” interviewing, or paper-and-pencil modes). But you analyze the CBC data using the SMRT
software. These two software systems are installed and run independently--but they share a common file
format (*.CHO), enabling the SMRT software to easily read the data resulting from a CBC/Web study.
If you don’t already own SMRT software, you receive a special copy (along with a manual) to analyze
your CBC/Web data.
Steps in Conducting a CBC/Web Study
1) Formulate attributes and levels and type them into the CBC/Web interface.
2) Make decisions regarding your study design: decide how many product concepts to display per choice
question (task), whether to include a “None” option, and how many choice tasks to field in your study.
Using the point-and-click CBC/Web interface, add those CBC questions to your SSI Web study.
3) Test your questionnaire in preview mode, by using the Local Web Server included in SSI Web, or by
uploading the files to your Web server. Make modifications and re-test.
4) Decide how many unique versions of the CBC questionnaire to use in your overall design (default is
300). If using paper-and-pencil data collection, it is common to use a few to a dozen design versions.
Using the Generate/Test Design module, test the CBC design to make sure the design is efficient to
estimate the part worth utilities (main effects).
5) If using web-based data collection, upload your study to an Internet server, and continue testing--by
yourself, with colleagues, and later preferably with a small number of pilot respondents. Receive
feedback from colleagues and pilot respondents. Examine the data to make sure all data are being
stored properly using the Online Data Management Module, and by downloading the data to your hard
drive. Export the data to an ASCII (Text-only) *.CHO file. Import the results into the SMRT software
package for analysis. Compute logit utilities to ensure that the design is not deficient.
6) After ensuring that your CBC questionnaire works properly, that the data are being stored correctly,
and that your design is not deficient, invite respondents to take the survey.
7) Monitor the data collection using the Online Data Management Module. Accumulate the data, and
download, and then Prepare files for analysis within the SMRT system. See the SMRT manual, or online
help within the SMRT software for documentation regarding CBC analysis.
iii.1.3 CBC Questionnaires and Design Strategy
CBC/Web creates Web-based, CAPI, or paper-and-pencil interviews. CBC/Web comes with a
10-question CiW system that provides the capability of asking up to 10 other standard survey questions
(e.g. numerics, open-ends, check-box). Additional capacity for standard survey questions is available by
purchasing a larger CiW license.
The Choice Question
A CBC question is often referred to as a “task.” A set of products (concepts) are displayed on the screen,
and the respondent chooses among the concepts. For example:
Two important decisions to make when constructing choice tasks are 1) how many concepts to present
per task, and 2) how many total tasks to ask. These questions are sometimes difficult to answer, and
ongoing research is helping to define guidelines and understand the consequences.
From a statistical viewpoint, choice tasks are not a very efficient way to learn about preferences.
Respondents evaluate multiple concepts, but only tell us about the one they prefer. We don’t learn how
strong that preference is relative to the other product concepts. Showing more product concepts per
screen increases the information content of each task. Recent research has shown that respondents are
quite efficient at processing information about many concepts. It takes respondents considerably less
than twice as long to answer choice tasks with four concepts as with two concepts. In general, we
recommend showing around three to five concepts per task (particularly if your attributes also have about
three to five levels each). However, there may be instances (e.g. a beverage study with 25 brands plus
price), where showing more product concepts per screen is appropriate and more realistically portrays the
actual buying situation. With the Advanced Design Module for CBC/Web, you can display up to 100
product graphics on the screen, presented as if they were resting on “store shelves.”
With randomized choice designs, given a large enough sample size, we could model preferences at the
aggregate level by asking each respondent just one choice task. In practice, researchers recognize that
individuals are expensive to interview and that it makes sense to collect more information from each
person in CBC studies. With multiple observations per respondent, one can model heterogeneity in
preferences, which leads to more accurate choice simulators.
In a recent meta-analysis of 21 CBC data sets, we found that multiple observations per respondent are
quite valuable, and that respondents can reliably answer up to at least 20 questions, and perhaps even
more. However, we should point out that respondents process earlier tasks differently from later
questions. Respondents pay more attention to brand in the first tasks, and focus more on price in later
questions. (See Johnson and Orme’s article entitled “How Many Questions Should You Ask in
Choice-Based Conjoint?” available for downloading from the Technical Papers section of our home page:
http://www.sawtoothsoftware.com).
With CAPI or paper-based CBC research, we recommend asking somewhere in the range of 12 to 20
choice tasks. With Web interviewing, fewer tasks might be appropriate if there is an opportunity for
increased sample sizes (which often is the case). For the typical CBC design, if you plan to calculate part
worths at the individual level using ICE, we recommend twenty tasks or more. If estimating individual-level
utilities using CBC/HB, we'd recommend at least six choice tasks to achieve good results based on
simulated shares, but about 15 choice tasks or more for developing robust predictions at the individual
level (again assuming a typical design, and a healthy degree of sample heterogeneity). Before finalizing
the number of concepts or tasks to be asked, we urge you to pretest the questionnaire with real
respondents to make sure the questionnaire is not too long or overly complex.
With CBC/Web, all of the tasks are tied to the same layout, including header text and footer text. You
cannot change the number of product concepts or other settings within your choice tasks midstream. In
addition to the regular choice tasks, you can add tasks that have fixed designs (“holdout” tasks) described
later. You can also add other text screens or generic survey questions between choice tasks.
The “None” Option
Choice-based conjoint questions can be designed to have a “None” option, sometimes referred to as the
constant alternative. It is argued that the None option in CBC tasks better mimics the real world, since
buyers are not required to choose products that don’t satisfy them. The None option can also be used to
reflect a status quo choice, such as “I’d continue to use my current long-distance service provider.”
CBC/Web gives you the option of including a None option in the questionnaire. If the None option is
present, a separate logit utility weight is computed for the None parameter (if using logit analysis). The
more likely respondents are to choose None relative to product concepts, the higher the utility of the None
option. The None parameter can be used in market simulations to estimate the proportion of respondents
that would not choose any of the simulated product concepts. You can include more than one constant
alternative in the questionnaire if using the CBC/Web Advanced Design Module.
In general, we recommend including None in CBC questionnaires, but paying less attention to (or
completely ignoring) it in market simulations. The propensity to choose None can be a telling measure
when comparing groups or individuals. We suggest that percent of Nones in Counts or simulations be
viewed in a relative rather than an absolute sense.
The use of the “None” concept in partial-profile CBC studies (only supported by the Advanced Design
Module for CBC/Web) is problematic. The None weight varies significantly depending on how many
attributes are displayed in the partial-profile task. Patterson and Chrzan (2003) showed that as the
number of attributes increases, the propensity to choose None also increases.
Randomized Versus Fixed Designs
Two schools of thought have developed about how to design and carry out choice-based conjoint studies.
1) Some researchers prefer fixed orthogonal designs. Such designs often employ a single version of
the questionnaire that is seen by all respondents, although sometimes respondents are divided
randomly into groups, with different groups receiving different questionnaire versions (blocks, or
subsets of the larger fixed design). Fixed orthogonal designs have the advantage of high efficiency at
measuring main effects and the particular interactions for which they are designed. It is interesting to
note that for imbalanced asymmetric designs (where there are large differences in the numbers of
levels among attributes) fixed orthogonal designs often can be less efficient than random designs.
2) Other researchers, particularly those accustomed to computer-administered interviewing, prefer
random designs. The CBC/Web software, for example, can design interviews with approximately
orthogonal designs, in which each respondent receives a unique set of questions. Attribute and
concept order can be randomized across respondents. Such designs are often slightly less efficient
than truly orthogonal designs (but can be more efficient with asymmetric designs), but they have the
offsetting advantage that all interactions can be measured, whether or not they are recognized as
important at the time the study is designed. Randomized plans also reduce biases due to order and
learning effects, relative to fixed plans.
CBC/Web can administer either fixed or random designs. If a fixed design is chosen, then the researcher
must specify that design. If a randomized design is chosen, then it will be produced automatically and
saved to a design file that is later uploaded to your server if using Web-based data collection, or to
another PC if using CAPI-based interviewing. If using paper-based interviewing, a limited set of
questionnaire versions (a subset of the potential random plan) is often used. Most CBC users have
favored the ease of implementation and robust characteristics of the randomized approach.
We have tried to make CBC/Web as easy to use and automatic as possible. The researcher must decide
on the appropriate attributes and their levels, and compose whatever explanatory text and/or generic
survey questions are desired during the interview. Absolutely no “programming” is involved. Every
aspect of designing the interviews and conducting the analysis is managed through the point-and-click
Windows interface. Thus, we hope that CBC/Web will make choice-based conjoint analysis accessible to
individuals and organizations who may not have the statistical or internet expertise that would otherwise
be required to design and carry out such studies.
Random Design Strategies
Fixed experimental designs may be specified in CBC/Web, but most users will rely on one of the four
randomized design options. When CBC/Web constructs tasks randomly, some efficiency is often
sacrificed, compared to strictly orthogonal designs of fixed tasks. But, any loss of efficiency is quite small,
usually in the range of 5 to 10%. However, there is an important compensating benefit: over a large
sample of respondents, so many different combinations occur that random designs can be robust in the
estimation of all effects, rather than just those anticipated to be of interest when the study is undertaken.
Also, potential biases from learning and order effects can be reduced.
The earliest versions of CBC offered two randomized design options: complete enumeration and the
shortcut method. Though we refer to these as “randomized designs,” these designs are chosen very
carefully, as will be demonstrated.
The complete enumeration and shortcut methods generate designs conforming to the following principles:
Minimal Overlap: Each attribute level is shown as few times as possible in a single task. If an
attribute’s number of levels is equal to the number of product concepts in a task, each level is shown
exactly once.
Level Balance: Each level of an attribute is shown approximately an equal number of times.
Orthogonality: Attribute levels are chosen independently of other attribute levels, so that each
attribute level’s effect (utility) may be measured independently of all other effects.
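These principles can be checked mechanically from a design. The following is a minimal sketch (ours, not CBC/Web’s code) that tallies level balance and within-task overlap for a toy design, where each task is a list of concepts and each concept is a list of level codes, one per attribute:

    # Check level balance and minimal overlap for a small, hypothetical design.
    from collections import Counter

    # two tasks, three concepts each, two attributes (levels coded 1..3)
    tasks = [
        [[1, 2], [2, 3], [3, 1]],
        [[2, 1], [3, 2], [1, 3]],
    ]

    freq = Counter()                  # level balance: (attribute, level) counts
    for task in tasks:
        for concept in task:
            for att, lev in enumerate(concept, start=1):
                freq[(att, lev)] += 1
    print("Level frequencies:", dict(freq))

    for t, task in enumerate(tasks, start=1):   # minimal overlap check
        for att in range(len(task[0])):
            levels = [concept[att] for concept in task]
            repeats = len(levels) - len(set(levels))
            if repeats:
                print("Task %d, attribute %d: %d repeated level(s)" % (t, att + 1, repeats))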
Recent research in design efficiency for CBC has revealed that the criterion of minimal overlap is optimal
for the efficiency of main effects, but not for measurement of interactions. Allowing some degree of
overlap may improve the precision of interactions--but at the expense of precision of main effects. To
accommodate designs for more efficient measurement of interactions, we’ve included two other design
methods in CBC/Web: Random and Balanced Overlap.
We’ll first describe each of the four design strategies. We conclude with some advice on when to use the
different design approaches.
Complete Enumeration:
The complete enumeration strategy considers all possible concepts (except those indicated as prohibited)
and chooses each one so as to produce the most nearly orthogonal design for each respondent, in terms
of main effects. The concepts within each task are also kept as different as possible (minimal overlap); if
an attribute has at least as many levels as the number of concepts in a task, then it is unlikely that any of
its levels will appear more than once in any task.
Complete enumeration may require that a very large number of concepts be evaluated to construct each
task, and this can pose a daunting processing job for the computer. The base CBC/Web software
permits up to 10 attributes, with up to 15 levels each. Suppose there were 4 concepts per task. At those
limits the number of possible concepts to be evaluated before displaying each task would be 4 x 15^10 =
2,306,601,562,500!
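The arithmetic behind that figure is easy to verify:

    # Candidate concepts evaluated per task at the base system's limits:
    # 10 attributes with up to 15 levels each, 4 concepts per task.
    print(4 * 15 ** 10)   # 2306601562500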
This is far too great a burden for even the fastest computers today, and would take a great deal of time
for the design generator to produce a design file. Generally, CBC design specifications in practice are
more manageable and speed of computation is usually not an issue. Furthermore, with CBC/Web, the
researcher generates the experimental plan prior to conducting data collection. Thus, respondents do not
need to wait for a new design to be generated--they are simply randomly assigned to receive one of the
many questionnaire versions previously generated.
The time required to compute designs under Complete Enumeration is more sensitive to the number of
attributes in your study than to the number of levels for attributes. If you have just a few attributes, you
should experience little delay in design computation, even if one of your attributes has scores of levels.
Shortcut Method:
The faster “shortcut” strategy makes a much simpler computation. It attempts to build each concept by
choosing attribute levels used least frequently in previous concepts for that respondent. Unlike complete
enumeration, which keeps track of co-occurrences of all pairs of attribute levels, the shortcut strategy
considers attributes one at a time. If two or more levels of an attribute are tied for the smallest number of
previous occurrences, a selection is made at random. With the shortcut method, as well as with complete
enumeration, an attempt is made to keep the concepts in any task as different from one another as
possible (minimal overlap). When there is more than one less-frequently-used level for any attribute, an
attempt is made to choose one that has been used least in the same task.
Designs composed using complete enumeration are of high quality, and those composed by the shortcut
method are also quite acceptable.
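As a rough illustration of the shortcut idea (not Sawtooth Software’s actual implementation, which additionally works to minimize overlap within each task), each attribute’s level can be chosen as the one used least often so far for that respondent, with ties broken at random:

    # Simplified sketch of the "shortcut" concept-building logic.
    import random
    from collections import Counter

    def shortcut_concept(levels_per_attribute, usage):
        """Pick, per attribute, a least-used level, breaking ties at random."""
        concept = []
        for att, n_levels in enumerate(levels_per_attribute):
            least = min(usage[(att, lev)] for lev in range(1, n_levels + 1))
            candidates = [lev for lev in range(1, n_levels + 1)
                          if usage[(att, lev)] == least]
            chosen = random.choice(candidates)
            usage[(att, chosen)] += 1
            concept.append(chosen)
        return concept

    usage = Counter()
    for _ in range(6):   # six concepts for one hypothetical respondent
        print(shortcut_concept([3, 3, 4], usage))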
Random Method:
The random method employs random sampling with replacement for choosing concepts. Sampling with
replacement permits level overlap within tasks. The random method permits an attribute to have identical
levels across all concepts, but it does not permit two identical concepts (on all attributes) to appear within
the same task.
The random method computes about as quickly as the shortcut method. Unless the primary goal of the
research is the study of interaction effects, we generally do not recommend using the purely Random
method.
Balanced Overlap Method:
This method is a middling position between the random and the complete enumeration strategies. It
permits roughly half as much overlap as the random method. It keeps track of the co-occurrences of all
pairs of attribute levels, but with a relaxed standard relative to the complete enumeration strategy in order
to permit level overlap within the same task. No duplicate concepts are permitted within the same task.
The balanced overlap method takes less time to compute than complete enumeration, but is slower than
the random or shortcut methods.
Note: Balanced Overlap and Complete Enumeration strategies are not available for the partial-profile or
alternative-specific designs provided by the Advanced Design Module.
CBC/Web’s Design File
CBC/Web automatically creates a design file that is a) uploaded to the server if using web-based data
collection, b) is copied to other PCs if using CAPI-based data collection, or c) is used to generate a
questionnaire file if using paper-based data collection. During the process, you are asked to specify how
many versions of the design should be written to that design file. For computer-based interviewing, we
generally recommend that you include at least 100 designs, as there are benefits to fielding many
versions of the questionnaire. Optimally, each respondent would receive his/her own design. However,
at some point, the benefit of increasing the size of the design file is not worth the decrease in
questionnaire performance caused by requiring the server to manage so many designs. For that reason,
we limit the number of designs that can be placed in the design file to 999. If you have more than 999
respondents, this means that some respondents will by necessity receive the same designs. For all
practical purposes, with so many unique designs in the total design pool, this poses no harm.
Suggestions for Randomized Designs
We suggest that one include at least some degree of overlap in the CBC design when interaction terms
are of particular interest. Overlap for an attribute can be added to a design simply by using more
concepts than attribute levels in tasks. We expect that minimal overlap strategies may be about as
effective as the random approach for estimating interactions between attributes that have fewer levels
than concepts per task.
In summary, we suggest using complete enumeration (or its sister shortcut method) for main-effects only
designs. If detecting and measuring interactions is the primary goal, then the random approach is
favored. If the goal is to estimate both main effects and interactions efficiently, then overlap should be
built into the design, at least for the attributes involved in the interaction. Using more concepts than
attribute levels with complete enumeration, or utilizing the compromise balanced overlap approach would
seem to be good alternatives.
iii.2.1 Attribute Interactions
Most conjoint methods assume “main effects only” estimation. These are simple additive models in which
the value of a product concept is equal to the simple additive sum of its parts (the attributes). The part
worths for each attribute are measured independently of all others. However, there are instances in
which the presence of interactions between attributes makes such a simple additive model less accurate
than it could be.
In a recent Sawtooth Software conference, the following explanation of an interaction was provided,
“Interactions occur when the combined effect of two attributes is different from the sum of their two main
effect utilities. For example, being stranded on a deserted island is pretty bad, say it has a utility of -40.
Attending a party hosted by cannibals is also a bad thing, say with a utility of -50. But attending a party
hosted by cannibals on a deserted island could be altogether worse, in grisly sorts of ways (utility -250).”
(“An Overview and Comparison of Design Strategies for Choice-Based Conjoint Analysis,” Keith Chrzan
and Bryan Orme, 2000 Sawtooth Software Conference.)
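In utility terms, the interaction effect is the difference between the combined effect and the sum of the two main effects. Using the numbers from the quotation:

    # Interaction = combined utility - sum of the main-effect utilities.
    u_island = -40
    u_cannibal_party = -50
    u_combined = -250

    print(u_combined - (u_island + u_cannibal_party))   # -160

The combination is 160 utility points worse than a purely additive model would predict.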
One of the strengths of aggregate-level analysis using CBC is its ability to estimate the effects of
interactions. Most conjoint methods are based on “main effects only” models that ignore the existence of
interactions. CBC, in contrast, can measure two-way interactions. Most conjoint experts seem to agree
that interactions are seldom a problem, but that they would be found more often if respondent values
were measured with more precision. We think that CBC provides a good way to produce relatively
precise results when there are few attributes and their interactions are of concern.
It can be demonstrated that interaction effects can be revealed through choice simulations using main-effect models that do not directly model interaction terms, if the source of the interactions between
attributes is principally due to differences in preference among groups or individuals. If Latent Class, ICE
or HB are used to model main-effects, the need to additionally model interactions may be lessened.
Either way, the randomized choice designs offered in CBC are appropriate for either aggregate or
disaggregate analysis; for main-effects or models that involve interactions.
iii.2.2 Number of Attributes/Levels/Tasks in CBC
With full-profile presentation, respondents see concepts that are described on all the attributes. This
places a limit on how many attributes one should include in choice-based conjoint
analysis. It is our opinion that concepts described by more than about six attributes are likely to confuse
respondents, and may cause them to respond superficially.
If you must study more attributes than respondents can comfortably deal with in a CBC interview, you
might consider some other method of conjoint analysis that does not require “full profiles.” The Advanced
Design Module for CBC/Web can create partial-profile designs, where only a subset of the attributes is
ever presented within a single choice task. Another conjoint technique, ACA (Adaptive Conjoint
Analysis), can also handle larger numbers of attributes.
Number of Attribute Levels in CBC
The base CBC/Web system lets you measure up to 15 levels for any one attribute (though the Advanced
Design Module expands that number to 100 levels per attribute). Most projects will probably involve five
or fewer levels per attribute, although attributes such as Brand (or SKUs reflecting brand x package
combinations) may easily require many more levels for the purposes of a project. For typical CBC
studies, it is usually better to have fewer levels on which attributes are described, along with approximate
balance in the number of levels across attributes. With packaged-goods and beverage research, it may
be reflective of real-world conditions to include dozens of levels to reflect brand or brand x package size
combinations, but considerably fewer levels for variations in price.
We caution against using more than about five levels to define quantitative functions such as Price or
Speed if utilities will be estimated for each discrete level (CBC’s standard approach). CBC’s market
simulator permits interpolation between levels, so many additional points along a function may be
analyzed. Defining a quantitative attribute on too many levels spreads the data thin and increases the
likelihood of getting reversals (out-of-order) utilities that are counterintuitive and problematic in
simulations.
However, if you are planning to use the Latent Class or CBC/HB software, you can fit linear terms to
quantitative attributes. In that case, and if you believe the appropriate function is approximately linear, it
may be justified to include more than five levels for quantitative attributes.
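As a sketch of what a linear specification means (values below are hypothetical; the actual fitting is done within the Latent Class or CBC/HB software), the per-level part-worth columns for a quantitative attribute are replaced by a single zero-centered column, so only one slope parameter is estimated:

    # Coding a quantitative attribute as one linear term instead of
    # discrete per-level part worths. Prices below are hypothetical.
    prices = [1500, 2000, 3000]
    mean_price = sum(prices) / len(prices)

    linear_codes = [p - mean_price for p in prices]
    print(linear_codes)   # approximately [-666.67, -166.67, 833.33]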
Number of Tasks in CBC
Because each respondent can receive a unique set of CBC tasks (questions) under randomized designs,
as few as one task may be asked per respondent while still allowing estimation of part worths. Not
surprisingly, most CBC questionnaires include multiple tasks, since it seems a waste not to collect more
information from each respondent. With multiple observations per respondent, one can model
heterogeneity in preferences, which leads to more accurate choice simulators.
In a recent meta-analysis of 21 CBC data sets, we found that multiple observations per respondent are
quite valuable, and that respondents can reliably answer up to at least 20 questions, and perhaps even
more. However, we should point out that respondents process earlier tasks differently from later
questions. Respondents pay more attention to brand in the first tasks, and focus more on price in later
questions. (See Johnson and Orme’s article entitled “How Many Questions Should You Ask in Choice-Based Conjoint?” available for downloading from the Technical Papers section of our home page:
http://www.sawtoothsoftware.com).
If Sawtooth Software’s ICE module will be used to estimate individual-level part worths, respondents
should answer as many choice tasks as they comfortably can, and preferably more than 20 (see ICE
documentation for guidelines).
iii.2.3 Specifying Fixed or Holdout Tasks
CBC/Web lets you specify a single or multiple “fixed” tasks. Fixed refers to the fact that every respondent
is shown the same choice task, with the product concepts defined in exactly the same way. You must
define your own fixed tasks; CBC/Web does not design them for you. (By default, all fixed tasks are
initialized to level “1” for each attribute.)
Most CBC/Web users will opt for a randomized design, since they are quite efficient, automatic, and
permit great flexibility in analysis. Some CBC/Web users with design expertise may choose to implement
a fixed design. A fixed design can be slightly more efficient than a randomized design in measuring the
particular effects for which it was designed.
For most CBC/Web users we recommend using randomized tasks for part worth estimation, and
specifying one or more fixed holdout tasks that are not used for utility estimation. We think it is wise to
include holdout choice tasks in conjoint interviews, even though they may not appear to be needed for the
main purpose of the study. They almost always turn out to be useful, for these reasons:
They provide a proximal indication of validity, measured by the utilities’ ability to predict
choices not used in their estimation.
They provide a check on the scaling of the utilities. If the most popular concepts are over-predicted, then the scale parameter should be reduced. If the predictions are too flat, then
the scale parameter should be increased.
They permit identification and removal of inconsistent respondents (if using ICE or HB).
They can be used for testing specific product configurations under consideration. Much value
can be added by direct measurement of these concepts.
They can be used for testing the accuracy of market simulators. They aid considerably in
comparing alternative models (logit, Latent Class, ICE, or HB) and choice simulation
strategies. (Note: if comparing the ability of different models to predict holdout choices, it is
critical to adjust the scale parameter to maximize the fit of each model prior to making
comparisons.)
If holdout concepts have been defined with differing degrees of product similarity, they can be
used for tuning the appropriate correction for product similarity in Randomized First Choice
modeling.
It’s hard to design good holdout concepts without some prior idea of respondent preferences. There’s no
point in asking people to choose among concepts where one dominates in the sense that everyone
agrees it is best. And, similarly, it’s good to avoid presenting concepts that are equally attractive, since
equal shares of preference would be predicted by a completely random simulator. If you present triples of
concepts, it’s probably best if their shares of choices are somewhere in the neighborhood of 50/30/20.
When conducting CBC studies, if you plan to do segmentation with latent class analysis, it’s wise to
consider the kinds of groups you expect to get, and to design products in holdout choice sets so that one
alternative will be much more preferred by each group. This maximizes your ability to confirm the validity
of the multi-group Latent Class simulator.
If you plan to use the Randomized First Choice simulation model, it is helpful to include holdout tasks that
reflect severe differences in product similarity. For example, in a holdout choice set featuring four product
alternatives, two products might be identically defined on all except one or two attributes. By including
products with differing similarities, the appropriate adjustment for product similarity can be tuned in the
Randomized First Choice Model.
It isn’t necessary to have many holdout sets to check the validity of your utilities, or their scaling.
However, if you want to use those choices to identify and eliminate inconsistent respondents, you need
several choice sets.
Finally, if you do have several choice sets, it’s useful to repeat at least one of them so you can obtain a
measure of the reliability of the holdout choices. Suppose your conjoint utilities are able to predict only
50% of the respondents’ holdout choices. Lacking data about reliability, you might conclude that the
conjoint exercise had been a failure. But if you were to learn that repeat holdout tasks had reliability of
only 50%, you might conclude that the conjoint utilities were doing about as well as they possibly could,
and that the problem lies in the reliability of the holdout judgments themselves.
Analyzing Holdout Concepts
If you have specified fixed holdout choice tasks within the CBC questionnaire, you can analyze the results
using the “Counts” technique offered within SMRT. Counts lets you specify which choice tasks to include
in the analysis, so you can isolate the results for a single task at a time.
Assume you had included a fixed holdout choice task (every respondent saw the same question), where
the task displayed four product concepts. Further assume that each concept reflected a unique brand.
You can tally the percent of the time each brand was chosen for that one task using the Marginals
program in the Online Data Management Module. These percentages will sum to 100%, and reflect the
share proportions for the respective product concepts.
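As a minimal sketch of that tally (our illustration; the respondent answers below are invented):

    # Tally choice shares for one fixed holdout task with four concepts.
    choices = [1, 3, 2, 1, 4, 1, 2, 3, 1, 2]   # each respondent's chosen concept

    n = len(choices)
    for concept in range(1, 5):
        share = 100.0 * choices.count(concept) / n
        print("Concept %d: %.0f%%" % (concept, share))
    # the four percentages sum to 100%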
Some researchers repeat choice tasks to achieve a measure of test-retest reliability. This type of analysis
often is done at the individual level. If you plan to analyze holdout choice tasks at the individual level, you
should export the data for analysis using another software program.
iii.2.4 Generating/Testing the CBC Design
In CBC/Web, a design refers to the sum total of the task descriptions across all respondents. The design
contains information about the combinations of attribute levels that make up the product concepts within
the tasks. The design is saved to a design file that you upload to your web server. Optimally efficient
CBC designs can estimate all part worths with optimal precision; meaning that the standard errors of the
estimates are as small as possible, given the sample size (respondents x tasks), the number of product
concepts displayed per task, and respondent preferences.
CBC/Web’s random design strategies generally result in very efficient designs. These designs are not
optimally efficient, but are nearly so. In the case of large sample sizes, a large number of questionnaire
versions in the design file, and no prohibitions, one can confidently field a questionnaire without testing
the design.
However, there are conditions that can result in inefficient designs. Sometimes, a design can be so
inefficient as to defy all attempts to compute reasonable part worth utilities. We have heard of entire data
sets with hundreds of respondents going to waste because the user neglected to test the design.
Therefore, it is imperative to test your design whenever any of the following conditions exist:
any prohibitions are included
sample size (respondents x tasks) is abnormally small
the number of questionnaire versions you plan to use is small
CBC/Web’s Test Design capability only tests the efficiency of main effects. It provides a good
approximation of the efficiency of the CBC design. Most researchers using standard CBC plans find the
Test Design capability adequate for their purposes.
If your prohibitions result in unacceptably low design efficiency under the Complete Enumeration or
Balanced Overlap Methods, you should try the Shortcut or Random design strategies. These latter two
methods are less constrained than the more rigorous former ones, and will sometimes result in higher
design efficiencies in the case of many prohibitions.
Testing the Efficiency of Your Design
When you choose Generate/Test Design from the Specify CBC Interview Parameters dialog, CBC/Web
automatically tests the design and displays the results within the results window (the results are also
saved to a file named STUDYNAMEtest.txt). CBC/Web automatically generates a data file appropriate
for design testing. Respondent answers do not factor into the method CBC/Web uses for testing the
design, so real data are not needed.
Following is a sample report, as it appears in the results window:
CBC - Web
A Priori Estimates of Standard Errors for Attribute Levels
Choice Tasks Included: All Random
Task Generation Method: Complete Enumeration
Design Seed: 1
Number of versions: 10
Total Choice Tasks: 100
Att/Lev   Freq.    Actual    Ideal     Effic
 1  1      75     (this level has been deleted)        Computer A
 1  2      75      0.2890    0.2887    0.9981          Computer B
 1  3      75      0.2841    0.2835    0.9958          Computer C
 1  4      75      0.2936    0.3062    1.0873          Computer D

 2  1     100     (this level deleted)                 486 DX
 2  2     100      0.2207    0.2182    0.9776          386 SL
 2  3     100      0.2275    0.2182    0.9200          386 SX

 3  1     100     (this level deleted)                 3 lbs
 3  2     100      0.2297    0.2182    0.9022          5 lbs
 3  3     100      0.2235    0.2182    0.9533          8 lbs

 4  1     100     (this level deleted)                 Active color
 4  2     100      0.2234    0.2182    0.9543          Passive color
 4  3     100      0.2204    0.2182    0.9806          Monochrome

 5  1     100     (this level deleted)                 200 Meg hard disk
 5  2     100      0.2199    0.2182    0.9850          80 Meg hard disk
 5  3     100      0.2203    0.2182    0.9809          40 Meg hard disk

 6  1     100     (this level deleted)                 $1,500
 6  2     100      0.2237    0.2182    0.9516          $2,000
 6  3     100      0.2222    0.2182    0.9648          $3,000
For each level, the number of times it occurs within the design is counted and provided under the column
titled “Freq.”
For each attribute and level, an approximation is made of the relative standard error of each main effect.
Test Design uses ordinary least squares (OLS) rather than multinomial logit for this purpose, and it uses
only the information about the design of the choice tasks, rather than respondents’ answers. (A
multinomial logit model is used in CBC’s analysis modules.) It gives relative standard error estimates
similar to (but not identical to) those of multinomial logit. What is important for testing the design is not a
precise estimate of each standard error, but rather the pattern of their relative magnitudes with respect to
one another.
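Conceptually, an OLS approximation of this sort needs only the dummy-coded design matrix X: the standard errors of the estimated effects are proportional to the square roots of the diagonal elements of (X'X)^-1. The following is a minimal sketch under that assumption, using an invented design matrix rather than a real CBC design:

    # Relative standard errors of effects from a design matrix alone.
    import numpy as np

    # rows = concepts shown; columns = dummy-coded levels (one level dropped)
    X = np.array([[1, 0, 1],
                  [0, 1, 0],
                  [1, 1, 1],
                  [0, 0, 0],
                  [1, 0, 0],
                  [0, 1, 1]], dtype=float)

    rel_se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)))
    print(rel_se)   # the pattern of relative magnitudes is what matters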
The Sample Output
We’ll describe the output using fragments of the file, pointing out the relevant parts in turn.
Each line is labeled with the attribute and level in the first columns:
Att/Lev   Freq.    Actual    Ideal     Effic
 1  1      75     (this level has been deleted)        Computer A
 1  2      75      0.2890    0.2887    0.9981          Computer B
The number of times each level occurs in the design is displayed under the column labeled “Freq.”
Att/Lev   Freq.    Actual    Ideal     Effic
 1  1      75     (this level has been deleted)        Computer A
 1  2      75      0.2890    0.2887    0.9981          Computer B
For estimation, it is necessary to omit one level from each attribute. The first level of each attribute is
automatically deleted from this analysis:
Att/Lev   Freq.    Actual    Ideal     Effic
 1  1      75     (this level has been deleted)        Computer A
 1  2      75      0.2890    0.2887    0.9981          Computer B
The column labeled “Actual” gives estimated standard errors for the data file analyzed:
Att/Lev   Freq.    Actual    Ideal     Effic
 1  1      75     (this level has been deleted)        Computer A
 1  2      75      0.2890    0.2887    0.9981          Computer B
 1  3      75      0.2841    0.2835    0.9958          Computer C
 1  4      75      0.2936    0.3062    1.0873          Computer D
The column labeled “Ideal” gives an estimate of what those standard errors would be if the design were
precisely orthogonal and had the same number of observations:
Att/Lev   Freq.    Actual    Ideal     Effic
 1  1      75     (this level has been deleted)        Computer A
 1  2      75      0.2890    0.2887    0.9981          Computer B
 1  3      75      0.2841    0.2835    0.9958          Computer C
 1  4      75      0.2936    0.3062    1.0873          Computer D
The column labeled “Effic” gives the relative efficiency of this design, compared to the hypothetical
orthogonal design (it is the square of their ratio):
Att/Lev   Freq.    Actual    Ideal     Effic
 1  1      75     (this level has been deleted)        Computer A
 1  2      75      0.2890    0.2887    0.9981          Computer B
 1  3      75      0.2841    0.2835    0.9958          Computer C
 1  4      75      0.2936    0.3062    1.0873          Computer D
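Using the row for attribute 1, level 2, the Effic value can be reproduced directly (small differences from the printed 0.9981 reflect rounding of the displayed standard errors):

    # Efficiency = (ideal standard error / actual standard error) squared.
    actual, ideal = 0.2890, 0.2887
    print((ideal / actual) ** 2)   # about 0.998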
When we consider the entire “Effic” column, we see that the randomized design had a median efficiency
of about 97 percent, relative to a hypothetical orthogonal design. The estimates of standard errors for
orthogonal designs are only approximate, and with a very small data file such as this there can be quite a
lot of variability in estimation.
Notice that the standard error estimated for attribute 1, level 4 is actually smaller than the value estimated
for a hypothetical orthogonal design:
Att/Lev   Freq.    Actual    Ideal     Effic
 1  1      75     (this level has been deleted)        Computer A
 1  2      75      0.2890    0.2887    0.9981          Computer B
 1  3      75      0.2841    0.2835    0.9958          Computer C
 1  4      75      0.2936    0.3062    1.0873          Computer D
Anomalies such as this are likely to occur when using small samples of test respondents, and shouldn’t
be of concern.
It is important to use the Test Design option, since if too many prohibitions are included it is possible to
develop designs that do not permit estimation of desired effects. When that occurs, the estimated
standard errors for those effects will be infinite and their estimated efficiencies will be zero. Your attention
is called to such occurrences by the presence of asterisks instead of numbers, and/or by error messages
signaling that additional thought is needed about what can be prohibited from occurring. If you see a
warning stating that your design is deficient or if you see asterisks listed for the standard errors,
your design is deficient.
Especially if using few designs, you will find that the quality of the design is affected by the design seed.
You may want to try different design seeds to obtain slightly better designs.
Advanced Design Testing Methods
Aside from the Test Design diagnostics available in CBC/Web software, there are more advanced means
for testing CBC designs, especially for testing the efficiency of interaction terms. Our Test Design
approach does not take into account that the design efficiency for CBC studies depends on how the
concepts are grouped in sets. The level contrasts within sets are a critical aspect of design efficiency.
Furthermore, the utility balance among the concepts within those sets affects overall design efficiency,
and thus respondents’ preferences need to be known (or reasonably assumed) to fully assess the quality
of a design.
Advanced users may find it useful to create a “dummy” respondent answer file (using CBC/Web’s paper-and-pencil process) to build simulated data sets and estimate the logit effects, including any interactions.
By holding sample size constant and examining the size of the standard errors for different designs (e.g.
with and without prohibitions, with fewer or more versions, etc.) one can assess the relative quality of
different designs and ensure that part worths can be estimated with reasonable precision.
In a paper available in the Technical Papers library on our website entitled “An Overview and Comparison
of Design Strategies for Choice-Based Conjoint Analysis,” computing the D-efficiency of a design is
discussed. The output from our CBC logit analysis module can be used to estimate D-efficiency,
following the instructions provided in the Appendix of that white paper. D-efficiency takes level contrasts
within sets into account, as well as utility balance.
iii.3.1 Specify CBC Interview Parameters
(General Tab)
Please see the following topics for more in-depth background information regarding CBC study designs
and the controls on this tab:
CBC Questionnaires and Design Strategy
Number of Attributes/Levels/Tasks in CBC
Specifying Fixed or Holdout Tasks
Prior to adding CBC choice tasks to your questionnaire, you must create your list of attributes and levels
under Compose | Conjoint Settings | Add/Edit Attributes...
Number of Random Choice Tasks
“Random” choice tasks are those designed automatically by CBC/Web software for part worth
estimation. “Random” refers to the fact that respondents are assigned at random to receive different
versions of carefully designed choice questions. Most CBC/Web projects you do will probably include
between 6 and 25 random choice tasks (questions).
By specifying the number of random choice tasks in this dialog, CBC/Web adds this number of
questions to your questionnaire when you click the Generate/Test Design button, or when you return
to the Write Questionnaire dialog and click to add CBC - conjoint questions. Once you have added
CBC questions to your survey, they appear automatically in the List of Questions window named (for
Random questions) CBCRAN1 to CBCRANn and (for Fixed questions) CBCFIX1 to CBCFIXn.
When you preview these questions from this dialog, the first task of the first questionnaire version is
displayed.
Number of Fixed Choice Tasks
“Fixed” choice tasks are CBC questions that are asked the same way (fixed) for each respondent.
CBC/Web automatically adds fixed choice questions to your CBC/Web questionnaire, and they
appear automatically in the List of Questions window named CBCFIX1 to CBCFIXn. (Fixed tasks are
optional, though we highly recommend them.) As a default, fixed tasks are initialized to display level
1 for all attributes. You should, of course, edit your fixed tasks to reflect your desired combinations of
levels.
You define the combinations of levels that make up each product concept in a fixed task when you
edit the Fixed tasks individually from the Write Questionnaire dialog. These fixed tasks are
presented in exactly the same way for every respondent. These choice tasks look just like the other
random choice tasks in the survey to the respondent. The question is phrased in the same way, the
same number of product concepts are shown per task, and the same number of attributes are
represented.
You will most likely use the Fixed choice tasks option to specify Holdout Choice Tasks. Holdout
choice tasks are often used to check the validity of the utilities estimated from the randomized choice
tasks. They are thus “held out” of the utility estimation. Holdout choice tasks are important to include
in most every CBC study. They provide a “sanity check” to ensure that the model is working properly.
They can help you determine which of many methods of utility estimation (logit, Latent Class, HB)
seems to work best for your study. They can also help you identify inconsistent respondents.
We generally suggest you include between 2 and 4 fixed tasks in your study. We suggest you spread
them evenly throughout the questionnaire. For example, if you are including 12 random tasks and 3
fixed holdout tasks (15 total choice tasks), you might place the holdout tasks in the 5th, 9th, and 13th
positions. Since people’s utilities can change in the course of a survey due to learning effects, it is
probably not wise to bunch all of your holdout choice questions together. You can move them around
within the list of questions using the drag-and-drop feature or the “up arrow” and “down arrow” icons.
Number of Concepts per Choice Task
Concepts refer to the product alternatives shown within a choice task (question).
From a statistical viewpoint, choice tasks are not a very efficient way to learn about preferences.
Respondents evaluate multiple concepts, but only tell us about the one they prefer. We don’t learn
how strong that preference is relative to the other product concepts. Showing more product concepts
per screen increases the information content of each task. Recent research has shown that
respondents are quite efficient at processing information about many concepts. It takes respondents
considerably less than twice as long to answer choice tasks with four concepts as with two concepts.
In general, we recommend showing around three to five concepts per task (particularly if your
attributes also have about three to five levels each). However, there may be instances (e.g. a
beverage study with 25 brands plus price), where showing more product concepts per screen is
appropriate (does not overwhelm respondents) and more realistically portrays the actual buying
situation. With the Advanced Design Module for CBC/Web, you can display up to 100 product
graphics on the screen, presented as if they were resting on “store shelves.”
Most CBC/Web studies you conduct will probably include between 3 and 5 product concepts per task.
Number of Columns
This controls the layout of the product concepts on the screen. If you plan to show a total of three
product concepts (without a “None” alternative), specifying three columns will arrange those concepts
side-by-side horizontally. Specifying one column will arrange the concepts one on top of another in a
vertical stack. If you are showing six total product concepts and specify three columns, this will result
in two rows of three product concepts.
The “None” Option
Choice-based conjoint questions can be designed to have a “None” option, sometimes referred to as
the constant alternative. It is argued that the None option in CBC tasks better mimics the real world,
since buyers are not required to choose products that don’t satisfy them. The None option can also
be used to reflect a status quo choice, such as “I’d continue to use my current long-distance service
provider.”
CBC/Web gives you the option of including a None option in the questionnaire. If the None option is
present, a separate logit utility weight is computed for the None parameter (if using logit analysis).
The more likely respondents are to choose None relative to product concepts, the higher the utility of
the None option. The None parameter can be used in market simulations to estimate the proportion
of respondents that would not choose any of the simulated product concepts.
In general, we recommend including None in CBC questionnaires, but paying less attention to (or
completely ignoring) it in market simulations. The propensity to choose None can be a telling
measure when comparing groups or individuals. We suggest that percent of Nones in Counts or
simulations be viewed in a relative rather than an absolute sense.
The use of the “None” concept in partial-profile CBC studies (only supported by the Advanced Design
Module for CBC/Web) is problematic. The None weight varies significantly depending on how many
attributes are displayed in the partial-profile task.
Justification for the None concept refers to vertical justification of the None text within its concept
area.
Generate/Test Design
When you click the Generate/Test Design button, CBC/Web automatically creates a design file
(STUDYNAMEdsgn.cgi) using your current CBC settings that you upload to the server (for web-based
data collection), or install on PCs (for CAPI-based collection), or that is used in generating
questionnaire files for paper-and-pencil collection. It also produces a test design report, letting you
know whether the design can efficiently estimate the main effect utilities pertinent to your study. You
should pay close attention to the design report to make sure the design efficiencies for the attribute
levels in your study are acceptable and that the report does not state “Design Deficient” or have any
asterisks in place of the estimated standard errors.
Please note that in some cases it may take a long time to generate a design if using the Complete
Enumeration or Balanced Overlap methods. A typical study with few attributes and levels will face
little delay (a few seconds) to generate hundreds of designs. CBC/Web gives an estimate of the time
remaining to generate the design. For testing purposes you may wish to generate few designs for the
sake of speed, but once you are ready to post the survey to the Web (or use CAPI data collection)
you may request that hundreds of designs be generated and saved to the design file
(STUDYNAMEdsgn.cgi).
Conditional Relationships…
The conditional relationships button lets you associate conditional prices or graphics with one or more
attributes in your study.
Conditional Prices: It is common in pricing research studies to customize the price ranges for
different brands or product classes. Conditional pricing tables accomplish this without the
damaging effects of prohibitions. Although conditional pricing is robust from a design standpoint,
the analysis and interpretation of results is more challenging than with typical CBC studies.
Conditional Graphics: Sometimes, you may want to associate a graphic with an attribute or
multiple attributes. For example, if a combination of brand and form factor makes up a graphic,
one can specify the two attributes separately and associate a conditional graphic with their
combined levels. This permits estimation of separate effects for brand and form factor.
iii.3.2 “Shelf-Facing” Display
Display Tab (Advanced Design Module Only)
Display Type
If you own the Advanced Design Module for CBC/Web, you can use the “Shelf Facing” Display option.
The shelf display option is particularly useful for beverage and packaged goods research. Here is an
example of a shelf facing display:
To achieve this look, the user supplies graphics in which each product is situated on a shelf “segment,”
such as:
When the graphics are placed side-by-side (with no space between them) and CBC/Web by default
automatically places a border (usually black for most browsers) underneath the graphic, the resulting
display looks like a continuous shelf.
Note: when using the “shelf display” option, you will probably not want the brands to “shuffle” around
randomly from task to task, but will want them always located in the same position. Make sure to select
Sort Concepts Using the Natural Order for a Specific Attribute option on the Design tab.
Shelf Facing Settings
To create a shelf display look, you must tell CBC/Web how many total rows (shelves) are to be used, and
how many product concepts to display per row. For example, if you have 45 total products to display, you
might choose to display them on 5 rows (shelves). The number of products shown per row might be
something like:
Row #    Number of Concepts on Row
  1                 7
  2                 8
  3                10
  4                 9
  5                11
Note that the widths of the shelves are entirely controlled by the width of the graphics used in the product
concepts, together with the number of graphics you have chosen to display per row. CBC/Web doesn’t
re-size your graphics--they are always displayed in their natural size.
Shelf Facing Format Settings
CBC/Web controls the way the HTML tables are specified for shelf facing display. If you want to specify
your own formatting, using CSS (Cascading Style Sheets), you can un-check the Use Shelf Facing Style
button. (We have provided CSS “hooks” within the HTML for shelf facing display.)
There are a number of settings that control how the “shelf display” functions work. The following picture
illustrates the area each setting affects:
For Top Attribute Background Color, we suggest you pick the same color as is used in the background of
your graphics (Note: this color will only show on the screen if the graphics you develop have different
heights, or if you chose a transparent background color in your graphics). It is usually a good idea to pick
the same background color as is used for the Task Background Color, which is selected on the Format
tab.
Separation between Shelves adds a blank line between the shelves.
None Option Position in Shelf Facing Tasks
In typical CBC task displays, the “None” option is always shown as the last concept in each task. You
can also use that option with shelf facing. However, shelf facing display permits more flexibility. You can
move the “none” text and the radio button associated with the text into the CBC tasks’ header or footer.
This is not the “global” HTML header, but is the regular text header or footer that you edit under Write
Questionnaire | Edit.
To place the “None” concept in the header or footer of the task, type the system variable
%%CBC_NONE% in either the header or footer question text boxes of the CBC choice task, accessed
through Write Questionnaire | Edit. You may want to supply some HTML to format this “None” concept
exactly as you want it.
iii.3.3 Specify CBC Interview Parameters
(Format Tab)
CBC Question Colors
Alternating Concept Colors
A “concept” refers to a product or service alternative shown on the screen. It is often useful to use
different background colors to unify the product concepts and distinguish between them. The colors
can be directly specified as their RRGGBB color codes, or by using the Change... button, which
provides a color picker.
Task Background Color
This color controls the color of the background used in the CBC task header, footer, and the area
directly behind the product concepts.
Format Options
Task Alignment on Page
This controls the position of the entire choice task on the screen: whether left, center, or right justified.
Task Width (in Pixels)
This controls how wide the choice tasks appear on the screen. We generally suggest you use no
wider than about 600 pixels, because some people may be using 640 x 480 pixel screen resolution.
If you know that the screens used to view the questionnaire will all be set to higher resolutions, you
may decide to enter a larger value.
Spacing between Levels (Cell Padding in Pixels)
This controls how much white space is added around the attribute level text, separating the
attribute text from the immediate concept border.
Concept and Level Border (in Pixels)
You can add grid lines between all attribute levels in your product concepts by specifying a pixel
width. A typical width to specify would be between 2 and 5 pixels.
Add Separation between Concepts
You can create more space between your product concepts by clicking this box. You cannot control
the amount of spacing added, as it is preset.
Generate/Test Design
When you click the Generate/Test Design button, CBC/Web automatically creates a design file
(STUDYNAMEdsgn.cgi) using your current CBC settings that you upload to the server (for web-based
data collection), or install on PCs (for CAPI-based collection), or that is used in generating
questionnaire files for paper-and-pencil collection. It also produces a test design report, letting you
know whether the design can efficiently estimate the main effect utilities pertinent to your study. You
should pay close attention to the design report to make sure the design efficiencies for the attribute
levels in your study are acceptable and that the report does not state “Design Deficient” or have any
asterisks in place of the estimated standard errors.
Please note that in some cases it may take a long time to generate a design if using the Complete
Enumeration or Balanced Overlap methods. A typical study with few attributes and levels will face
little delay (a few seconds) to generate hundreds of designs. CBC/Web gives an estimate of the time
remaining to generate the design. For testing purposes you may wish to generate few designs for the
sake of speed, but once you are ready to post the survey to the Web (or use CAPI data collection)
you may request that hundreds of designs be generated and saved to the design file
(STUDYNAMEdsgn.cgi).
Conditional Relationships…
The conditional relationships button lets you associate conditional prices or graphics with one or more
attributes in your study.
Conditional Prices: It is common in pricing research studies to customize the price ranges for
different brands or product classes. Conditional pricing tables accomplish this without the
damaging effects of prohibitions. Although conditional pricing is robust from a design standpoint,
the analysis and interpretation of results is more challenging than with typical CBC studies.
Conditional Graphics: Sometimes, you may want to associate a graphic with an attribute or
multiple attributes. For example, if a combination of brand and form factor makes up a graphic,
one can specify the two attributes separately and associate a conditional graphic with their
combined levels. This permits estimation of separate effects for brand and form factor.
iii.3.4 Specify CBC Interview Parameters
(Design Tab)
Experimental Design Creation Settings
Random Task Generation Method
This section lets you define what design method CBC should employ in determining the combinations
of attribute levels to be shown in the random choice tasks for your survey. We encourage you to read
the section on CBC Questionnaires and Design Strategy to learn more about CBC’s design
strategies. For standard CBC studies, we generally suggest using either the Complete Enumeration
or Balanced Overlap design strategies. (Note: these methods are computationally intensive, and for
very large numbers of attributes and levels may require a long time to generate a design file.)
When using Advanced Design Module designs such as alternative-specific and partial profile designs,
only the Shortcut and Random methods are available. We strongly encourage you to use the
Shortcut method for such designs.
Number of Questionnaire Versions
CBC/Web automatically creates a design file that is a) uploaded to the server if using web-based data
collection, b) is copied to other PCs if using CAPI-based data collection, or c) is used to generate a
questionnaire file if using paper-based data collection. During the process, you are asked to specify
how many versions of the design should be written to that design file. For computer-based
interviewing, we generally recommend that you include at least 100 designs, as there are benefits to
fielding many versions of the questionnaire. Optimally, each respondent would receive his/her own
design. However, at some point, the benefit of increasing the size of the design file is not worth the
decrease in questionnaire performance caused by requiring the server to manage so many designs.
For that reason, we limit the number of designs that can be placed in the design file to 999. If you have
more than 999 respondents, this means that some respondents will by necessity receive the same
designs. For all practical purposes, with so many unique designs in the total design pool, this poses no
harm.
Design Seed
This is the seed value used in the random number generator for the design algorithm. By using a
different starting seed, a different design will result. With very many versions of the design, the
overall design efficiency should be nearly identical. With only a few versions of the design, using a
different starting seed can have a meaningful effect on the overall efficiency.
Randomize Attribute Position within Concepts
Randomize Attribute Order
Randomize Attribute Order specifies whether to present the attributes in random presentation order
within a concept. If this is selected, the attribute list is randomized once per respondent, and all tasks
within a respondent’s interview will display the attributes in that given order. This can be useful to
control order effects across respondents.
First Randomized Attribute
The first attribute in the range of attributes to be shown in random presentation order. Specify a “1” if
you wish all attributes to be randomized. If, for example, you want the first and second attributes
always to appear as the first two attributes in a product concept, specify a “2.”
Last Randomized Attribute
The last attribute in the range of attributes to be shown in random presentation order. Specify the last
attribute number if you wish all attributes to be randomized. If, for example, you had a total of five
attributes in your study and you wanted the fourth and fifth attributes always to appear as the last two
attributes in a product concept, specify a “3.”
Concept Sorting within Task
Concept sorting refers to the order in which the product concepts are displayed in a task. For
example, consider a choice task with three product alternatives. The default behavior in CBC/Web
software is to randomize the order in which these product concepts are displayed. Thus, a given
brand may appear in the first concept in one task, but in the last concept position in the next task.
This is Full Randomization.
However, researchers do not always want the levels appearing in random concept positions within
the task. There are two options for researchers to control concept ordering:
Sort Concepts Using a Randomized Level Order for a Specific Attribute. With this option,
the researcher selects a specific attribute that is used to control the concept presentation order.
For example, the researcher may choose the “brand” attribute. The levels for brand are
randomized once per respondent, and then the product concepts are displayed in that new order
for all choice tasks for that respondent. For example, respondent #1 may see brands in the order
2, 3, 1 for the three concepts in the task (left, middle, right), for all tasks. Then, respondent #2
may see the brands in the order 3, 2, 1 for the concepts in his questionnaire.
This strategy helps control order bias, without making respondents reorient themselves to new
positions for, say, brands in each new task.
Sort Concepts Using the Natural Level Order for a Specific Attribute. With this option, the
concepts are shown in the natural level order for a selected attribute, such as “brand.” Brand 1
will always appear in the first concept, brand 2 in the second concept, etc.--assuming that the
researcher uses a Complete Enumeration or Shortcut design strategy (which enforces minimal
overlap), has no prohibitions, and shows as many concepts on the screen as there are brand
levels in the study.
This strategy is often used in “shelf facing” displays (Advanced Design Module only), or in cases
in which the researcher doesn’t want to make respondents reorient themselves to new positions
for, say, brands in each new task. However, using a fixed concept order may introduce order
biases.
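As an illustration of the first option above (randomized level order), the following minimal sketch (ours, not CBC/Web’s algorithm) shuffles the brand levels once per respondent and then sorts every task’s concepts by that fixed order:

    # Sort concepts by a per-respondent randomized brand order.
    import random

    brand_levels = [1, 2, 3]
    respondent_order = brand_levels[:]
    random.shuffle(respondent_order)          # e.g. [2, 3, 1], fixed per respondent
    rank = {lev: i for i, lev in enumerate(respondent_order)}

    # hypothetical task: each concept records the brand level it displays
    task = [{"brand": 3, "price": 2}, {"brand": 1, "price": 1}, {"brand": 2, "price": 3}]
    task.sort(key=lambda concept: rank[concept["brand"]])
    print([c["brand"] for c in task])         # concepts now follow the respondent's order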
Test Design Report Settings
Include Two-Way Frequencies in Test Design Report
Check this to include an additional analysis within the Test Design Report of joint frequencies of
levels. These joint frequencies reflect the number of times each attribute level appeared with each
other attribute level within the same product concept across all concepts and tasks of your design.
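As a sketch of what those joint frequencies are (our illustration, computed over a few invented concepts):

    # Joint (two-way) frequencies: how often each pair of attribute levels
    # appears together within the same concept.
    from collections import Counter
    from itertools import combinations

    concepts = [[1, 2, 1], [2, 1, 3], [1, 3, 2], [3, 2, 1]]   # level codes per attribute

    joint = Counter()
    for concept in concepts:
        for (a1, l1), (a2, l2) in combinations(enumerate(concept, start=1), 2):
            joint[((a1, l1), (a2, l2))] += 1
    print(joint)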
Generate/Test Design
When you click the Generate/Test Design button, CBC/Web automatically creates a design file
(STUDYNAMEdsgn.cgi) using your current CBC settings that you upload to the server (for web-based
data collection), or install on PCs (for CAPI-based collection), or that is used in generating
questionnaire files for paper-and-pencil collection. It also produces a test design report, letting you
know whether the design can efficiently estimate the main effect utilities pertinent to your study. You
should pay close attention to the design report to make sure the design efficiencies for the attribute
levels in your study are acceptable and that the report does not state “Design Deficient” or have any
asterisks in place of the estimated standard errors.
Please note that in some cases it may take a long time to generate a design if using the Complete
Enumeration or Balanced Overlap methods. A typical study with few attributes and levels will face
little delay (a few seconds) to generate hundreds of designs. CBC/Web gives an estimate of the time
remaining to generate the design. For testing purposes you may wish to generate few designs for the
sake of speed, but once you are ready to post the survey to the Web (or use CAPI data collection)
you may request that hundreds of designs be generated and saved to the design file
(STUDYNAMEdsgn.cgi).
Conditional Relationships…
The conditional relationships button lets you associate conditional prices or graphics with one or more
attributes in your study.
Conditional Prices: It is common in pricing research studies to customize the price ranges for
different brands or product classes. Conditional pricing tables accomplish this without the
damaging effects of prohibitions. Although conditional pricing is robust from a design standpoint,
the analysis and interpretation of results is more challenging than with typical CBC studies.
Conditional Graphics: Sometimes, you may want to associate a graphic with an attribute or
multiple attributes. For example, if a combination of brand and form factor makes up a graphic,
one can specify the two attributes separately and associate a conditional graphic with their
combined levels. This permits estimation of separate effects for brand and form factor.
iii.3.5 Specify CBC Interview Parameters
(Advanced Design Module Tab)
If you own the Advanced Design Module for CBC/Web, this tab is available to you. We strongly
encourage you to read the following sections related to Advanced Design Module designs.
Alternative-Specific Designs
Partial-Profile Designs
Traditional Full-Profile CBC Design
By default, Traditional Full-Profile CBC Design is chosen. Full-Profile refers to a conjoint analysis design
in which a level from every attribute in your study is used to define a product concept. If you have six
attributes in your design, each product concept will be described using six level descriptors.
Alternative-Specific CBC Design
Sometimes, some attributes just don’t seem to apply to some brands, drug treatments, or modes of
transportation. For example, consider ways to get to work. The attributes describing buses are quite
different from those describing trains, cars, or walking. Alternative-Specific designs let you specify that
certain attributes are conditional upon different modes of travel. You can also use these specialized
designs to define customized price ranges for different brands, which accomplishes nearly the same
thing as the more commonly used Conditional Pricing approach.
Align Common Attributes on Same Row
Use this option to horizontally align any common attributes within the choice task. Note that when
common attributes are aligned, their position within the attribute list you defined under Compose |
Conjoint Settings | Add/Edit Attributes affects the layout of the choice task.
Partial-Profile CBC Design
It is well known that asking respondents to evaluate too many attributes can result in information overload
and confusion in full-profile conjoint techniques such as CBC. Some researchers have advocated a new
kind of CBC design that randomly shows a subset of the attributes within each choice question. These
are called Partial-Profile CBC designs. For example, a study might include a total of 10 attributes, but the
respondent only sees four different attributes within each choice task. The next choice task shows a
different random subset of four attributes, so that after a number of choice tasks each respondent has
made evaluations involving all of the attributes.
Rotate Attributes into Concepts Starting with Attribute ___ and Ending with Attribute ___
This control lets you force some attributes to be present in every partial-profile choice task. For
example, assume a CBC study with 10 attributes, where the tenth attribute was price. Further
assume the researcher wanted to always have price represented in the choice task, but the others to
rotate in sets of four attributes. You would place a "1" and a "9" in the blanks: Rotate Attributes into
Concepts Starting with Attribute "1" and Ending with Attribute "9". You would also specify a 5 for
Number of Attributes Shown in each Concept.
If one of your attributes had significantly more levels than the other attributes, you can potentially
achieve better overall design efficiency by forcing that attribute to be present in the choice task. Care
should be taken that this does not somehow bias respondents’ consideration of the attributes. At this
point, the biases potentially caused by holding some attributes present in all partial profile choice
tasks are not well known.
Number of Attributes Shown in Each Concept
Determines how many attributes should be shown in a partial-profile concept. We generally suggest
no more than about 5 attributes be shown at a time in partial-profile CBC studies. Recent research
by Patterson and Chrzan suggests that between 2 and 4 attributes may be optimal.
Generate/Test Design
When you click the Generate/Test Design button, CBC/Web automatically creates a design file
(STUDYNAMEdsgn.cgi) using your current CBC settings. You upload this file to the server (for web-based
data collection), install it on PCs (for CAPI-based collection), or use it to generate questionnaire
files for paper-and-pencil collection. It also produces a test design report, letting you
know whether the design can efficiently estimate the main effect utilities pertinent to your study. You
should pay close attention to the design report to make sure the design efficiencies for the attribute
levels in your study are acceptable and that the report does not state “Design Deficient” or have any
asterisks in place of the estimated standard errors.
Please note that in some cases it may take a long time to generate a design if using the Complete
Enumeration or Balanced Overlap methods. A typical study with few attributes and levels takes
only a few seconds to generate hundreds of designs. CBC/Web gives an estimate of the time
remaining to generate the design. For testing purposes you may wish to generate few designs for the
sake of speed, but once you are ready to post the survey to the Web (or use CAPI data collection)
you may request that hundreds of designs be generated and saved to the design file
(STUDYNAMEdsgn.cgi).
Conditional Relationships…
The conditional relationships button lets you associate conditional prices or graphics with one or more
attributes in your study.
Conditional Prices: It is common in pricing research studies to customize the price ranges for
different brands or product classes. Conditional pricing tables accomplish this without the
damaging effects of prohibitions. Although conditional pricing is robust from a design standpoint,
the analysis and interpretation of results is more challenging than with typical CBC studies.
Conditional Graphics: Sometimes, you may want to associate a graphic with an attribute or
multiple attributes. For example, if a combination of brand and form factor makes up a graphic,
you can specify the two attributes separately and associate a conditional graphic with their
combined levels. This permits estimation of separate effects for brand and form factor.
iii.3.6 CBC Random and Fixed Questions
The CBC questions display product alternatives and ask respondents to choose among them:
“Random” choice questions (tasks) and “Fixed” tasks share the same template, and therefore have a
consistent look and feel for all CBC-type questions. The Header and Footer, and any other settings you
choose within this dialog are reflected across all CBC questions.
Header
Used to specify the text to appear above the product concepts.
Level Text
Controls the formatting of the attribute level text displayed within product concepts.
Attribute Labels
If you check Include Attribute Labels in Tasks, the labels you typed for the attributes are placed within
the choice task on the left-hand side of the concepts. Some researchers prefer to specify the
attribute labels alongside the task so that the level descriptions can be made very short. For
example, the attribute label may state "Miles Per Gallon" and the level labels "15", "20" and "25".
Fixed Task Designs...
This button only appears when editing an individual Fixed CBC task. It is used to access the CBC
Fixed Choice Task Settings dialog, where you specify the levels that are to appear in each separate
fixed choice task.
Footer
Used to specify the text to appear below the product concepts.
CBC Settings...
Accesses the Specify CBC Interview Parameters dialog.
Edit Attributes...
Accesses the Specify Attributes and Levels dialog.
Preview
Displays the current question in a browser window. When you preview CBC Random tasks, the task
number matching the task number you are editing is shown from the first questionnaire version in
your design.
When viewing Fixed CBC tasks, the attribute levels you specified for the task are displayed.
iii.3.7 CBC Fixed Choice Task Settings
CBC Fixed tasks are those that are usually used in CBC/Web questionnaires as “Holdout Choice Tasks”.
“Random” choice tasks are used for estimating utilities, whereas Fixed tasks are usually “held out”
(omitted) from that estimation. The part worth utilities developed from the Random tasks can be placed
into a market simulator (SMRT) to predict respondents’ choices to the Fixed tasks. The degree to which
the model developed from the Random tasks can predict the holdout tasks provides an indication of the
quality of the model and the internal reliability of the data.
As the study author, you are required to specify the composition of each of the Fixed tasks by choosing
the attribute levels that make up each product concept. You do this using the drop-down items within this
dialog. It is important that you have a strategy for creating these tasks. Please see the section on
Holdout Choice Tasks within this documentation.
You can toggle between all fixed tasks within your study using the Fixed Choice Task drop down box in
the upper-left corner of this dialog.
iii.4.1 Alternative-Specific Designs
(The capabilities described in this section are only available if you own the CBC/Web Advanced Design
Module.)
Some CBC projects require more flexible designs than the standard approach used with our CBC system.
With standard CBC, if there are six attributes in the study, all six appear within each product concept.
Researchers have employed a specialized type of CBC design wherein some or all product alternatives
have their own unique sets of attributes. Conditional Pricing is one example of such a design, where
each brand might have its own set of prices. But, designs can be much more complex than that. For
example, consider modes of transportation: cars versus buses. Each option has its own set of features
(beyond just price) uniquely associated with that mode of transportation.
Consider the following attributes for cars and buses:
Car:
Parking fee $5.00/day
Parking fee $8.00/day
Parking fee $10.00/day

Bus:
Picks up every 20 min.
Picks up every 15 min.
Picks up every 10 min.
Picks up every 5 min.
25 cents per one-way trip
50 cents per one-way trip
75 cents per one-way trip
$1.00 per one-way trip
Assume we wanted to measure three other (constant) alternatives: walking, biking, or “I’d choose another
way to get to work.”
The CBC question might look like the following:
If you lived 1 mile away from your workplace in downtown New York and these were your choices
for getting there, which would you choose?
Bike

Drive my car
Parking fee $8.00/day

Walk

Ride the bus
Picks up every 10 min.
$1.00 per one-way trip

I'd choose another way to get to work
To implement this design using CBC/Web’s Advanced Design Module, you define the attributes as
follows:
Attribute 1:
Walk
Bike
Ride the bus
Drive my car
Attribute 2:
Picks up every 20 min.
Picks up every 15 min.
Picks up every 10 min.
Picks up every 5 min.
Attribute 3:
25 cents per one-way trip
50 cents per one-way trip
75 cents per one-way trip
$1.00 per one-way trip
Attribute 4:
Parking fee $5.00/day
Parking fee $8.00/day
Parking fee $10.00/day
Before proceeding, it is useful to introduce some terminology to describe the different roles attributes can
play in alternative-specific designs.
Primary attribute: an attribute whose levels appear in every product concept and upon which
conditional attributes (described below) depend. In the example above, attribute 1 is the primary
attribute.
Conditional attribute: attributes only displayed with a particular level or levels of the primary attribute.
Attributes 2 through 4 are conditional attributes.
Common attribute: attributes displayed with all levels of the primary attribute. (The example above
doesn’t include a common attribute.) In traditional CBC designs, all attributes are common.
Specifying an Alternative-Specific Design
Choose Compose | Conjoint Settings | CBC Settings... to bring up the Specify CBC Interview
Parameters dialog. From that dialog, click the Advanced Design Module Tab and then check the
Alternative-Specific CBC Design box.
To create the “None” alternative, check the Include None Option box under the General tab and specify
the text: “I’d choose another way to get to work.” Additional constant alternatives are defined as levels
within the primary attribute.
To complete the design, set the following attribute level prohibitions (make sure the Alternative-Specific
CBC Design box is checked prior to setting these prohibitions):
Attribute 1, Level 1   with   Attributes 2 through 4 (all levels)
Attribute 1, Level 2   with   Attributes 2 through 4 (all levels)
Attribute 1, Level 3   with   Attribute 4 (all levels)
Attribute 1, Level 4   with   Attribute 2 (all levels)
Attribute 1, Level 4   with   Attribute 3 (all levels)
Additional Guidelines for Alternative-Specific Designs
There are two random design strategies available for alternative-specific designs: Shortcut and Random.
For most purposes, we recommend the Shortcut method. The Shortcut method leads to efficient
measurement of main effects. Recall that within our paradigm for alternative-specific designs, main
effects for conditional attributes are measured specific to particular levels of the primary attribute. If the
measurement of higher-level effects is the primary concern, the purely Random design method can work
well given a large sample size.
The “None” alternative is always displayed in the last concept position (or in the question’s header or
footer). Other constant alternatives you define (with exhaustive prohibitions) within the primary attribute
are positioned randomly within the task by default, but you can control their order of presentation on the
Design tab of the Specify CBC Interview Parameters dialog. Depending on how many concepts are
displayed per task and the design method, it is possible that a constant alternative may not appear in
some tasks. If you want the constant alternatives to appear in every task, you should display at least as
many concepts as levels in the primary attribute and then choose the Shortcut design strategy.
It is possible to have multiple primary attributes and multiple conditional "branches" in an
alternative-specific design. Conditional attributes can be shared (common) between two or more levels of
the primary attribute. You can choose to have those common attributes aligned in the same row within your
choice task by checking Align Common Attributes on Same Row from the Advanced Design Module tab.
Fixed alternative-specific tasks (or entire designs) may be specified.
Primary attributes must appear in the attribute list before conditional attributes.
Design Efficiency and Alternative-Specific Designs
Alternative-specific designs usually require a complex set of prohibitions. Therefore, we strongly
encourage you to test the design to ensure acceptably stable estimation of the part worths. Testing
design efficiency for alternative-specific designs is an advanced topic. The suggestions offered here are
basic and practical.
When using CBC/Web’s Test Design program, you should note that the design efficiency for attribute
levels is computed with respect to an “ideal” hypothetical orthogonal design under full-profile traditional
CBC design assumptions, where all attributes are “common.” Therefore, for all but the common
attributes, the ideal will be unobtainable, and design efficiencies will be understated.
As with testing typical CBC designs, we recommend that the design efficiencies be interpreted in a
relative sense. We also suggest you generate dummy data and estimate effects using logit. For testing
purposes, we suggest you use the paper-and-pencil data collection capability within CBC/Web. You can
create a file of simulated respondent answers to the CBC tasks. You should create a file containing as
many respondents as you plan to eventually obtain. It is not necessary that these simulated respondents
pick concepts in a rational manner when generating test data. However, make sure that each level of
each attribute (including the constant alternative) is chosen at least once (and not chosen every time), or
logit will not converge. After accumulating the data, export the .CHO file. Import the .CHO file within the
SMRT software, and compute part worths using logit. When running logit, the first thing you should check
is that it doesn’t return the message: “WARNING - THE DESIGN IS DEFICIENT.” You should pay
particular attention to the standard errors of the part worths. Lack of convergence or very large standard
errors are an indication that your design is deficient.
If your design is like those described in our transportation example where prohibitions are between all
levels of the conditional attribute(s) with a particular level (or levels) of the primary attribute, you will likely
be on safe ground. If you define additional prohibitions, you may encounter problems during analysis.
You can specify two-way interactions for alternative-specific designs, though in our example it doesn’t
make sense to specify two-way interactions between primary and conditional attributes, since the effects
measured are already “specific” to the reference levels of the primary attribute. However, for this example
design, it is possible to study the interaction between the frequency of bus pick-up and the cost per bus
trip, since there were no prohibitions between those two attributes. If a common attribute were defined
(applying to all levels of the primary attribute), it is possible to specify other interactions, such as between
the primary and common attribute.
Analysis proceeds as with any other CBC study. The only difference is that when specifying product
concepts in the market simulator, you type “N/A” for conditional attributes that aren’t applicable.
Logit (limited to 30 attributes), Latent Class (limited to a maximum of 100 attributes), ICE (limited to 10
attributes) or HB (limited to 1000 attributes) estimation is possible with alternative-specific designs.
iii.4.2 Partial Profile Designs
(The capabilities described in this section are only available if you own the CBC/Web Advanced Design
Module.)
In spite of their advantages, CBC questions can overwhelm some respondents, particularly if there are
many attributes. The base CBC system can measure up to ten attributes (shown in full profile), but in
general we suggest limiting CBC designs to six or fewer attributes.
Some researchers have proposed “partial-profile” choice experiments as a way to estimate preferences
for a large set of attributes. With partial-profile designs, each choice task includes a subset of the
attributes (typically around five). Across all tasks and respondents, a much larger list of attributes is
evaluated. The CBC/Web Advanced Design Module permits up to 30 attributes.
Partial-profile choice designs are relatively new to the industry, but have become more mainstream over
the last few years. A growing body of research suggests that they can be valuable for some situations.
We haven’t yet formed a definite opinion about their performance versus other methods like ACA for
dealing with large numbers of attributes. We hope that including partial-profile in the Advanced Design
Module encourages further research and experimentation.
With partial-profile designs, we assume respondents can evaluate the product concepts holding all
attributes not represented constant. If respondents cannot maintain this ceteris paribus mindset, the
resulting data may be incorrect. Therefore, when asking the choice question, we suggest including
language such as, “Please assume that all features not shown are alike in all other ways,” or “Please
assume that these toasters differ only on the features shown.” This may help respondents answer
appropriately, but it is still no guarantee.
The use of the “None” concept in partial-profile CBC studies is problematic. The None weight varies
significantly depending on how many attributes are displayed in the partial-profile task.
Analysis methods for partial-profile include logit (limited to 30 attributes), Latent Class (limited to 100
attributes), ICE (limited to 10 attributes) and HB (limited to 1000 attributes). The success of these
methods (particularly Latent Class, ICE and HB) hinges upon how much information is available from each
respondent on each attribute. We caution that ICE estimation may not be stable for partial-profile designs
if the information available from each respondent relative to the number of parameters to be estimated is
low. Given enough information per respondent relative to the number of parameters to be estimated,
Hierarchical Bayes estimation may provide useful individual-level utilities for use in market simulations
where the results are summarized across respondents. Still, choice data are not as rich in terms of
statistical information content as ratings-based data. The individual-level estimates for large partial-profile
designs may contain a significant amount of noise and counter-intuitive relationships (reversals). If the
goal of the research is to estimate stable utilities for individual-level profiling and predictions, the partial-profile approach alone may be insufficient.
Specifying Partial-Profile Designs
Choose Compose | Conjoint Settings | CBC Settings... to bring up the Specify CBC Interview
Parameters dialog. From that dialog, click the Advanced Design Module Tab and then check the Partial-Profile CBC Design box.
There are a number of control parameters governing partial-profile designs. For purposes of illustration,
let’s assume there are 12 total attributes in the study. The researcher wants to display 5 attributes per
task with attributes 1 and 12 to appear in every choice task.
Rotate Attributes into Concepts Starting with: In this example, attribute 1 appears in every choice
task. Therefore, attributes rotate in and out of the choice tasks starting with attribute 2. (If you want all
attributes to rotate into the tasks, always specify 1). Note that, all else equal, attributes displayed in every
task are measured with greater precision than those rotated into the tasks.
and Ending with Attribute: In this example, attribute 12 appears in every choice task. Therefore,
attributes rotate into choice tasks ending with attribute 11. (If you want all attributes to rotate into the
tasks, specify the last attribute number.)
Number of Attributes Shown in each Concept: In this example, 5 attributes are displayed in each
choice task, so we specify a 5. Recent research suggests that between 2 and 4 attributes may be optimal
to use in partial-profile studies. The length of the level text has a bearing on the appropriate number,
along with the familiarity and interest respondents have for the product category.
If the Randomize Attribute Order box is checked, the attributes appear in random order (held constant
within respondent) within the concept. If unchecked (default), the attributes appear in their natural order.
Design Strategies for Partial-Profile Designs
There are two design strategies available with partial-profile. In either case, the design selection has two
stages. The first stage involves choosing the subset of attributes displayed in the choice task. To
formulate the first task, a subset of attributes is randomly chosen. For all subsequent tasks, the two-way
joint frequency table of attribute presentation within choice tasks is examined. Attributes are chosen so
that the off-diagonal cells in this table remain approximately balanced. This strategy is identical for the
two methods.
The second stage of the design selection involves deciding which levels of the selected attributes are
displayed for each concept. There are two different strategies:
Random: Levels are randomly chosen, with replacement. Sampling with replacement permits level
overlap within tasks. The random method permits an attribute to have identical levels across all
concepts, but it does not permit two identical concepts (on all attributes) to appear within the same
task.
Shortcut: This strategy attempts to build each concept by choosing attribute levels used least
frequently in previous concepts for that respondent. If two or more levels of an attribute are tied for
the smallest number of previous occurrences, a selection is made at random. An attempt is made to
keep the concepts in any task as different from one another as possible (minimal overlap). When
there is more than one less-frequently-used level for any attribute, an attempt is made to choose one
that has been used least in the same task.
For partial-profile designs, we generally suggest the Shortcut strategy. This leads to more efficient
estimation of main effects than the Random method. In our opinion, the only reason to use the Random
strategy is if the measurement of interactions (through aggregate analysis) is the primary goal, the
attribute set is relatively small and the sample size is quite large.
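To make the Shortcut logic concrete, here is a minimal sketch in Python (our illustration, not Sawtooth
Software's actual algorithm code) of how one level might be selected for a single attribute, assuming
usage counts are tracked per respondent and per current task, and that ties are broken at random:

import random

def shortcut_level(levels, respondent_counts, task_counts):
    # Prefer the level used least often so far for this respondent.
    fewest = min(respondent_counts.get(lvl, 0) for lvl in levels)
    candidates = [lvl for lvl in levels
                  if respondent_counts.get(lvl, 0) == fewest]
    # Among ties, prefer the level used least within the current task
    # (this keeps concepts within a task as different as possible: minimal overlap).
    fewest_in_task = min(task_counts.get(lvl, 0) for lvl in candidates)
    candidates = [lvl for lvl in candidates
                  if task_counts.get(lvl, 0) == fewest_in_task]
    # Any remaining tie is broken at random.
    return random.choice(candidates)

For example, shortcut_level([1, 2, 3], {1: 2, 2: 1, 3: 1}, {2: 1}) returns 3, because levels 2 and 3 are
tied for fewest prior uses by this respondent, and level 3 has not yet appeared in the current task.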
Notes for Partial-Profile Designs
Estimation of main effects and especially interaction terms for attributes rotated into choice tasks is
significantly less precise than with manageable full-profile CBC designs.
We strongly suggest you test your design prior to fielding to ensure you can estimate reasonably stable
part worths. The Test Design module provided with CBC may be used. When using this module, you
should recognize that the design efficiency is estimated with respect to a hypothetical full-profile
orthogonal design. The efficiency of part worths for attributes rotated into the design is naturally lower
than with a full-profile design (where the attribute is always present), since comparatively less information
about that attribute is available.
Fixed partial-profile tasks (or entire designs) may be specified.
iii.4.3 Conditional Pricing
In pricing research, it is sometimes very useful if prices for product concepts are made to depend on other
attribute levels (such as brands). The first versions of CBC could handle this in a limited way by using
prohibitions. For example, one could prohibit high prices from being shown for some products, and
prohibit low prices from being shown with others. Such prohibitions, however, can lead to very inefficient
designs.
There are some work-arounds for dealing with situations where prohibitions with price seem to be
needed. For example, if package size is an attribute, prices for the 64 oz package would be much higher
than prices for the 16 oz size. One approach is to express price as a percentage above or below the
“normal price” for that package size, such as “10% above the average price.” Another is to show unit
prices, rather than prices per package. For example, with detergent we might study 16, 32, and 64 ounce
packages, but present prices in terms of “cents per ounce.” These are ways of making a single set of
price levels work for all package sizes without specifying any prohibitions, but it would be preferable to
display actual prices appropriate for each package size.
The conditional pricing option lets you create a look-up table to determine the prices to show for specific
combinations of attributes. Drawing upon the example above, assume we have three attributes as
follows to describe detergents:
BRAND
Brand A
Brand B
Brand C
PACKAGE
16 oz. package
32 oz. package
64 oz. package
PRICE
“Low price”
“Medium price”
“High price”
The text for the price levels above is not shown during the interview, and serves as a place-holder only.
The price levels displayed during the interview vary according to the package sizes as follows:
Conditional Pricing Lookup Table

                    Low Price    Medium Price    High Price
16 oz. package      $1.49        $2.09           $2.69
32 oz. package      $2.79        $3.99           $5.19
64 oz. package      $4.89        $6.99           $9.09
To construct this lookup table, we started with average prices for the three package sizes (middle
column). To determine the prices in the “Low Price” column, we decreased the average price by 30%
(and rounded to the nearest 9 cents). The “High Price” was calculated by increasing the average price by
30% (and again rounding to the nearest 9 cents).
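As an illustration of this arithmetic, the following short Python sketch (the function name and structure
are ours, not part of SSI Web) reproduces the lookup table above from the three average prices:

def round_to_9_cents(price):
    # Snap to the nearest price ending in 9 cents (e.g. 1.463 -> 1.49).
    return round((price - 0.09) / 0.10) * 0.10 + 0.09

average_prices = {"16 oz.": 2.09, "32 oz.": 3.99, "64 oz.": 6.99}
for package, medium in average_prices.items():
    low = round_to_9_cents(medium * 0.70)   # 30% below the average price
    high = round_to_9_cents(medium * 1.30)  # 30% above the average price
    print("%s  $%.2f  $%.2f  $%.2f" % (package, low, medium, high))

Running this prints $1.49/$2.09/$2.69, $2.79/$3.99/$5.19, and $4.89/$6.99/$9.09, matching the table.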
The example above illustrates how to make price dependent on a single variable (package size).
CBC/Web lets you make price conditional on up to 3 attributes.
You will need to deal with an additional level of complexity when analyzing the results of studies that
involved conditional pricing. In the example above, even though nine unique prices were shown to
respondents, CBC’s analysis programs by default still regard these as just three levels of price: Low,
Medium and High. However, we interpret the results keeping in mind that larger package sizes were
displayed with higher prices on average. We interpret the part worth of the 32 oz. package taking into
account that it was shown at an average price roughly $1.90 higher than the 16 oz. package. The
main effect contains information not only about how desirable one package size is versus another, but
also about the relative price levels at which they are offered. Therefore, it is perfectly legitimate if the
count proportion or logit effect for the 32 oz. package is lower than the 16 oz. package. This would
indicate that respondents on average did not feel the larger size was worth the extra $1.90.
The conditional pricing table must also be taken into account when using the market simulator. Unless
the conditional pricing table is explicitly used in the market simulator, you would specify level 2 for price
to simulate a 32 oz. package at $3.99, and so on. However, with the most recent releases of SMRT, when you
export the data for analysis in SMRT (the .CHO and .ATT files), the conditional pricing table is also
exported for use in the SMRT market simulator. The conditional prices are automatically read from a
special footer section in the .ATT file into the Assign Level Values area of the SMRT market simulator
(please double-check them to verify that the information was interpreted correctly, as we parse text
strings to extract the price values). Once the conditional prices are within the SMRT market simulator,
you simply specify the actual conditional price during simulations. For example, to simulate a 32 oz.
package at $3.99, instead of specifying level 2, you specify a level value of 3.99.
In our previous example, we used (nearly) constant proportional deviations from average prices to build a
proportional conditional pricing table. Some researchers choose to use constant absolute price differences
between levels of the conditional attribute(s) instead of proportional changes. In either case, the tables
have symmetric properties, which are desirable from an analysis standpoint. Proportional or symmetric
conditional pricing tables sometimes make it possible to adequately model the data using main effects only
(should the interaction effects turn out not to be significant). Otherwise, specification of interactions
may be necessary to properly fit the data, and those additional parameters are estimated with relatively
less precision than main effects.
If your conditional pricing table reflects significant deviations from symmetry, you must specify appropriate
interaction effects to correctly fit the data, even if the data don’t justify the additional parameters added to
the model. A two-way interaction can correctly fit data collected with an asymmetric conditional pricing
table where price is dependent on one other attribute. If price was dependent on more than one attribute
and the table is not proportional, a three-way interaction would be required to correctly model the effects.
Our logit, latent class and CBC/HB programs are limited to modeling only main effects and two-way
interactions. Therefore, to analyze three-way interactions, you would need to reformat the .CHO file and
collapse the two conditional attributes (using your own data processing techniques) prior to specifying the
interactions with price. Up to 100 levels per attribute are permitted in CBC, so this affords some flexibility
in this area.
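To clarify what "collapsing" two conditional attributes means, here is a small sketch of the index
arithmetic (Python; the function is our illustration, not an SSI Web utility). Two attributes with m and n
levels become one composite attribute with m x n levels, so a three-way interaction with price can then be
specified as a two-way interaction:

def collapse_levels(level_a, level_b, num_levels_b):
    # Map 1-based levels of attributes A and B (as they appear in the
    # .CHO file) to a single 1-based level of the combined attribute.
    return (level_a - 1) * num_levels_b + level_b

# Example: attribute A has 3 levels, attribute B has 4 levels; the pair
# (level 2 of A, level 3 of B) becomes level 7 of a 12-level attribute.
assert collapse_levels(2, 3, 4) == 7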
Specifying Conditional Pricing
Before you can specify conditional prices, you first must have defined your list of attributes and levels.
Instead of specifying the text for actual prices for your price levels, you should have used placeholder text
such as “low price,” “medium price,” and “high price.” When the questionnaire is administered, the
placeholder text you assigned is overwritten by the text string you specify in the conditional pricing table.
When you create a conditional price lookup table in CBC/Web, you must specify the number of attributes
that participate in the relationship (there should be at least two). If price is dependent on one other
attribute (such as package size), two attributes participate in the relationship: package size and price.
Click Compose | Conjoint Settings | Conditional Relationships… (or click the Conditional
Relationships… button from the Specify CBC Interview Parameters dialog). A dialog appears in which
you specify attributes involved in the relationship. First, select your price attribute. The level text for this
attribute is overwritten by the prices that you specify in the conditional pricing table. Then, select the
attribute(s) upon which price is dependent.
The conditional pricing table is displayed, with as many rows in the table as required to completely define
the prices for all combinations of price and the dependent attribute(s). You can cut and paste conditional
pricing information directly from a spreadsheet program into the conditional pricing grid.
There can be a maximum of one conditional pricing table per study.
iii.4.4 Conditional Display
Conditional display is often used for showing a graphic when levels from one or more attributes appear
together within a concept in a CBC task. For example, you may have two separate attributes for brand
and package size, but a single picture reflects the joint brand/package size concept. Where the text for a
particular brand and package size might appear in a CBC concept, you may wish to replace that with a
graphic.
But conditional display is not limited to just graphics; the conditional element simply adds to (or replaces,
depending on your Attribute Visibility setting) the existing level text. Other possibilities include:
Multi-media files
Text
JavaScript
Attribute Label (Optional)
If you are using Conditional Display and have chosen to show the graphic (or other conditional display
“element”) at the top or bottom of the choice task, you can optionally specify an attribute label to be
shown in the choice task (on the left-hand side of the task). This attribute label is only used (the field is
otherwise grayed out) if you have specified that attribute labels are to appear in the
choice task. You specify that attribute labels should appear within the choice task by clicking Compose |
Write Questionnaire… and editing one of your choice tasks from the List of Questions. Then, check the
Include Attribute Label in Tasks box.
Note: if in Step 3 of the Conditional Relationships Wizard, you specified that the conditional display
element is to replace the text of a certain attribute within the task, then the attribute label used for the
conditional element is the original attribute label for the replaced attribute text (that you originally typed
within the Specify Attributes and Levels dialog).
iii.4.5 Customized CBC Questions Using "Free Format"
Warning: This is an advanced feature that, if not executed properly, can result in errors in design,
data collection, and part worth estimation. It requires careful, manual checking of Free Format
HTML syntax, and additional data processing (modification of the .CHO file) at the back end prior
to analyzing the data.
Introduction
This section describes how to create custom CBC/Web surveys that go beyond the standard layout
formatting provided in SSI Web. To customize your CBC/Web survey, you specify HTML in Free Format
questions that build the custom look and “tap into” CBC/Web’s experimental design plans using system
variables.
Designing/Programming the Questionnaire
1) Create a CBC Web study the standard way by entering attributes and levels, generating a design, and
adding the CBC/Web questions within the SSI Web questionnaire.
2) To add custom HTML, create a Free Format question (one for each CBC task) and place each Free
Format question on the same page as its corresponding CBC question (preferably above it). We suggest
you start from a standard shelf-display CBC questionnaire and copy the relevant source HTML for that
layout as produced by SSI Web (this is a good starting point, but will require additional modifications).
3) In the HTML that you create for the Free Format question, insert system variables that “tap into” the
attribute text and the CBC experimental design. Use the following notation to return the text for levels in
the CBC design:
Random Tasks: %%CBCRAN[ith task]C[ith concept]A[ith attribute]%
For example %%CBCRAN3C2A1% returns the text for the 1st attribute of the 2nd
concept of the 3rd random task.
Fixed Tasks: %%CBCFIX[ith task]C[ith concept]A[ith attribute]%
For example %%CBCFIX3C2A1% returns the text for the 1st attribute of the 2nd concept
of the 3rd fixed task.
We would suggest creating just one Free Format question and testing it to ensure it is right before
copying that structure and creating the other custom CBC questions. One useful way to test is to
compare (preferably in Local Web testing mode) your custom Free Format question with the regular CBC
question on the same page. The text and graphics for the standard CBC concepts and the custom CBC
concepts should match.
Once you are confident that all is working as expected, you can then copy the structure, creating a similar
Free Format question corresponding to each CBC question in the survey. Search and replace to change
the special system variables to the correct indices. For example, you would need to change all
occurrences of %%CBCRAN1 to %%CBCRAN2, and so on (a scripted approach is sketched below).
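If you maintain the Free Format HTML outside of SSI Web, this renumbering can be scripted. The following
minimal Python sketch (our illustration; the HTML fragment shown is hypothetical) bumps the task index in
every %%CBCRAN reference while leaving the concept and attribute indices untouched:

import re

def renumber_cbc_task(html, new_task):
    # Rewrite %%CBCRAN<task>C<concept>A<attribute>% so that the task
    # index becomes new_task; concept and attribute indices are kept.
    return re.sub(r"%%CBCRAN\d+C(\d+)A(\d+)%",
                  lambda m: f"%%CBCRAN{new_task}C{m.group(1)}A{m.group(2)}%",
                  html)

task1_html = "<td>%%CBCRAN1C2A1%</td><td>%%CBCRAN1C2A2%</td>"
print(renumber_cbc_task(task1_html, 2))
# Prints: <td>%%CBCRAN2C2A1%</td><td>%%CBCRAN2C2A2%</td>

Even with a scripted replacement, you should still verify each custom task visually against its standard
CBC counterpart, as described below.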
You must carefully check your HTML syntax for each custom CBC question to ensure that you have
referred to the variable names correctly for each choice task. Warning: a mistake can invalidate a task, or
potentially your entire CBC study. You can also visually inspect each custom CBC task, running under
Local Web Server, comparing each standard CBC task with each custom built task. The attribute levels
presented in each task should match.
4) After you are through testing, you “suppress” the regular CBC/Web questions. You don’t actually
delete them from the questionnaire. We have created a fast way to “toggle” them to a suppressed mode.
To suppress the regular CBC/Web questions, add a TEXT/HTML filler question (with no contents) named
“manualcbc” to your question list. If the TEXT/HTML filler question named “manualcbc” is present, the
regular CBC/Web questions will not appear. They will, however, be recorded in the data file as if the
respondent had answered "1" to each question (the responses to the actual custom CBC questions
shown are saved in the free format variables). This is needed to produce a *.cho file in a later step.
Data Processing Procedures
When you first export the .CHO data file from SSI Web, all respondent answers to the standard CBC
choice tasks are set to “1”. In the .CHO file, there is a design section for each task. You must perform
additional data processing using your own means (SSI Web doesn’t provide automatic data preparation
for this step) to merge the answers from the custom CBC tasks with the design information in the .CHO
file, to produce a new, final .CHO file for analysis.
We strongly encourage you to test your custom CBC questionnaire thoroughly, including checking the
final data, to ensure that the attribute levels are presented correctly in the custom choice tasks, that the
corresponding design information is being correctly saved to the .CHO file, and that respondent answers
to the custom choice tasks are also being stored correctly.
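As a rough illustration of the kind of back-end processing involved, the sketch below (Python; a
simplified illustration written against the .CHO layout documented in section iii.5.2, not an SSI Web
utility) overwrites the placeholder "1" answers with the answers captured by the custom Free Format
questions:

def merge_answers(cho_lines, answers):
    # cho_lines: the .CHO file as a list of text lines.
    # answers: dict mapping (respondent_number, task_number) to the
    #          concept actually chosen in the custom question.
    out, i = [], 0
    while i < len(cho_lines):
        header = cho_lines[i].split()
        resp, n_extra, n_tasks = int(header[0]), int(header[1]), int(header[3])
        out.append(cho_lines[i]); i += 1
        if n_extra > 0:                        # "extra" variables line
            out.append(cho_lines[i]); i += 1
        for task in range(1, n_tasks + 1):
            n_concepts = int(cho_lines[i].split()[0])
            out.append(cho_lines[i]); i += 1   # task format line
            for _ in range(n_concepts):        # one design line per concept
                out.append(cho_lines[i]); i += 1
            chosen, duration = cho_lines[i].split()
            chosen = str(answers.get((resp, task), chosen))
            out.append(chosen + " " + duration); i += 1
    return out

Your own processing will likely differ; the essential point is that only the "concept chosen" field on each
choice-information line is replaced, while the design sections are preserved exactly.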
iii.5.1 Prepare CBC Data Files (*.cho & *.att)
To analyze the conjoint-related results of your CBC/Web study, you must prepare (export) the basic data
files for use within the SMRT software system. You received a copy of SMRT for analyzing CBC/Web
data with your purchase of CBC/Web. The two files that CBC/Web will automatically create for you for
use in the SMRT software are:
STUDYNAME.CHO file (ASCII data file)
STUDYNAME.ATT file (ASCII text file of attribute level labels)
Prior to preparing the .CHO and .ATT files, you should have downloaded your data from the Web server
to your study directory on your hard drive. When you click File | Export Data | Prepare CBC Data Files
(*.cho & *.att), the STUDYNAME.CHO and STUDYNAME.ATT files are written to your study directory.
Please see the SMRT documentation (hard copy) for instructions related to analyzing the data within
SMRT.
The STUDYNAME.CHO and STUDYNAME.ATT files can also be used within our Latent Class and
Hierarchical Bayes estimation software modules for CBC.
Question Types
This section lets you filter which types of choice tasks will be exported to the .CHO file (Random, Fixed, or
both). Most users will wish to export both types of choice tasks (Random and Fixed). All Sawtooth
Software analytical systems for analyzing CBC results (SMRT, Latent Class, ICE, CBC/HB, HB-Sum)
include a way to select which choice tasks will be used for utility estimation. Therefore, even if you
include both Random and Fixed choice tasks in the .CHO file, you can set filters within the analytical
software to selectively use only the Random tasks for estimation.
Respondent Numbering
You can choose from five options for assigning respondent numbering. This lets you specify how the
respondent numbers (case IDs) are to be generated. User Name is one of the passwords from the
passwords module. You should only choose this if these values were purely numeric (no letters).
Password is also one of the passwords from the passwords module. You should also only use this if the
passwords were purely numeric (no letters). Internal Interview Numbers are those that the system
assigns when respondents complete the survey over the Web. There can be gaps in the sequence if
some respondents started a survey but didn’t finish. Sequentially Assigned Numbers (default) are
integers 1 through n. Other lets you choose another numeric variable to use for respondent numbering.
This last option is useful if you are linking surveys from another source and need to merge the data sets
by a “passed-in” variable.
Include
All Respondents
Choose this option to export all respondents: both complete and incomplete. Data for questions that were
not completed are left blank.
Qualified/Completes Only
Choose this option to export only complete data records. A survey is considered qualified/complete when
respondents reach a terminating question that is marked with a qualified/complete status.
Disqualified and Incompletes Only
Choose this option to export only disqualified and incomplete data records. A survey is considered
disqualified when respondents reach a terminating question that is marked with a disqualified status. A
survey is considered incomplete if respondents have not reached a terminating question.
iii.5.2 .CHO File Layout (Text-Only delimited file)
You can export CBC data from SSI Web to a .CHO format. The .CHO format files are used in many other
Sawtooth Software programs for analyzing CBC data, including Latent Class, and Hierarchical Bayes
(HB) applications. The format is as follows:
8960 2 6 12 1
6 1
3 1
2 1 2 3 2 3
3 3 3 1 3 1
4 2 1 2 1 2
2 32
.
.
.
(lines 3-7 repeated for the 11 remaining tasks for this respondent)

Line 1: Respondent record information
  8960 = Respondent Number
  2 = Number of "Extra" Variables
  6 = Number of Attributes
  12 = Number of Choice Tasks
  1 = None Option (0=N, 1=Y)

Line 2: "Extra" variables (may include as many numbers as specified on Line 1)
  6 = Interview Duration
  1 = Segmentation Variable

Lines 3-7 are repeated for each choice task.

Line 3: Task 1 format
  3 = Number of concepts in first task
  1 = Depth of preference in first task (always 1)

Line 4: Levels displayed in first concept
  2 1 2 3 2 3 = Level 2 of Attribute 1, Level 1 of Attribute 2, Level 2 of Attribute 3,
                Level 3 of Attribute 4, Level 2 of Attribute 5, Level 3 of Attribute 6

Note: Missing levels (partial-profile and alternative-specific designs) are indicated with "0"s.

Line 5: Levels displayed in second concept
  3 3 3 1 3 1 = Level 3 of Attribute 1, Level 3 of Attribute 2, Level 3 of Attribute 3,
                Level 1 of Attribute 4, Level 3 of Attribute 5, Level 1 of Attribute 6

Line 6: Levels displayed in third concept
  4 2 1 2 1 2 = Level 4 of Attribute 1, Level 2 of Attribute 2, Level 1 of Attribute 3,
                Level 2 of Attribute 4, Level 1 of Attribute 5, Level 2 of Attribute 6

Line 7: Choice information
  2 = Concept Chosen*
  32 = Task duration

* In this example, the none option, if chosen, would be represented as concept 4.
** Missing answers are indicated by Concept Chosen set to "0."
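Because the .CHO file is plain text with a fixed record structure, it is straightforward to read
programmatically. The following minimal Python sketch (our illustration, not a Sawtooth Software utility)
parses a .CHO file into one dictionary per respondent, following the layout above. It reads tokens rather
than lines, so it is tolerant of line-break differences:

def read_cho_records(path):
    with open(path) as f:
        tokens = f.read().split()
    pos = 0
    def take(n):
        nonlocal pos
        values = [int(t) for t in tokens[pos:pos + n]]
        pos += n
        return values
    while pos < len(tokens):
        resp, n_extra, n_attr, n_tasks, has_none = take(5)
        extras = take(n_extra)
        tasks = []
        for _ in range(n_tasks):
            n_concepts, depth = take(2)        # depth of preference (always 1)
            concepts = [take(n_attr) for _ in range(n_concepts)]
            chosen, duration = take(2)
            tasks.append({"concepts": concepts, "chosen": chosen,
                          "duration": duration})
        yield {"respondent": resp, "extras": extras,
               "has_none": bool(has_none), "tasks": tasks}

A quick sanity check, for example, is to confirm that every level of every attribute was chosen at least
once across respondents before running logit, as recommended earlier in this documentation.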
iii.6.1 CBC/Web Paper-and-Pencil Studies
SSI Web lets you run your CBC studies via Web, over PCs (CAPI installation), or as paper-and-pencil
questionnaires. The process of running paper-based CBC/Web studies is very similar to computer-based
CBC/Web studies. You should set up your attributes and levels, and compose your choice tasks, in the
same manner as if you were going to run a computerized study. However, you should probably choose plain
black (text) on white (background) for questionnaire colors, unless you plan to print the questionnaires in
color.
Even though you will not have a typical “randomized” design with paper-based questionnaires (with, say,
999 questionnaire versions), you should still use CBC/Web’s random choice tasks. Rather than having
so many possible versions of the questionnaire (perhaps a unique version for each respondent), you will
create just a few versions (design blocks) of the questionnaire and assign respondents (randomly) to take
one of the different questionnaire versions.
We generally recommend that you include enough versions of the questionnaire so that the number of
random choice tasks times the number of questionnaire versions is greater than or equal to 80 (assuming
no prohibitions, and typical attribute level specifications). However, we urge you to spend extra time
looking at the design efficiency report that was created when you generated your paper-and-pencil design
plan. With very few questionnaire versions, the design seed can have an effect on design efficiency.
Also, you may want to take further steps to generate “dummy” response data (holding the number of
respondents constant) under different design plans (e.g. three versions vs. four versions, etc.). Please
see more information on testing your design in this documentation.
Saving the Questionnaire
When you click the Field | Create Paper & Pencil Interviews… button, each version of your
questionnaire is saved to a separate file within the Paper-and-Pencil folder within your study directory.
The files are named STUDYNAME_V#.htm, where STUDYNAME is the study name for your project (four
characters or fewer), and # is the version number of the questionnaire (always beginning with version
#1). These files are in HTML format and may be opened with almost any recent word processing software.
You should do any additional formatting you like to make the questionnaire appear as you wish prior to
printing.
Note: It is helpful to print the version# and task# with each task in a paper-and-pencil study. This gives
an additional reference while performing data entry, helping ensure that you do not introduce errors when
specifying the version numbers seen by respondents. An easy way to automatically label your choice
tasks is to add the system variable %%CBC_Version% in the header or footer of the choice task. When
you do this, each task includes a label such as D - 1, T - CBCRAN1, meaning “Design #1, Task
CBCRAN1.”
Fielding the Questionnaire
Make sure when fielding the questionnaire that you are able to match the different questionnaire versions
correctly with respondent answers. You should also try to randomly distribute the questionnaire versions
among respondents, so that roughly equal numbers of respondents complete each version.
Data Entry
Use any spreadsheet program, data processing software, or word processing software that you like for
recording the data, but you must save it as a text-only file. No other questions from the questionnaire can
be specified in the file; these must be merged separately into the market simulator in SMRT from a
different file using the File | Merge Variables wizard.
When you have prepared the data, you are ready to Accumulate Paper & Pencil Data. See the following
sections for details regarding the procedure for accumulating paper-and-pencil data and the file layout
for respondent answers.
iii.6.2 Accumulate Paper & Pencil Data Step 1 of 4
When you click Field | Accumulate Paper & Pencil Data, CBC provides a merge wizard to help you
merge the respondent answers in the text file you create into the CBC system. There are various steps to
the process.
In the first step, you specify the file that contains the respondent answers. This must be a text-only file
that includes just respondent number, version number, and responses to the CBC tasks.
Each respondent can occupy a single line or multiple lines of text. The file should not include a row of
labels (e.g., as the first line). The respondent number and questionnaire version must be included in the
record. Then, responses to the choice tasks follow in the same order as listed in the questionnaire. Missing values
(if any) may be set to 0 (zero) or any other character (number or letter) that is unique from actual
responses (you’ll specify the missing character you’re using within the Accumulate Paper & Pencil Data
wizard).
Assume a questionnaire with the following 10 choice tasks:
Task1 (Choice Task, this task through Task10)
Task2
Task3
Task4
Task5
Task6
Task7
Task8
Task9
Task10
An example of a space-delimited text file of respondent answers for that questionnaire (for the first two
respondents) is as follows:
1001 1 3 2 4 2 3 3 2 1 2 4
1002 2 3 2 3 1 1 3 2 0 0 0
In the above example, respondent #1001 received questionnaire version 1 (field 2). Responses for 10
choice tasks (fields 3-12) follow in the same order as specified in the questionnaire list. Respondent
#1002 received questionnaire version 2. This respondent did not provide answers to choice tasks 8, 9,
10. Missing values in this example are set to 0 (zero).
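If you are preparing this file programmatically, a small script can reduce transcription errors. Below is a
minimal Python sketch (our illustration; the file name and data structure are assumptions) that writes the
space-delimited layout shown above, with missing answers written as 0:

def write_answers_file(path, respondents, missing="0"):
    # respondents: list of (respondent_number, version, answers), where
    # each answers entry is a chosen concept number or None if missing.
    with open(path, "w") as f:
        for resp, version, answers in respondents:
            fields = [str(resp), str(version)]
            fields += [missing if a is None else str(a) for a in answers]
            f.write(" ".join(fields) + "\n")

# Reproduces the two example records above:
write_answers_file("answers.txt", [
    (1001, 1, [3, 2, 4, 2, 3, 3, 2, 1, 2, 4]),
    (1002, 2, [3, 2, 3, 1, 1, 3, 2, None, None, None]),
])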
iii.6.3 Accumulate Paper & Pencil Data Step 2 of 4
Delimiter: Specify the character you used as the delimiter in your file. The delimiter is the character used
to separate the different fields. For example, the following data records are space-delimited:
1001 1 2 4 3 2 4 2 3 3 2 1 2 4 1
1002 2 1 1 3 2 3 1 1 3 2 0 0 0 0
Here are those same records, this time comma-delimited:
1001,1,2,4,3,2,4,2,3,3,2,1,2,4,1
1002,2,1,1,3,2,3,1,1,3,2,0,0,0,0
Missing Value: If your file of respondent answers included missing data, you should specify the character
(number or letter) that indicates a missing value. Note: consecutive delimiters (except in the case of
spaces) are also treated as missing values.
iii.6.4 Accumulate Paper & Pencil Data Step 3 of 4
This dialog lets you specify information about the layout of your file.
Respondent Record Information
One line per respondent record
If each respondent’s data is described by a line (row) of data followed by a hard return, check this
radio button. An example of data formatted as one line per record is as follows:
1001 1 2 4 3 2 4 2 3 3 2 1 2 4 1
1002 2 1 1 3 2 3 1 1 3 2 0 0 0 0
Respondent #1001 and #1002 each occupy a single line.
Fields per respondent record
If a respondent’s data are spread across multiple lines (rows), you must let CBC/Web know how
many fields to expect per respondent record. A field is a value or letter separated from other fields by
the delimiter.
Respondent number location
Specify the field containing the respondent number.
Version number
Specify the field containing the version number of the questionnaire. It is critical that this be correct,
and the version number cannot be "missing." The version number is used to match the respondent's
answers to the design information displayed in that version's choice tasks.
Text Fields
For consistency with CBC for Windows software (SMRT), we have included this field on this dialog.
However, it is not supported at this time in SSI Web (as you cannot include other information such as
text variables in the file of respondent answers for paper-and-pencil CBC questionnaires).
iii.6.5 Accumulate Paper & Pencil Data Step 4 of 4
Using the settings provided in the previous three steps, CBC/Web reads the file of respondent answers
and displays the fields of the first record encountered. This lets you visually verify that your settings are
correct and that CBC/Web is correctly reading the data. To view additional records, click the > button. To
return to the first record, click the |<< button.
Click Finish to merge the file of respondent answers with the design and create a data file. A
studynamedat.dat file is created. If a studynamedat.dat file already exists, you may choose whether to
overwrite or append (add new records) to that file.