1. Please describe symmetric and nonsymmetric scales. Indicate their differences and provide examples.
2. What are secondary data, and how do they differ from primary
data?
3. What is cluster sampling and when would you use it?
4. What is “protocol analysis”?
5. What are the advantages of person-administered surveys over computer-administered ones?
Question 1)
A symmetric scale is sometimes called "balanced," because it has equal numbers of positive and negative positions. The neutral point is not treated as a zero or an origin; instead, it is a midpoint along the continuum. An example of a symmetric synthetic scale is the Likert scale, a format commonly used by marketing researchers: respondents are asked to indicate their degree of agreement or disagreement with each of a series of statements on a symmetric agree-disagree scale.
A nonsymmetric or unbalanced scale is also known as a one-way labeled scale, in which the researcher measures some construct attribute using labels that restrict the measure to the "positive" side. Two examples of nonsymmetric scales are the unanchored n-point scale and the anchored n-point scale. The anchored n-point scale uses two anchors to mark the high and low ends. The anchors are important because they give the respondent the context of the scale; that is, they indicate how to translate its range into a frame of reference to which the respondent can relate.
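As a minimal illustration of the difference, the following Python sketch contrasts a symmetric five-point Likert scale with a nonsymmetric, anchored five-point scale (the label wordings are typical textbook-style examples, not taken from any particular survey):

import pprint

# Symmetric (balanced) Likert scale: equal positive and negative positions
# around a neutral midpoint (the midpoint is not a zero or origin).
likert_scale = [
    "Strongly disagree",           # negative end
    "Disagree",
    "Neither agree nor disagree",  # midpoint of the continuum
    "Agree",
    "Strongly agree",              # positive end
]

# Nonsymmetric (one-way labeled) anchored 5-point scale: every label sits on
# the "positive" side of the construct, with anchors marking low and high ends.
anchored_scale = [
    "Not at all important",   # low anchor
    "Slightly important",
    "Moderately important",
    "Very important",
    "Extremely important",    # high anchor
]

pprint.pprint({"symmetric": likert_scale, "nonsymmetric": anchored_scale})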
Question 2)
Primary data are fresh (new) information collected for the first time by a researcher for a particular purpose. They are unique, first-hand information that has not been published before. They are collected systematically from the place or source of origin by the researcher or by agents appointed by the researcher, and are obtained as a result of research efforts undertaken with a specific objective in mind. Primary data help to solve particular problems in the domain or sphere of interest. Once they have been used and published, their original character is lost, and they become secondary data for subsequent users.
Secondary data, on the other hand, are information already collected by someone else and later used by a researcher to answer the questions at hand. Hence, they are also called second-hand data. They are ready-made information, often quantitative, obtained mostly from published sources such as company reports, statistics published by governments, and so on.
Question 3)
Cluster sampling is a method that takes advantage of naturally occurring groups, or clusters, in the population, each of which is assumed to represent the total population with respect to the characteristic being measured.
The most common cluster used in research is the geographical cluster. For example, suppose a researcher wants to survey the academic performance of high school students in Austria. The entire population (all high school students in Austria) can be divided into clusters, such as cities. The researcher then selects a number of clusters through simple or systematic random sampling, depending on the research design. From the selected clusters (randomly chosen cities), the researcher can either include all high school students as subjects or select a number of subjects from each cluster through simple or systematic random sampling.
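A minimal Python sketch of this two-stage procedure follows; the city names, cluster counts, and sample sizes are made-up placeholders used purely to illustrate the mechanics:

import random

# Hypothetical sampling frame: clusters (cities) mapping to their students.
population = {
    "Vienna":    [f"Vienna_student_{i}" for i in range(500)],
    "Graz":      [f"Graz_student_{i}" for i in range(300)],
    "Linz":      [f"Linz_student_{i}" for i in range(250)],
    "Salzburg":  [f"Salzburg_student_{i}" for i in range(200)],
    "Innsbruck": [f"Innsbruck_student_{i}" for i in range(150)],
}

random.seed(42)  # fixed seed so the example is reproducible

# Stage 1: randomly select clusters (cities) by simple random sampling.
selected_cities = random.sample(list(population), k=2)

# Stage 2: within each selected cluster, either include every student
# (one-stage cluster sampling) or draw a simple random subsample
# (two-stage cluster sampling), as done here.
sample = []
for city in selected_cities:
    students = population[city]
    sample.extend(random.sample(students, k=50))

print("Selected clusters:", selected_cities)
print("Total sample size:", len(sample))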
Question 4)
In research, a protocol is the record of a subject's verbalized thoughts while performing a task. Protocol analysis is the process of asking subjects to "think aloud" as they complete a task (for example, making a purchase decision), recording those verbalizations, and then analyzing them to understand the underlying cognitive processes.
Protocol analysis is widely used in usability testing, educational psychology, and interview/survey design. More recently, it has been used as a complementary method to discount usability engineering, allowing rapid design and testing of user interfaces.
The central assumption of protocol analysis is that it is possible to instruct subjects to verbalize their thoughts in a manner that does not alter the sequence of thoughts mediating the completion of a task, so that the verbalizations can be accepted as valid data on thinking.
Protocol analysis allows researchers to gather data about cognitive processes "on the spot"; it does not rely on the subject's recollection of a process. When combined with video recording, protocol analysis can be used to evaluate user interfaces rapidly and to pinpoint mismatches between the user's preferred workflow and the on-screen metaphor of the interface.
Question 5)
Person-administered surveys have several advantages over computer-administered ones. First, the interviewer can provide feedback, clarifying confusing questions and reassuring respondents about the legitimacy and purpose of the survey. Second, personal contact helps build rapport, which can motivate respondents to complete the questionnaire and answer candidly. Third, the interviewer can exercise quality control, ensuring that the intended person is responding, that instructions are followed, and that answers are complete. Finally, a person can adapt to the situation, probing for fuller answers, adjusting the pace, and handling unexpected circumstances in a way that a fixed computer script cannot.