Ch. 3 Sociology Research Methods


Primary, Secondary, Quantitative and Qualitative Data
Primary Data: Information collected personally by a researcher.

Strengths:
  • The researcher has control over how the data is collected, its purpose, and who it is collected for.
  • The researcher has control over the reliability, validity, and representativeness of the data.


Weaknesses:
  • Time-consuming and expensive to conduct.
  • There may be difficulty gaining access to the target group: access may be dangerous, or the group may no longer be available (e.g., members may have died).

Secondary Data: Data already existing (official statistics, reports, personal letters, and diaries – at archives, libraries).

Strengths:
  • Saves time, money, and effort when investigating topics such as crime, marriage, divorce, and suicide.
  • Official statistics are seen as being highly reliable.
  • Secondary data is helpful for historical and comparative purposes.


Weaknesses:
  • Official definitions of the concepts being studied may differ from sociological definitions.
  • Official statistics may not reflect all incidents, only those reported to the authorities.

Official Statistics – Government-generated secondary sources of data on areas such as crime, marriage, and employment.

Quantitative Data – Data that is numerically expressed.

Strengths:
  • Allows for easy comparison of results between categories & time.
  • Allows conclusions to be easily drawn.
  • Reliable as it is easier to replicate the study.
  • Personal biases are less likely to affect the findings and the researcher can stay more objective & value-free.


Weaknesses:
  • Collects only limited information, restricting respondents to brief answers. Hence, we do not learn the reasons and meanings behind behaviours.
  • It is difficult to collect people’s natural responses, as quantitative data is usually collected in artificial settings; results are therefore likely to be low in validity due to demand characteristics or social desirability bias (SDB).
  • Issues can only be investigated if the variables being measured are defined in advance.

Qualitative Data – Non-numerical data that expresses the quality of a relationship.

Strengths:
  • Subjects are allowed to talk and act freely, which allows researchers to collect rich data and uncover the reasons behind behaviours.
  • Researchers are able to build a rapport with respondents, so respondents are likely to give answers which are highly valid.


Weaknesses:
  • Data is difficult to generalise, as qualitative research is usually based on small groups.
  • It is difficult to compare qualitative research across time and location, as no two groups will be the same.
  • Data may lack objectivity, as it is affected by the researcher’s views.
  • As findings are in-depth, they are difficult to replicate, so they are less reliable.

Two studies that aimed to explore the quality of people’s behaviour:
Sudhir Venkatesh studied a Chicago gang from the viewpoint of the members.
Goffman covertly studied the patients in a mental institution to understand how nurses labelled patients.

Quantitative and Qualitative Research Methods
1. Questionnaires: Consists of a list of written questions.

Strengths:
  • Highly reliable data is collected, as every respondent answers the same set of questions.
  • Respondents are anonymous, so responses are likely to be valid (less biased answers).


Weaknesses:
  • A low response rate can result in the sample being unrepresentative.
  • Participants may skip questions or select multiple answers.
  • There is no way to know whether respondents understood the questions properly; ambiguous or leading questions can distort answers.

Questionnaire methods: Postal questionnaire, online questionnaire.

Closed-ended questions have sets of pre-coded responses.

  • Quantitative data is collected.
  • Results are quick and easy to interpret and compare.
  • This research method is preferred by positivists.
  • Highly reliable data is collected.
  • Lack of detailed information (reasons for behaviour/response) can limit the validity of responses.
  • Types of questions: Likert-type scales, multiple-choice, checklists, rank order, and rating scales.
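The pre-coding of closed questions can be illustrated with a short sketch; the Likert scale mapping and the responses below are hypothetical:

```python
# Hypothetical pre-coded Likert scale: each response option maps to a number.
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

# Illustrative responses to one closed question.
responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]

# Pre-coding turns the answers into numbers, giving quantitative data.
codes = [LIKERT[r] for r in responses]
mean_score = sum(codes) / len(codes)
print(mean_score)  # a single figure that is easy to compare across groups
```

Because every respondent chooses from the same fixed options, the coded results can be compared directly between categories and over time, which is why positivists favour this style of question.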

Open-ended questions allow participants to respond in a freestyle manner.

  • Participants can include their opinions, feelings, and their complete knowledge about the topic.
  • Qualitative data is collected.
  • This research method is preferred by interpretivists.
  • Highly valid data is collected.
  • However, data is difficult to compare and replicate, so it is less reliable.

➔ Some of the issues above and below can be identified and reduced using a pilot study.
➔ The following are (unintentionally) biased questions:

  • Ambiguous questions.
  • Leading questions.
  • Questions with unbalanced options.

Callender and Jackson: Investigated whether the fear of debt deters students from higher education. They had a low response rate, as only half of the distributed questionnaires were returned.
Eileen Barker: Conducted a 41-page questionnaire to study the Moonies.

2. Social Surveys: Obtains information in a standardised form from a large group of people.

Strengths:
  • Objective, quantitative data is collected.
  • There is minimal involvement of the researcher.
  • The data is reliable and representative.

Weaknesses:
  • Validity is low, as social desirability bias and demand characteristics may affect results.
  • There is no way to know whether respondents understood the questions.

➔ Examples: the UK National Census and the British Social Attitudes Survey.

3. Content Analysis: Research method which systematically analyses media texts and communication. Collects both quantitative and qualitative data.

Quantitative data is collected by counting the frequency of particular behaviours, themes, or words in media texts.
Strength: Content analysis helps identify underlying patterns in society.
Weaknesses: Coding and interpretation are subjective, so results can be difficult to replicate, lowering reliability.
Hogenraad: used content analysis to identify the recurring themes & words which lead to conflicts.
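The counting step of quantitative content analysis can be sketched in a few lines; the coding categories, keywords, and headlines below are invented for illustration:

```python
from collections import Counter

# Hypothetical coding scheme: categories a researcher might count
# in media texts (the categories and keywords are illustrative only).
CATEGORIES = {"conflict": ["war", "fight", "attack"],
              "cooperation": ["peace", "agreement", "aid"]}

def code_text(text, categories):
    """Count how often each category's keywords appear in one text."""
    words = text.lower().split()
    counts = Counter()
    for category, keywords in categories.items():
        counts[category] = sum(words.count(k) for k in keywords)
    return counts

# Invented headlines standing in for a sample of media texts.
headlines = ["War talks fail as attack resumes",
             "Peace agreement brings aid to region"]
totals = Counter()
for h in headlines:
    totals += code_text(h, CATEGORIES)

print(totals)  # keyword frequencies per category across all texts
```

The frequencies are the quantitative output; deciding which categories and keywords to code for is the subjective step that the reliability criticism above points at.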

4. Experiments: Involves changing different variables to test their effect on behaviour. The IV is manipulated to see its effect on the DV. They try to find causal relationships.
Correlations are statistical relationships that suggest the probability of a genuine relationship between variables.
Causation is when one event directly produces another: whenever the first occurs, the second follows. Establishing causal relationships helps researchers predict future behaviour.
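As a worked illustration of how a correlation between two variables is measured (the study-hours and test-score figures below are made up), the Pearson coefficient can be computed from its definition:

```python
import statistics

# Illustrative data: hours of study vs. test scores for five students.
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 68, 74]

def pearson(xs, ys):
    """Pearson correlation: covariance divided by the product of
    the spreads of the two variables (+1 perfect positive, -1 negative)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(hours, scores)
print(round(r, 2))  # close to +1: a strong positive correlation
```

Even a coefficient this strong only suggests a relationship; it cannot by itself show that studying causes the higher scores, which is why experiments manipulate the IV directly.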

  1. Laboratory Experiment — an experiment conducted in artificial settings where conditions are controlled by the researcher. There’s usually a control group and an experimental group.
    • Easy to replicate as the situations are controlled. Standardisation results in a high level of reliability.
    • Helps establish causal relationships and social facts.

    • Rarely used in sociology, because of ethical issues and impracticality.
    • The Hawthorne effect can lead participants to show demand characteristics or social desirability bias.
    Milgram: investigated destructive obedience.
  2. Field Experiment — an experiment conducted in natural settings, but the conditions of the study are controlled.
    • Behaviour is more likely to be ecologically valid.
    • There is less control over extraneous variables.
    Rosenhan: studied how staff in mental hospitals labelled people.

Longitudinal Survey: A form of comparative analysis that tracks changes in a sample of participants over time.
Strength: The vast amount of data collected allows comparison to be made over time. Data is high in validity.
Weakness: Not possible to replicate.

Cross-sectional Survey: Involves identifying groups that share broad similarities and measuring differences in a single variable.
Durkheim’s study on suicide used cross-sectional surveys to build a comparative analysis.

5. Official Statistics: Includes government-generated data on crime, marriage, employment. Patterns of behaviour are understood.

Strengths:
  • Official statistics may be the only data source available on a particular topic, e.g., crime.
  • They are highly representative, as data is collected via national surveys.
  • Data that would be expensive and time-consuming for researchers to collect is readily available.

Weaknesses:
  • Validity is an issue because governments & coroners choose what to include & exclude; not all crimes are reported.
  • Official statistics do not reveal the reasons for people’s behaviour.

Comparative Analysis — Compares different situations to identify the similarities and differences between them.
Durkheim: Looked at suicide rates amongst the Protestants and Catholics.
Atkinson: Challenged the objectivity of crime statistics by looking at preconceptions of coroners.

6. Interviews

  1. Structured Interviews — The researcher asks the respondent a set of standard questions in a fixed order.
    • Consistent comparable results are gained as the same questions are asked in the same order.
    • Respondents’ misunderstandings can be cleared.
    • The response rate is likely to be 100%.
    • Lack of anonymity can cause demand characteristics due to the researcher effect, halo-effect, or social desirability bias.
    Goldthorpe: Conducted a structured interview where he studied the attitude of high-wage earners in 3 Luton-based companies.
  2. Unstructured Interviews — a free-form interview where the respondents talk freely about a broad topic.
    • Highly valid data is collected.
    • A strong rapport needs to be established to encourage respondents to talk.
    • The researcher has little control over the direction of the conversation.
    • It is difficult to generalise, interpret, and analyse the data.
    Becker: Studied Chicago schoolteachers' stereotypes.
  3. Semi-Structured Interviews — An interview involving both open and closed questions.
    • The respondent is allowed the freedom to talk about what they want, and this helps gain valid data.
    • Researcher has to build a rapport and think of relevant questions quickly.
    • Large amounts of data are collected, and they need to be studied, which can be time-consuming.
    Myhill and Jones: Conducted a semi-structured interview to understand students’ perspectives on teachers’ treatment of students according to gender.
  4. Group Interviews — Respondents discuss a topic as a group.
    • The researcher can control the direction of the conversation.
    • Data is collected quickly and efficiently.
    • Encourages respondents to speak up.
    • The researcher must control the behaviour of the group to allow people to speak freely.
    Paul Willis: Observed how ‘lads’ developed anti-school culture.

7. Observation
Overt — The subjects being studied are aware that they are being studied.
Covert — The subjects are unaware that they are being studied.

Non-Participant Observation: The researcher observes the participants' behaviour from a distance without participating in that behaviour.

Strengths:
  • It allows people who do not want to participate in research (e.g., criminals) to be studied.
  • The researcher can objectively observe the subjects’ natural behaviour, although demand characteristics may still occur if subjects notice the observer.

Weaknesses:
  • Observation from a distance can produce invalid data, as researchers are unable to ask questions to gather further information.
  • It can be difficult to gain access to some groups.
  • There are ethical issues when people are observed without their consent.

Flanders studied interaction in the classroom.

Participant Observation: The researcher participates in the behaviour that they are studying. It allows researchers to demonstrate Verstehen, which is the ability to take the viewpoint of the subject.
There are 2 types of participant observations:

  1. Overt Participant Observation
    Involves the researcher openly participating in the behaviour of people who are aware that they are being studied. Membership is often required to get access to some groups.
    • The researcher is free to ask participants questions. This allows in-depth reasons for the behaviour to be collected, hence the data is high in validity.
    • If a group refuses to give permission, then the research cannot be carried out.
    • An awareness of the researcher’s presence may make participants behave unnaturally – demand characteristics & SDB. This leads to data with low validity.
    • It is impossible to replicate, and as the researcher cannot document everything as it happens, bias from reconstructing events later is likely to occur.
    Eileen Barker studied the Moonies, a religious group, for 6 years to build rapport.
  2. Covert Participant Observation
    The researcher studies participants undercover so that the subjects are unaware that they are being studied.
    • Participants would show their natural unbiased behaviour & actions; hence the researcher is able to collect in-depth data that’s high in validity.
    • Highly valid data which explains the meanings behind behaviour is gathered, hence it’s insightful in sociology.
    • It’s difficult to gain access to these groups.
    • If a researcher lacks ‘insider knowledge’ they would risk exposure. They should also be able to blend in easily in terms of characteristics & behaviour.
    • In a covert observation, there are ethical issues of invading privacy.
    • There is danger posed to the researcher, especially if the subject group uncovers that the researcher was working undercover.
    • It can be difficult to stop participating. Further, there’s the ethical issue of deserting people who came to trust you.
    James Patrick learnt the ways of a Glasgow gang.
    Venkatesh studied a black American gang from Chicago. He required sponsorship to get into the gang.
    Goffman covertly studied the patients in a mental institution to understand how nurses labelled patients.

8. Case Studies (research technique): In-depth qualitative study of a particular group or person.

Strengths:
  • Provides great in-depth detail on how people view the world, so validity is high.
  • Uncovers the meanings & reasons behind behaviour.

Weaknesses:
  • Consumes a lot of time, effort, and money.
  • Results are difficult to generalise, so the data is not representative.

Westwood conducted a 12-month study on female workers in a stitching factory.

9. Semiology: Involves the analysis of language and cultural signs from media texts to uncover hidden meanings within texts.

10. Documents
Historical: Newspapers, reports, and books. They are free to access but can be outdated.
Personal: Letters and diaries. They are free to access but, can be outdated and biased (validity is in question).

Research Design

  1. The research problem — Decide the research topic.
  2. Research hypothesis — A testable statement that predicts the outcome of the research.
  3. Data Collection
    Sampling frame (drawn from the target population) — A list of all the people in the survey population from which the sample is selected.
    Sample — A group of people, representing a target population, that are taking part in the research.

    Sampling techniques:
    • Random Sampling – People are randomly selected.
    • Systematic Sampling – People are selected at regular intervals from the sampling frame.
    • Stratified Random Sampling – A random sample is chosen from a subdivided group of people.
    • Stratified Quota Sampling – The sampling frame is divided into categories, and people are selected from each until the quota size is reached.
    • Snowball Sampling – The researcher contacts one member of a group and gains their trust; that member then connects the researcher to others.

    • A pilot study is conducted to assess the practical and financial risks and the feasibility of the research.
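The first three sampling techniques can be sketched in a few lines; the sampling frame, its size, and the gender strata below are hypothetical:

```python
import random

random.seed(0)  # fixed seed so the sketch gives repeatable selections

# Hypothetical sampling frame: 100 people, each tagged with a gender
# stratum (the frame and strata are illustrative only).
frame = [{"id": i, "gender": "F" if i % 2 else "M"} for i in range(100)]

# Random sampling: every member has an equal chance of selection.
random_sample = random.sample(frame, 10)

# Systematic sampling: select every k-th member of the frame.
k = len(frame) // 10
systematic_sample = frame[::k]

# Stratified random sampling: divide the frame into strata, then
# draw a random sample from each stratum in proportion (here 50/50).
strata = {"F": [p for p in frame if p["gender"] == "F"],
          "M": [p for p in frame if p["gender"] == "M"]}
stratified_sample = [p for group in strata.values()
                     for p in random.sample(group, 5)]

print(len(random_sample), len(systematic_sample), len(stratified_sample))
```

Quota and snowball sampling are not sketched here because they depend on fieldwork decisions (filling quotas face-to-face, following chains of personal contacts) rather than on a mechanical selection rule.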
  4. Data analysis
    Private/internal analysis – Uses validity/reliability concepts to ensure data is logical and consistent.
  5. Presenting results
    Analyse related research to discover trends. Reflect on the research and its hypothesis. Results can be presented in the following forms: findings, conclusions, limitations, suggestions, and improvements.