Pieces of Academic Research Journal Articles: Methods, Results, Discussions, Conclusions


Hello and welcome to yet another video breaking down the parts of academic journal articles. If you have not already, watch my previous videos on thesis statements, research questions, hypotheses, abstracts, introductions, and literature reviews. This one covers a lot of ground. I will first contextualize scientific research methods before discussing the purpose of the methods section. Then I will break down the results, discussion, and conclusion sections. Finally, as usual, I will run through an example—same as last time—Dolinski et al. on replicating Stanley Milgram’s infamous experiments on obedience to authority.

Research methods are essential to understand. You will take entire classes in your undergraduate and graduate careers dedicated to qualitative and quantitative research methods. I took a general sociological research methods course as an undergrad; then during my PhD, I took a statistics and quantitative methods course and another advanced qualitative methods course. While it is way beyond the scope of this video to go in depth into the philosophy of science or the epistemological debates undergirding research methods or any specific method, we can start with one overarching question that all researchers are concerned with: “How do I collect good data?”

In order to explain reality, you have to observe reality. Your observations become data that describe reality. If your observations or data are bad, then your explanation of reality will be bad. For example, have you ever looked at American political polls put out by the major parties’ campaigns? They are horrendous examples of data collection. All parties and campaigns do this to some extent, but Donald Trump’s campaigns have outdone everyone else for the bad survey award. Check out the “Official 2020 Trump vs. Democrat Poll” on his presidential campaign website.
The first question is, “Who would you rather see fix our Nation’s shattered immigration policies?” Your choices are “President Trump” or “A MS-13 Loving Democrat.”

The person who wrote this question should be ashamed of themselves. Your choice is not between Trump and a Democrat. It is between Trump and a Democrat who loves an international criminal gang. This is a biased question because, while the Trump answer is neutral, the Democrat answer creates a clearly negative association. Very few people would like anyone who “loves” MS-13 to become President, regardless of their political party.

Other questions ask you whether you would trust President Trump or a “Sleazy Democrat” to always put America first, whether you believe that President Trump or a “Lyin’ Democrat” will keep their promises, whether you believe that President Trump or a “Low IQ Democrat” is better for America, and finally whether you will vote for President Trump or a “Radical Socialist Democrat” in 2020. Again, no one wants a sleazy, lying, or low-IQ candidate of any party to become President! But because those negative associations are attached only to the Democrat, most people will choose Donald Trump, even if they might prefer a non-lyin’ Democrat over him, thus skewing the results. And, just so everyone is clear, these questions are aimed at the Democratic candidate, Joe Biden, whom Trump often calls a radical socialist. Biden is a moderate Democrat by any measure; he is neither radical nor a socialist. The questions, then, not only create a negative association but also include false claims, which make the data and the explanation of reality even worse.

What is the point of this bad survey? It is written to collect purposefully skewed data so that the Trump campaign can use “a scientific poll” to argue that the American public overwhelmingly supports President Trump. But the conclusion—the explanation of reality—is wrong because the data collection methods were bad. This whole thing—the purposefully biased survey and using invalid findings to give the public an inaccurate picture of reality—is unethical.
This poll also suffers from selection bias, which means that the results will be skewed because the majority of the people visiting Trump’s website are already Trump supporters.
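Selection bias like this can be sketched in a toy simulation. All the numbers below are invented for illustration, not real polling data: we just assume the electorate is evenly split but that a candidate’s own website visitors overwhelmingly support him.

```python
import random

random.seed(0)

# Invented numbers: suppose the true electorate is split 50/50, but
# visitors to the candidate's own website are ~90% supporters.
TRUE_SUPPORT = 0.50
SITE_VISITOR_SUPPORT = 0.90

def poll(support_rate, n=1000):
    """Simulate n yes/no responses and return the observed support share."""
    return sum(random.random() < support_rate for _ in range(n)) / n

representative = poll(TRUE_SUPPORT)       # a representative sample
self_selected = poll(SITE_VISITOR_SUPPORT)  # the self-selected website poll

print(f"Representative sample: {representative:.0%} support")
print(f"Self-selected website poll: {self_selected:.0%} support")
```

Even with every individual question asked fairly, the self-selected poll lands near 90% simply because of who showed up to answer it.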

So the answers to these questions—even if individual questions were not biased—will already be in favor of Trump because the sample of the population who takes the poll is not representative of all American voters.

Research methods are important for you to understand so that you are scientifically literate in your daily life and understand how research can be manipulated. But more to the point here, because we are talking about academic research, the methods section of research articles is where the authors describe the what, who, why, when, where, and how of their research design so that you, dear reader, know what they did and how they arrived at their conclusions.

In the methods section, you will determine once and for all whether the study was qualitative or quantitative. Remember that qualitative methods involve the collection and analysis of verbal, textual, and other descriptive data. We are not measuring amounts of things, like in quantitative studies; rather, we are interested in describing the characteristics, or qualities, of things. Common qualitative methods include observations, participant observations, interviews, surveys with open-ended questions, ethnographies, grounded theory, and phenomenology, to name some of the ones you will come across. Quantitative methods involve the collection and analysis of numerical data. Numbers. Statistics. Quantitative research methods include closed-ended surveys. Other quantitative methods include experiments, data mining, and other statistical analyses of large data sets.

Methods sections describe the research participants and the research setting. They discuss key variables, including how variables were operationalized. They also might discuss strengths and weaknesses of their method. If that is not discussed here, it will likely be in the discussion or conclusion section. Another important reason that authors spend time writing detailed methods sections is so that other researchers can re-do their studies.

This is called replication, where someone re-does a study to see if they get the same results, and replication is an important part of the scientific process. Now, without extensive training, you will not fully understand everything—or even most things—in the methods section, and that is okay. I teach this stuff, and I do not know every type of statistical analysis and every type of methodological theory that I come across. You want to get the gist of it.

Next, let’s talk about the Results section, often called the Findings. This may or may not get its own section, depending on the article. Sometimes the results will be in or directly after the methods section, and other times they are merged with the discussion. In qualitative pieces, findings may be reported thematically throughout the article. In a non-empirical, or theoretical, article, there may be no results. The purpose of the results section is to present data to the reader as objectively as possible, that is, with minimal interpretation. Especially in quantitative articles, you will see lots of charts, tables, and graphs in the results section. Authors here will often state whether their hypotheses were supported by the data or not.

The discussion section is where authors interpret their data. They will often restate the hypothesis, thesis, or research question and write about how or why these were supported or not based on the data. Authors bring the literature back, entering that conversation again, and discuss how their findings contribute to the field. The discussion includes implications of the findings, not just the impact on the field, but on society as a whole! It explains why people should care about what the researchers found. Finally, the discussion section often mentions limitations of the study or areas of further research. This is useful because it helps readers understand what the study did not do and how future researchers could improve upon it.

If you are looking for ideas for your own research project, find articles on your topic and read their discussion sections. The authors will tell you what they hope people will do next!

Discussion sections and conclusion sections are often combined. Certain things, if not said in a discussion section, will appear in the conclusion. But all papers conclude! They end. They restate the main point and overall purpose of the paper. They summarize the main findings again. Was the hypothesis, thesis, or research question supported or answered? If so, why and how? They offer recommendations for researchers, policy makers, or some other target audience, further implications not mentioned in the discussion section, and some final considerations for the reader, like Jerry Springer’s final thoughts at the end of his episodes. Authors answer the important “So what?” question. Readers need to know why they should care about your research, and it is good practice to leave them with a strong reason at the conclusion.

And that is that! Let’s get into some examples. Dolinski et al. is up first. You know the drill. Pause the video and read the methods, results, discussion, and conclusion. Okay, first question, hotshot: Where did you find the methods section in this article? Correct, it is called “Procedure” here. Let’s go through the methods, or procedure, section and talk about what is important, and I will point out some questions to guide you through the second article later.

The first thing the authors do is describe how participants were recruited. They were incentivized with some cash, always useful. Participants were recruited via convenience sampling and snowball sampling, two common sampling methods. Grabbing someone off the sidewalk? That is convenient. Asking students to find other people they know? That is snowballing. You could imagine that the researchers then ask those new participants to ask their friends too, and that would be another layer of snowball sampling.
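The contrast between these sampling methods and true random sampling can be sketched in a few lines of code. The population and numbers here are invented for illustration:

```python
import random

random.seed(1)

# Hypothetical population of 10,000 people, identified by ID number.
population = list(range(10_000))

# Simple random sampling: every member has an equal chance of selection.
simple_random = random.sample(population, k=50)

# Convenience sampling: only the people who happen to pass the street
# corner (say, IDs 0-499) can possibly be chosen, no matter how fairly
# we pick among them.
passersby = population[:500]
convenience = random.sample(passersby, k=50)

print(max(simple_random))  # can be anywhere up to 9,999
print(max(convenience))    # can never exceed 499
```

Snowball sampling would add a further step: asking each selected person to recruit acquaintances, so who ends up in the sample depends on social networks rather than on chance.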

You might think that asking people on the street is simple random sampling, but it is not, because in random sampling, everyone has an equal chance of being selected. If I stand on a street corner, not everyone has an equal chance to be selected. Only those people walking by that street corner have a chance, and even then, I will choose to ask some and not others.

The authors then discuss elimination criteria, that is, why some of the recruits did not complete the study. If they were psychology students, they would surely have guessed what this study was about—and therefore would have skewed the results. I suppose I should apologize to you now for ruining your chance at being recruited for a study “dedicated to memory and learning.” You would spoil it. You could have invested that $50, and by the time you were my age, you could have been rich. Oops. Or if recruits had some trauma that this experiment might trigger, then the researchers could have violated ethics guidelines against doing undue harm to participants. So some key questions are answered here: Who were the participants, and how were they selected?

The next paragraph explains in detail how the experiment proceeded from the time the participant entered the lab to when they exited. It discusses methodological decisions the researchers made to comply with ethical standards, mentions important similarities and differences to the original Milgram experiments, and, most importantly, operationalizes some key variables and shows how many independent variables were controlled. Let’s read through it. A confederate, by the way, is a fake participant who is part of the research team, but the real participant thinks they were recruited just like them. The participant is briefed, and the authors reveal how they deceived the participant. This sounds bad—deceiving someone—but it is a necessary part of the experiment and illustrates a control variable: all participants will play the same role, that of the teacher.

The informed consent form is important because human subjects have to agree to be part of laboratory experiments. You cannot force someone to shock someone else against their will. This consent form also gives the participant a way out if they become uncomfortable.

There is a long discussion of setting up the experiment, showing the participant/teacher how to use the shock generator, how to read instructions to the confederate/learner, and so on. Note that the learner gets specific questions wrong every time. This controls for when and how many mistakes are made, or when the teacher might pull a shock switch. If the teacher wavers on pulling the shock switch, the researcher says the exact same thing to them: “Please continue.” “The experiment requires that you continue.” And so on. Do you see the importance of controlling for all these variables? Each participant is put in a situation that is as identical to the other participants’ as possible. So any differences in their behaviors should be a result of their own internal differences and cannot be explained by differences in the environment.

The authors note at the end of this paragraph that they collected key data when the participant refused further participation in the experiment or when their doubts over the harm they were inflicting on the learner required the authority figure to tell them to keep going. These are the dependent variables: the participant’s refusal to continue and their expressed doubts about harming the learner.

Finally, at the end of this section, the authors describe the debriefing, which is when they tell the participants what the study was really about. Debriefing is especially important in a study like this that can cause psychological harm to the participant. Imagine these people lying awake at night thinking, “Oh my gosh, I can’t believe I shocked someone until they screamed, and I kept doing it just because a man in a white lab coat told me to! Who am I?!”
In the original Milgram experiments, the participants grappled with the fact that they could have literally shocked someone to death.

How do you live with that? But that is part of the point of these studies. We are far more capable of inflicting fatal harm on a stranger because someone in authority tells us to than we think. And these experiments are just for some cash. What would we do when the stakes are real?

I said earlier that you would determine whether the study was qualitative or quantitative in the methods section. Are you sure which it was yet? Hint: experiments are almost always quantitative. If you are still not sure, the results section unequivocally shows quantitative data—data in the form of numbers—to describe what happened. I also said that the methods section might discuss strengths and weaknesses of the study. It does not describe either here, so we will have to wait for the results and/or discussion section.

So, on to the results section! This section objectively reports the findings—quantitatively, in this case. What did they find? 90% of participants pulled all 10 switches. But look at all the variables they say had no significant effect in this study! The first sentence, “Because…” The age of the authority figure did not affect obedience. I thought participants might be more likely to listen to the older man, but it seems they listened to the younger one just as often, so age was not a factor. And look, the sex of the learner did not matter either! On page 931, second paragraph: “We also examined the impact of the learner’s sex on obedience.” And they show you a graph so you know this was an important relationship. But check it out, they said that “It is worth remarking…” They found a difference, but the sample was so small that they cannot say with much certainty that sex mattered. Here is a weakness and a way that you could improve this study! Do it again but with a larger sample to try to replicate the effect with higher confidence.
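Why does a small sample make the authors so cautious? A rough back-of-the-envelope sketch shows how noisy group comparisons are at small sample sizes. All numbers here are invented for illustration, not taken from the article:

```python
import math

def se_of_difference(p1, p2, n1, n2):
    """Standard error of the difference between two sample proportions."""
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Hypothetical obedience rates: 90% with a male learner vs. 70% with a
# female learner, observed with only 20 teachers per group.
gap = 0.90 - 0.70
small_se = se_of_difference(0.90, 0.70, 20, 20)

# The same observed gap with 200 teachers per group.
large_se = se_of_difference(0.90, 0.70, 200, 200)

# A gap smaller than roughly two standard errors could easily be chance.
print(f"Observed gap: {gap:.0%}")
print(f"Rough noise margin, n=20 per group:  {2 * small_se:.0%}")
print(f"Rough noise margin, n=200 per group: {2 * large_se:.0%}")
```

With 20 people per group, even a 20-point gap sits inside a roughly 24-point noise margin, so it cannot be called significant; with 200 per group, the margin shrinks to about 8 points and the same gap would be convincing. That is exactly why replicating with a larger sample is the obvious next step.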

The authors were also struck by how few people expressed doubt or stopped during the experiment. Milgram’s original studies, and many of the replications, found that about 65% of people pulled all the switches, but here it was 90%. Granted, as they discuss earlier, they only used 10 switches instead of Milgram’s 30, so that explains the difference. And remember that in Milgram’s study, 85% of participants pressed the first 10 switches, so this is actually close.

But from looking at Table 1, we can see a couple of things: a little less than half of the participants who expressed doubt stopped; more females than males stopped; all but one person expressed doubt on only one switch; and about half the participants who continued only needed to receive pressure from the authority figure one time to do so. What this says to me is that people generally just need one firm push from an authority figure to keep doing something they are uncomfortable doing. Anyway, the discussion will probably repeat some of the stuff I just said! I cannot help but interpret this data.

In the discussion, the authors are going to interpret the data. They will discuss the findings in the context of the literature and talk about the implications of their findings. These are important questions to ask yourself about the discussion section: How do the findings contribute to the literature? What are the implications of this study? Note that this discussion section also concludes the paper. Remember, these sections may be separate or together. Since they are merged here, the discussion should also mention any strengths or weaknesses, suggest further areas of study, restate the purpose or the “So what?” of the paper, and provide some summary. So what do we have here? The first paragraph compares the results of this study and Milgram’s first 10 switches. Cool.
The second paragraph says, “…participants demonstrated such a total obedience that we achieved a ceiling effect, making it exceptionally difficult to demonstrate the influence of any moderators of the dependent variable.

” This means that since nearly everyone (90%) pulled all 10 shock switches, the researchers could not tell which variables caused them to do so or prevented the others from doing so. If you line up 10 comedians, 5 men and 5 women, and everyone thinks they are all equally funny, then you cannot tell what effect the comedians’ gender has on how funny audiences perceive them to be. You need a lot more people not to think they are funny. It is the same here. You need a lot more people not to pull all 10 switches. But to get that under these conditions, you need a larger sample size.

The authors then again point out the lack of a definitive effect of the sex variable on obedience, effectively throw up their hands, and say that the results “may provide inspiration for further studies in the paradigm.” They clearly state that the hypothesis was not supported. They contextualize their findings in the larger literature on this topic, which they say focuses on seeking factors affecting the level of obedience. They note that Milgram’s original findings remain remarkably consistent no matter what researchers try to do to find alternative explanations or ways of changing the level of obedience. They conclude with a quick summary of the study’s significance and a bit of a prophetic story about Milgram. Neat stuff.

I did more armchair analysis of their findings than they did, but that is why authors report the findings objectively! They do not always have room to elaborate on every possible or interesting thing in the results or discussion; that is up to the reader and is another reason why you need to develop scientific literacy in research methods, so that you can effectively read data like this and draw your own conclusions.

Okay, I was going to review a second article, Rosenberg et al. on participants’ perspectives on drug treatment programs in the criminal justice system, but this video is long enough.

If I get a bunch of requests for another one, I will do it, but hopefully Dolinski et al. was enough. If you do read Rosenberg et al., note the main difference right off the bat, namely that it is primarily a qualitative study. Note how differently the methods and results sections read because it is qualitative instead of quantitative. Instead of reporting numbers in the findings, the authors report textual pieces of interview transcripts. And they organize these pieces of data by conceptual, or thematic, categories. These categories were derived from qualitative coding, as you can read about in the methods, which involves painstakingly going through your interview transcripts and tagging the text with codes, or asking yourself, “What is this about?” for each line in the interview.

So, I hope this video was useful in understanding the content and functions of the methods, results, discussion, and conclusion sections of academic journal articles. Remember, the methods section explains how the study was conducted. The results objectively report the data. The discussion interprets the data and places findings in the larger context of the literature in the field. And the conclusion sums up the main points, mentions implications, additional considerations, or ideas for future research, and concludes the paper.