Modern research is governed by a lot of different ethical codes and guidelines, which we'll talk about in a little bit. But the one thing all these different codes are based on is something all of us personally have: our own moral compass. But morality is not a very simple thing if you actually think about it. Morality refers to those principles concerning the distinction between right and wrong, or good and bad behavior. And as you grow older and further develop your moral code, you start to realize that it's never a simple thing. There is no such thing as a simple dilemma. Any sufficiently detailed, quote-unquote, morality differentiates between varying degrees of righteousness. It's never yes or no; it's never right or wrong; it's never black or white; it's always shades of grey. And when you get into these kinds of moral dilemmas with somebody else, you're really just trying to figure out who is the more righteous of the two of you. These principles we all have can be derived from many places. For a lot of people they come from their philosophies or their personal beliefs. Other people get their morality from their culture or their religion. Scientists, obviously, have their own codes of morality, their own moral principles. But they can't be allowed to just act on those. There has to be something more for a scientist, and that's what ethics is all about. Ethics is the study of proper action; it's the study of how moral principles should be applied in the real world. This discussion of proper behavior originated in ancient times with philosophers like Socrates and Plato. But the main point I'm trying to make here is that morals and ethics are not the same thing. Morals are personal beliefs regarding what is right and wrong, whereas ethics are socially defined guidelines for proper behavior under particular circumstances.
I'm sure you can think of many examples where morals and ethics don't match. Maybe it's a person who is strongly opposed to homosexual behavior.
But they're in a position where they need to treat those individuals fairly, like maybe bake them a cake or something like that. Clearly there's a lot of conflict in the world, and sometimes these conflicts can be very problematic. The example I just mentioned is a real-world example, but it's kind of a silly one; there are much more serious ones out there. For example, in the United States we have developed ethical codes as a country and written them into our constitution with regard to how people should be treated. So it doesn't matter if you're young or old, male or female, or what your ethnicity is, or anything: in our country all people should be treated fairly. So we have those kinds of ethical codes. Even if you're strongly opposed to a certain kind of person for some reason, you're not allowed to act on those moral beliefs, because we have ethical codes determining how you should behave in those situations toward those people. But this is on the country scale, and what we're mostly talking about here is science. So what are the ethical codes of conduct for scientists? How far can scientists go when it comes to testing their hypotheses? People ask these kinds of questions all the time. Is it acceptable for a scientist to lie to their participants, or to intentionally inflict pain or emotional suffering, or to risk their participants' health and well-being? Or to use research methods that haven't been fully tested yet, or to take advantage of resources or groups of individuals that are scarce or endangered? There's no easy answer to this kind of stuff. As I already mentioned, these ethical dilemmas, these moral issues, are shades of grey. It's hard to figure out what the best course of action is in many cases.
But one thing that does seem very clear is that psychologists need to be especially ethical; they need to keep ethics in mind much more so than many other scientific communities.
Because if you think about it, we have the potential to do massive harm to others. We are using human subjects, and our results are helping to shape our understanding of human behavior. So there's great potential for harm, but there's also great potential for benefit here. And that's the thing I love so much about psychology: it teaches us about ourselves and, hopefully, helps us to change ourselves for the better. What we're talking about here is research ethics: the responsibility of researchers to be honest and respectful to all individuals who may be affected by their research studies or by the reports of their studies' results. Researchers are usually governed by sets of ethical guidelines, such as those of the American Psychological Association (the APA), that assist them in making proper decisions and choosing proper actions. There are tons and tons of examples I could give you of how morals and ethics come together in research, but here's one simple thing I'd like to show you. Maybe it's not so simple, but this is the best I could do to convey the general idea: a framework for thinking about research ethics. There are two things you'll notice here. First, there are three different kinds of people who could be affected by your research: the participants in your study, the greater scientific community, and society as a whole. And then there are four basic moral principles that we need to consider. Just to mention a couple of examples: when it comes to the moral principle of weighing risks against benefits, the research participants must receive some kind of benefit from participating in your study, whether it's some kind of compensation for their time and effort, or maybe they could actually be physically improved.
Maybe the treatment you're giving them could actually help their symptoms or something like that. But whatever those benefits are, they must outweigh any potential risks of participation.
Regarding the scientific community, this weighing of risks against benefits largely refers to wasting other people's time and money. We only have so much time and money to do science, and if you're putting out science that is no good, that's just going to get ignored, or that's simply bad science, then you are definitely harming the scientific community in that regard. So you want to make sure you're putting out good-quality work that will actually help to advance scientific understanding. And then when it comes to society at large, weighing risks against benefits means we have to think about how society might be affected in the long run. People are probably going to hear about your study from secondhand sources and may be influenced in ways that are sometimes hard to predict, but you have to try to think about the ways the greater community could be affected by you publishing the study. Is society potentially going to be misled? Are people going to misunderstand what you were trying to show? Could this potentially do some harm? There are tons of examples I could give you of how this kind of thing has done a lot of harm to society, but let me get back to that in a moment. This first moral principle, weighing risks against benefits, is obviously a pretty simple one, at least conceptually. You want to try to list all the different ways your study benefits other people and compare that against the risks of your study. It's definitely a balancing act, and scientific research is only considered ethical if the benefits outweigh the risks. Some classic examples of how this kind of balancing didn't work out are the MMR vaccine study and the Stanley Milgram studies. I'm sure you've heard the claims about there being a link between children getting the MMR vaccine and children developing autism.
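If it helps to make this concrete, here's a little sketch in Python of that grid of stakeholders and principles, ending with the crude balancing test. The names and the numeric scoring are purely illustrative, not anything official from the APA; real ethics review is a qualitative judgment made by committees, not arithmetic.

```python
# Illustrative sketch only: three groups of stakeholders crossed with
# four basic moral principles, as described in the framework above.

STAKEHOLDERS = ["participants", "scientific community", "society at large"]

PRINCIPLES = [
    "weighing risks against benefits",
    "acting responsibly and with integrity",
    "seeking justice",
    "respecting people's rights and dignity",
]

def build_checklist():
    """Return an empty grid: one entry per (stakeholder, principle) pair,
    to be filled in while planning a study."""
    return {(s, p): None for s in STAKEHOLDERS for p in PRINCIPLES}

def benefits_outweigh_risks(benefits, risks):
    """The crude balancing test: a study is only considered ethical if the
    total benefits exceed the total risks. (Numeric scores are a stand-in
    for what is really a qualitative weighing.)"""
    return sum(benefits) > sum(risks)
```

Walking through each of the twelve cells of that grid, even informally, forces you to notice cases like the one discussed later, where the benefits accrue to one group while the risks fall on another.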
The researcher who did that study clearly did not understand, or did not care about, the risks of publishing it. That study scared people; it is still scaring people, and it's been many years since it was published. It was junk science: the study was eventually retracted, and people showed that the data had been fabricated, but the damage has been done, is still being done, and will probably continue to be done well into the future. This one bad study dealt a major blow to science as a whole; people have just tuned science out because of stuff like this. So clearly this person did not know, or did not care, what they were doing to society as a whole. But here's an example you're probably much more familiar with if you've taken Psych 101: Stanley Milgram's obedience studies. Just to remind you, Stanley Milgram was interested in how far people would go when it comes to obeying an authority. So what he did was bring people into the laboratory where, just by having a person in the room telling them what to do, he would get participants to believe they were inflicting pain, and possibly death, on another participant. Nobody was actually being hurt or killed, but the participants believed they were doing this. They showed various signs of emotional distress and trauma: sweating, trembling, stuttering, biting their lips, groaning, digging their fingernails into their skin, and so on. Clearly these people were being harmed, substantially harmed, but Stanley Milgram didn't see this as nearly as much of a risk as we would today. If you tried to do that study today, there's no way they would let you do this kind of stuff to people; they would force you to find some way to minimize that potential for harm. The second moral principle I mentioned, "acting responsibly and with integrity", just refers to the need to be a competent scientist.
You need to do stuff by the book. You need to follow the rules.
You need to do good science. And you need to act the part, too. People have certain expectations of you as a scientist, and you need to meet those expectations, whether those people are the participants in your study, other scientists, or just the community. People expect you to know what you're doing and to do a good job, and you have to meet those kinds of expectations. The third moral principle I mentioned is the seeking of justice. This just means that as a researcher you need to treat people fairly. Participants should all get equal compensation; they should be treated the way you would hope to be treated. You don't want to do harm toward one group but not another; the potential for harm should be spread out evenly across everyone. There are tons of examples where this kind of thing didn't work out in the past, but maybe the most blatant, the most terrible example of how this went wrong was the Tuskegee syphilis study. If you haven't heard of it, this was a study that ran from 1932 to 1972, in which researchers working for the US government were studying the long-term course of untreated syphilis. During the course of their study, penicillin became the standard treatment for syphilis, and it was a very effective treatment. But because treating participants wasn't part of their original study design, they didn't provide it. Not only did they not give penicillin to their participants, they actively denied it to them and suppressed that information, because, like I said, that wasn't part of their study design; they were testing something else. To make a long story short, in order to test their hypothesis they denied life-saving treatment to their dying participants, and many of them did die as a result. This was a huge controversy that caused a big, big problem. You can see what I mean here: this kind of stuff should never happen, obviously.
Then the next moral principle I mentioned was “respecting people’s rights and dignity”.
That basically refers to a couple of things. It refers to allowing people to be autonomous, to make their own decisions. And in order to make those decisions well, they need adequate information. You want them to make an informed decision, so you have to give them something we call informed consent. That's a document, typically a paper document, listing all the potential risks and benefits of participating in your study, all the reasons why you're doing the things you're doing in the study, and all the other details. The participants sign it, and you keep it on record. So that's a nice little document that proves the participant knows what they're getting into and consents to it. That's the first big thing: autonomy. The second one is privacy. It's a common expectation that if you go into a laboratory and participate in research, the researcher isn't going to start talking about you when you're gone, or start sharing the data they've collected with their friends and co-workers. It would be a nightmare if you participated in a study of something sensitive, like beliefs about homosexuality, or drug use, or something, and then the researcher went on the news and started talking about you. That's horrible. That kind of stuff should not, and thankfully does not, happen very often, because we have expectations of things like confidentiality. This is the agreement, often unspoken, that you should not disclose participants' personal information. You can talk about your data, of course, but you shouldn't talk about individual participants. So, all this ethical stuff... there's far too much to cover in this video; there are entire courses on just ethical issues. All of this can obviously be a bit overwhelming, and sometimes there is no real solution to an ethical problem.
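To make that last point about confidentiality concrete, here's a minimal sketch of what de-identifying a dataset might look like, so you can talk about your data without talking about individual participants. The field names here are made up for illustration, and real de-identification is harder than this: indirect identifiers, like age combined with zip code, can still give people away.

```python
import uuid

def deidentify(records, id_fields=("name", "email", "address")):
    """Replace direct identifiers with a random participant code so the
    dataset can be analyzed and discussed without exposing individuals.
    (Illustrative field names; real studies need a fuller identifier list.)"""
    cleaned = []
    for rec in records:
        # Copy the record without its directly identifying fields.
        rec = {k: v for k, v in rec.items() if k not in id_fields}
        # Assign a random, non-reversible code in place of the identity.
        rec["participant_code"] = uuid.uuid4().hex[:8]
        cleaned.append(rec)
    return cleaned
```

For example, `deidentify([{"name": "Alice", "email": "a@example.com", "response": 5}])` keeps the response value but drops the name and email, substituting a random code.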
There are definitely going to be unavoidable ethical conflicts that you’ll come across.
Because the fact is, there's very little, if any, psychological research that is completely risk-free, and wherever there's risk there's a problem; there's always some kind of balancing that needs to be done. And that balancing may never be perfect. For example, when you do this balancing, you might notice that the benefits do outweigh the costs, but the benefits only apply to the scientific community while the costs only apply to the participants. Clearly, that's an issue. How you resolve that issue I can't really say, because it's nuanced; it typically requires a lot of in-depth discussion among committees, research ethics committees. We'll talk about those committees in a later video. But the general idea is this: while it's not possible to eliminate all ethical conflicts, it is generally thought to be possible to deal with them in a responsible and constructive way.