

Methodology and Analytical Strategies

Data Collection Methods

Several strategies are employed to test the research hypothesis that Black people are more likely to experience a greater degree of intrusion: a Black individual is more likely to be arrested, summoned, or detained for a longer period, and a young Black male suspect is more likely still to experience a high degree of intrusion. The research's secondary data will comprise all stops and frisks conducted by the New York City Police Department and recorded during the practice's peak in 2011. Following a stop, police officers are required to complete a UF-250 stop-and-frisk form recording various details of the stop, such as the suspect's demographic features, the location and time of the stop, and the suspected crime and rationale for the stop (MacDonald and Braga, 2019).

A significant limitation of the dataset is that no demographic features or other identifying data are available about the police officers. When an individual is stopped, officers may conduct a frisk, and if they reasonably believe the person is armed and dangerous, they may additionally conduct a search (Levchak, 2017). Following a search or frisk, officers decide whether to arrest the suspected individual or issue a summons, all of which is recorded on the UF-250 form. Although officers are required to complete the UF-250 form for stops initiated on reasonable suspicion, they may not always do so, and a potentially large number of stops therefore go unrecorded. Officers may also follow scripts of suspicion when filling out forms in order to justify stops. Lastly, because UF-250 forms are completed by hand, there are possible errors in filling in and transcribing stop information (Morrow, White, and Fradella, 2017). Thus, the research data are neither a complete nor a fully accurate record of all conducted stops and frisks.

The use of secondary data sources in research is advantageous because it is cost-effective in most cases. Since the data have already been collected, the researcher does not need to invest money, time, or effort in the data collection stages of the study. According to Foley (2018), secondary data can sometimes be purchased by a researcher looking to use it to inform a study, but the costs are nearly always lower than the expense of creating the same dataset from scratch.

Additionally, data from a secondary source are usually already prepared and stored in an electronic format, so the researcher can spend time analyzing the data rather than preparing them for analysis. Secondary data also give the researcher access to data collected by the government that other institutions cannot collect, saving both time and resources. When using the observational method, it is important to evaluate the quality of the data before use, which depends on several factors, including completeness, accuracy, and what data were collected. The observation method suits this research because it is simple and does not intrude on the setting being studied in ways that could compromise results. Observing the recorded occurrence of stops and frisks reveals trends in the activity, and a keen examination of the New York City Police Department's UF-250 forms allows problems to be identified and analyzed in depth. Data collected through observation are accurate and reliable, which improves the precision of the research results, and the observation method also enables a direct check of the accuracy of the UF-250 forms.
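The completeness check described above can be screened programmatically before analysis. The sketch below uses an invented handful of records and hypothetical column names (the actual UF-250 field names differ); it computes per-field completeness and flags records needing manual review against the paper forms:

```python
import pandas as pd

# Hypothetical subset of UF-250 fields; real column names may differ.
stops = pd.DataFrame({
    "race":         ["B", "W", None, "B", "H"],
    "age":          [19, 34, 27, None, 22],
    "stop_minutes": [10, 5, 15, 8, None],
})

# Completeness: share of non-missing values per field.
completeness = stops.notna().mean()

# Flag rows with any missing field for manual review against the paper form.
incomplete = stops[stops.isna().any(axis=1)]
print(completeness)
print(f"{len(incomplete)} of {len(stops)} records need review")
```

A check like this quantifies the transcription-error limitation noted earlier rather than merely asserting it.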


An exploratory, longitudinal, quantitative study of secondary data will be employed to identify the likelihood of three different levels of intrusion (arrest, summons, and detainment) by race. Arrests are the most intrusive outcome of a stop and frisk. When police officers arrest individuals, those individuals are forced to dedicate large amounts of time to the criminal justice system. When individuals are arrested, there are two possible outcomes. First, a person may be issued a Desk Appearance Ticket, meaning they are sent home with a ticket instead of waiting in jail for up to twenty-four hours to see a judge; the individual is then instructed to appear in court for arraignment at a later date. This ticket is granted only for minor offenses and is given rarely. In most cases, individuals are fingerprinted, processed, and forced to wait twelve to twenty-four hours for a judge to determine bail (Kane, 2018). Summonses are the second most intrusive outcome because the summoned individual must appear in court at a later date. If the individual pleads not guilty, they must then appear at another court date to be tried. Additionally, if a person does not appear in court, a warrant will be issued against them.

Lastly, a stop with temporary detainment is the least intrusive outcome, as it results in the least amount of time dedicated to the criminal justice system. To test the research hypothesis, logistic regression will be used to estimate the likelihood of a person being arrested or summoned by race, and linear regression will be employed to find the average length of detention by race. Since this is a longitudinal study, the research will have multiple independent variables. The dependent variable of this study is the degree of intrusion (arrest, summons, detainment), and the independent variables are the factors that lead to that degree of intrusion, primarily individuals' gender, potential crime, age, time of day, and, most notably, race.
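To make the coding of the dependent variable concrete, the three intrusion levels could be derived from each stop record as sketched below. The column names and sample values are illustrative assumptions, not the actual UF-250 field codes:

```python
import pandas as pd

# Illustrative stop records; field names are assumptions, not UF-250 codes.
stops = pd.DataFrame({
    "arrest_made":    [True, False, False],
    "summons_issued": [False, True, False],
    "race":           ["Black", "White", "Hispanic"],
    "sex":            ["M", "F", "M"],
    "age":            [19, 42, 25],
    "hour_of_day":    [23, 14, 2],
})

def intrusion_level(row):
    """Rank outcomes by intrusiveness: arrest > summons > detainment."""
    if row["arrest_made"]:
        return "arrest"
    if row["summons_issued"]:
        return "summons"
    return "detainment"

stops["intrusion"] = stops.apply(intrusion_level, axis=1)
print(stops[["race", "intrusion"]])
```

Ordering the checks from most to least intrusive guarantees each stop is assigned exactly one level, matching the hierarchy described above.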

The longitudinal design will allow the subjects to be followed in real time, which makes it possible to better authenticate the real sequence of events and gives insight into cause-and-effect relationships. Longitudinal studies also allow repeated observations of the same subjects over time. Three forms of validity and one form of reliability should be addressed in a longitudinal study: time unit validity, time boundary validity, time period validity, and longitudinal reliability. Time unit validity concerns how to segment the timeline; time boundary validity concerns how long the timeline should be; and time period validity concerns which period to study. Longitudinal reliability concerns whether an alternate judge would have assigned the same events to the same sequences, categories, and periods. According to Street and Ward (2012), various techniques are needed to address these issues: time unit validity is addressed by harmonizing the unit of time to the pace of change; time boundary validity through the use of member checks and a formal case study protocol; time period validity through the analysis of archival data; and longitudinal reliability through the use of triangulation.
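Harmonizing the unit of time to the pace of change can be operationalized by aggregating the same stop records at different granularities and comparing the resulting trends. A minimal sketch, using invented stop dates rather than real UF-250 timestamps:

```python
import pandas as pd

# Invented stop timestamps, purely for illustration.
stops = pd.DataFrame({
    "stop_time": pd.to_datetime([
        "2011-01-05", "2011-01-20", "2011-02-11",
        "2011-02-15", "2011-03-02",
    ]),
})

# Segment the timeline at monthly vs. quarterly units and compare counts;
# the unit that tracks the pace of change without noise is the valid one.
monthly = stops.set_index("stop_time").resample("MS").size()
quarterly = stops.set_index("stop_time").resample("QS").size()
print(monthly)
print(quarterly)
```

If monthly counts show structure that quarterly counts smooth away, the monthly unit better matches the pace of change.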

Data Management of Research

Data management, the process of ingesting, storing, organizing, and preserving the data created and collected, is essential in this research (Rouse, 2019). Several technologies, tools, and techniques can be used as part of the data management process; to manage the data collected from secondary sources, this research will employ a Relational Database Management System (RDBMS). An RDBMS permits users to create, update, manage, and interact with a relational database, which stores data in tabular form. The relational database's ability to handle a wide range of data formats and process queries efficiently will help ensure the gathered data are managed well. Further, a relational database arranges the gathered data into tables that can be linked based on common fields, allowing data from one or more tables to be retrieved easily with a single query. A relational database management system works by creating several tables, each arranged into columns and rows. These tables are connected in different ways: a record in one table can link with a single record in a different table.

Also, a record in one table may be linked to several records in another table, and several records in one table may be connected to numerous records in a different table. These relationships help in the easy retrieval of stored data when required. A relational database management system ensures data security, data consistency, better flexibility and scalability, easy maintenance, and a reduced risk of errors (Naeem, 2020). With the research data organized in a relational database, it will be easy to navigate and retrieve the needed data, since the tables connect records with related records in other tables. Data management in research is thus essential: it saves time, increases research efficiency, facilitates new discoveries, increases research impact, improves accessibility, and safeguards research data.
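The table-linking idea above can be illustrated with Python's built-in sqlite3 module. The schema (a stops table referencing a precincts table by a shared key) is a hypothetical simplification of how the UF-250 data might be organized, not the actual database design:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two related tables: each stop record links to a precinct by precinct_id.
cur.execute("CREATE TABLE precincts (precinct_id INTEGER PRIMARY KEY, borough TEXT)")
cur.execute("""CREATE TABLE stops (
    stop_id INTEGER PRIMARY KEY,
    precinct_id INTEGER REFERENCES precincts(precinct_id),
    race TEXT,
    arrest_made INTEGER
)""")
cur.executemany("INSERT INTO precincts VALUES (?, ?)",
                [(1, "Brooklyn"), (2, "Manhattan")])
cur.executemany("INSERT INTO stops VALUES (?, ?, ?, ?)",
                [(100, 1, "Black", 1), (101, 1, "White", 0), (102, 2, "Black", 0)])

# A single query retrieves linked data from both tables.
rows = cur.execute("""
    SELECT p.borough, COUNT(*) AS n_stops
    FROM stops s JOIN PRECINCTS p ON s.precinct_id = p.precinct_id
    GROUP BY p.borough
    ORDER BY p.borough
""").fetchall()
print(rows)
```

The JOIN demonstrates the claim that one query can retrieve data spanning multiple linked tables.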

Analysis of Data

The research data collected from the UF-250 forms will be analyzed using SPSS statistical analysis software; the UF-250 forms include a section dedicated to the duration of detention for each stop. The identified dependent variable is the degree of intrusion (arrest, summons, detainment), while the independent variables include several factors that may lead to that degree of intrusion, such as gender, potential crime, age, time of day, and, most notably, race. The collected raw data will be imported into SPSS from an Excel file for analysis. Depending on the variables to be analyzed, SPSS provides specific commands to assist with the analysis. The software has defined procedures for its use, and all options can be specified to obtain the most accurate results. Performing commands in SPSS is simple and easy to understand, making analysis an easy task for users.

The software presents its findings effectively and correctly, offering scholars a better sense of appropriate future studies and a direction for moving forward. Binomial logistic regression analysis will therefore be performed in SPSS to estimate the likelihood of a person being arrested or summoned by race. Binomial logistic regression is easy to implement and interpret: it makes no assumptions about the distribution of classes in feature space, achieves good accuracy on many simple datasets, performs well when the dataset is linearly separable, and is very fast at classifying unknown records. Linear regression analysis will be used to find the average length of detention by race. Linear regression is likewise easy to implement, fits linearly separable datasets almost perfectly, is often used to find the nature of the relationship between variables, and its overfitting can be reduced by regularization.
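Although the analysis will be run in SPSS, the form of the two models can be sketched in Python for clarity. The data below are synthetic, generated only to show the model structure; the single binary race predictor is a simplification, and nothing here reflects the actual UF-250 dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, illustrative data: binary race indicator (1 = Black, 0 = other).
n = 1000
black = rng.integers(0, 2, n)

# Arrest outcome simulated with a higher rate when black == 1, purely to
# illustrate the model form, not as an empirical result.
p_arrest = np.where(black == 1, 0.30, 0.15)
arrested = (rng.random(n) < p_arrest).astype(float)

# Binomial logistic regression: intercept and coefficient fit by gradient
# ascent on the average log-likelihood.
X = np.column_stack([np.ones(n), black.astype(float)])
beta = np.zeros(2)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 1.0 * X.T @ (arrested - p) / n

odds_ratio = float(np.exp(beta[1]))  # odds of arrest, black vs. other

# Linear regression (ordinary least squares): detention minutes on race.
minutes = 10 + 4 * black + rng.normal(0, 2, n)
slope, intercept = np.polyfit(black.astype(float), minutes, 1)

print(f"odds ratio: {odds_ratio:.2f}, detention gap: {slope:.2f} minutes")
```

The logistic model reports an odds ratio for the binary outcome (arrest/summons), while the linear model reports an average difference in detention length, matching the two hypotheses stated above.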

Ethical Concerns of Research

According to Paolatubaro (2015), the use of secondary data in research is often believed to relieve researchers of the burden of applying for ethical approval. However, the entire research process entails ethical concerns, whether the data come from primary or secondary sources. Using secondary sources of data is in itself ethical, as it reduces the burden on respondents, maximizes the value of any investment in data collection, and ensures the replicability of study findings, and hence greater transparency of research procedures and integrity of research work. However, the value of secondary data is only fully realized if the advantages outweigh the risks, particularly the risks of disclosing sensitive information and re-identifying individuals.

For this to occur, the use of secondary data must meet some crucial ethical conditions: the data should be de-identified before release to the researcher, study subjects' consent can be reasonably presumed, analysis of outcomes must not allow participants to be re-identified, and data use must not result in any harm or distress. Thus, this research will not re-identify human subjects in the secondary data and will protect their privacy, confidentiality, and personal security. Since the research will use secondary data, an informed consent process for the participants is unnecessary.
