In this paper we extended the work of Makel et al. (2012) by determining the replication rate for the years 2012 through 2020. We found a replication rate of 1.39%. Compared with the 1.07% replication rate found by Makel et al. (2012), this is roughly 0.32 percentage points higher, which corresponds to roughly a 30% increase. This increase is smaller than we had hoped to find after the release of their paper and its expected effects. From this we conclude that the research community still undervalues replication studies. This finding underlines the importance of determining what keeps researchers from performing replications. In the remainder of this study we made a start at answering that question. We expected to find that researchers undervalue replications because replications have too little impact on the scientific success of the papers they target. We measured this by comparing the Mean Normalized Citation Scores (MNCS) of papers, depending on the type of replication they received and on whether the replication succeeded, to the average MNCS of papers. We found no significant differences for papers that received successful direct, unsuccessful direct, or unsuccessful conceptual replications. We did find significantly lower MNCS for papers that received successful conceptual replications; this effect of conceptual replications was the inverse of our expectation and requires further attention in future research. Finally, we examined whether direct replications had more impact on the MNCS of original papers than conceptual replications. We found no significant difference in effects, but this result is inconclusive because of the inverted relation between conceptual replications and the MNCS of their original papers.
Our results show that replication studies do not appear to significantly affect the success level of their original papers (except for successful conceptual replications). The insignificant impact of replication studies may therefore play a role in the undervaluation of replications in the scientific research community. We invite other researchers to further explore the reasons for this undervaluation. Hopefully, by doing so, we will arrive sooner rather than later at a world of reliable and validated research.
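The comparison of the two replication rates above is simple arithmetic, but the distinction between the absolute gain (in percentage points) and the relative gain (roughly 30%) is easy to misread. A minimal sketch, using only the two rates reported in the abstract:

```python
# Replication rates reported in the abstract (both in percent of published papers).
rate_makel_2012 = 1.07  # Makel et al. (2012)
rate_this_study = 1.39  # this study, covering 2012-2020

# Absolute difference, in percentage points.
absolute_gain = rate_this_study - rate_makel_2012

# Relative increase over the earlier rate, in percent.
relative_gain = absolute_gain / rate_makel_2012 * 100

print(f"absolute gain: {absolute_gain:.2f} percentage points")
print(f"relative gain: {relative_gain:.0f}%")
```

This makes explicit why a difference of about a third of a percentage point can still be described as a roughly 30% increase: the baseline rate itself is close to 1%.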
Human Factors research in automation suggests that trust strongly affects how drivers interact with Level 2 technology. Understanding its capabilities and limitations is important for calibrating trust and for overall road safety. In the present study we examined how drivers' self-reported trust develops before experience (pre-test), immediately after experience (post-test), and five to seven days after experience (follow-up) with a Level 2 (Partial Automation) vehicle in a driving simulator. Additionally, we investigated a possible effect of the video procedure on self-reported trust. Contrary to our expectations, the results showed that self-reported trust decreased after more experience with the Level 2 (Partial Automation) vehicle and differed between the two videos. This study also investigated the role of sense of presence in a simulated driving experience. Analysis of the results and of drivers' feedback showed that the generally low scores on sense of presence could possibly be explained by a lack of involvement and predictability.