Social Sciences Faculty Research Symposium
The Social Sciences Faculty Research Symposium is a semesterly event designed to introduce attendees to a specific research topic, the scholarship of our faculty, and, more generally, the process of social-science research.
Juvenile Justice Policy Goals: A Qualitative Inquiry into Purpose Clauses
Dr. Emily Pelletier, Assistant Professor of Criminal Justice, QCC-CUNY
March 9, 2022 12:00 PM - 1:00 PM
Dr. Emily Pelletier's Biography
Click Here to watch the Recorded Event
In this presentation, Dr. Emily Pelletier, Assistant Professor of Criminal Justice at Queensborough Community College, will discuss juvenile justice systems across the US. While these systems share a common history of rehabilitative ideals and constitutionally required due process protections, each state retains the responsibility to create and amend the statutes governing its juvenile justice system. Differences among state statutes, and statutory changes over time, raise the question of whether juvenile justice systems in the US pursue similar goals and what those goals specifically entail. This presentation will identify thematic goals of state juvenile justice systems in the United States, using data from a qualitative content analysis of the purpose clauses in state juvenile justice legal codes.
Weakening Immigrant Children’s Rights During the Trump Administration
Dr. Gabriel Lataianu, Assistant Professor of Sociology, QCC-CUNY
October 20, 2021 12:10 PM - 1:00 PM
Dr. Gabriel Lataianu's Biography
Click Here to watch the Recorded Event
What Could a Robot Know About? The Discovery of the Mind in Language
Dr. Patrick Byers, Assistant Professor of Psychology, QCC-CUNY
March 17, 2021 12:00 PM - 1:00 PM
Dr. Patrick Byers's Biography
Click Here to watch the Recorded Event
The development of deep neural networks (DNNs) has significantly advanced artificial intelligence, with machines now able to carry out complex tasks that, in some cases, appear to exceed human ability. However, the underlying operation of DNNs is opaque (not readily interpretable) and, despite their general reliability, prone to somewhat unpredictable and potentially serious errors. This has prompted significant efforts to develop systems that can provide meaningful explanations of their own functioning, so-called “explainable AI.” Such systems would account for their behavior as human beings do, i.e., in terms of reasons involving beliefs/knowledge, attitudes, and/or desires. Work in discourse analysis reveals profound challenges facing these efforts. Ascriptions of what others know, believe, think, feel, or want have a clear meaning only in relation to certain assumptions about how people can be expected to behave (and not to behave). These assumptions are prominently reflected in judgments about whether a person’s behavior reflects genuine understanding or merely rote training. A number of recent cases of DNN behavior suggest that these assumptions do not hold for such systems.