Today, applications of artificial intelligence (AI) are everywhere: in transportation, security, medicine, manufacturing, and social media feeds.
While much study has been done on the engineering and ethical dimensions of pervasive new technologies, said Baobao Zhang, a Klarman Postdoctoral Fellow, empirical social science research is also needed to understand how people react to technology, and how governments create policy around it.
"We live in a democratic society and I think a lot of these technologies are being pushed onto users without their fully understanding what's happening," Zhang said. "I think we should value the input of the users and the general public so we don't get what some people worry about: surveillance capitalism or the surveillance state."
Based in the Department of Government, Zhang is researching trust in digital technology and the governance of AI. For her, the three-year Klarman Postdoctoral Fellowship is an opportunity to research technology policy, working closely with Sarah Kreps, the John L. Wetherill Professor of Government.
"I'd followed Baobao's work for years," Kreps said. "She and I were at the same workshop when she was a first-year Ph.D. student [at Yale], and she was already incredibly impressive. She was working on experimental methods, but then started to work on artificial intelligence, which was an emerging interest of mine."
Zhang uses social science techniques, such as surveys, to measure perceptions of technology among the general public and among specific populations, such as machine learning researchers.
"Based on my existing work, the public is not a monolithic whole. There's a lot of heterogeneity in people's trust in these emerging technologies," Zhang said. "What I'm hoping to do in collaboration is to study what trust looks like among the people who are using the technology [and among] members of the public."
In a current project with collaborators from Oxford University and the University of Pennsylvania, Zhang surveyed AI and machine learning researchers for their perspectives on AI ethics and governance. In other research, the team found perception gaps between technical experts' expectations for AI development timelines and those of the public.
"Interestingly, the public thinks that … very advanced AI systems will arrive much sooner compared to the machine learning researchers," Zhang said.
Zhang also researches the increasing automation of labor and traditional topics in international relations, such as interstate conflict. She is a co-author of a recent paper in The Journal of Conflict Resolution.
With the onset of the COVID-19 pandemic, Zhang pivoted her research to focus on attitudes toward COVID-19 surveillance technology. Her first collaboration with Kreps was on digital contact tracing. An article about this research was cited in a United States Senate report in July.
"We're both very interested in emerging technologies, and these COVID-19-related projects (we also have one on vaccines) are a natural space for our collaborative efforts," Kreps said.
Zhang plans to write a book during her Klarman Fellowship on citizens' trust in emerging digital technologies, including ones deployed in response to COVID-19.
Zhang also has an affiliation with the Department of Information Science, where she is a member of the AIPP group.
"Baobao brings unique disciplinary knowledge to the AIPP group, and it's fantastic to hear her perspectives on both technical and social science work," said an assistant professor in the Department of Information Science who is co-sponsoring Zhang's Klarman fellowship. "Baobao's research addresses critical questions about the governance of artificial intelligence in an inherently interdisciplinary way."
"Cornell is an ideal place for me to do this research," Zhang said. "When I found out about the Klarman Fellowship, I was excited to apply because I knew there were a lot of folks here doing the type of research I wanted to be doing."