Addressing Age-Related Bias in Sentiment Analysis

Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), pp. 1-14, 2018.

Keywords:
significant age; old adult; aging; older adults; social bias
TL;DR:
We find significant age-related bias among a variety of tools and commonly-used word embeddings and successfully reduce bias in a custom-built classifier.

Abstract:

Computational approaches to text analysis are useful in understanding aspects of online interaction, such as opinions and subjectivity in text. Yet, recent studies have identified various forms of bias in language-based models, raising concerns about the risk of propagating social biases against certain groups based on sociodemographic factors […]

Introduction
  • Although the concept of ageism was identified several decades ago [12], negative attitudes and stereotypes about growing older are only now receiving worldwide attention.
  • Age discrimination and age bias have begun to receive attention within HCI, where work highlights the ways that researchers and designers tend to treat aging as a “problem” with technology as a solution, rather than viewing aging as a complex and natural part of the lifespan [74].
  • To help counter age-related stereotypes around technology use, prior work has emphasized cases of older adults going online to participate actively.
Highlights
  • Although the concept of ageism was identified several decades ago [12], negative attitudes and stereotypes about growing older are only now receiving worldwide attention
  • In doing so we contribute: (1) evidence of significant age bias in algorithmic output, where, for example, sentences with “young” adjectives are 66% more likely to be scored positively than the same sentences with “old” adjectives; (2) a nuanced understanding of how the technical characteristics of various sentiment analysis methods impact bias in outcomes, namely that tools validated against social media data exhibit increased bias; and (3) a case study in attempting to reduce bias in training data where, with a relatively straightforward approach, we successfully reduce age bias by an order of magnitude
  • To investigate whether age-related bias might be present in sentiment analysis methods, and to understand how various characteristics of sentiment methods influence this form of bias, we study several lexicon-based and corpus-based tools, the type of data they were validated against, as well as word embedding models upon which many algorithmic tools are built
  • The opportune context in which we study age bias stems from research that examined a community of older adult bloggers to understand blogging as a form of online participation among older adults [10] and analyzed online blog-based discussions of age discrimination in the U.S. and U.K. [47]
  • In line with the results from phase one, which found significant differences in the sentiment of explicit age-related keywords, we found significant differences in the sentiment of implicitly coded age-related keywords generated through word embeddings (see the sketch after this list)
  • Sentences with implicitly “young” adjectives were 1.09 times more likely to be scored positive (p<0.01, 95% CI [1.075, 1.101])
  • We find significant age-related bias among a variety of tools and commonly-used word embeddings and successfully reduce bias in a custom-built classifier
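
The implicitly coded keywords mentioned above are derived from word embedding models. As an illustration only, here is a minimal sketch, assuming a locally downloaded GloVe file (see Table 3) and a hypothetical candidate adjective list, of checking whether adjectives sit closer to “old” or “young” in embedding space; the authors' exact procedure may differ:

```python
# Minimal sketch: check whether candidate adjectives are implicitly
# age-coded by comparing cosine similarity to "old" vs. "young" in GloVe.
import numpy as np

def load_glove(path):
    """Parse a GloVe .txt file into a {word: vector} dict."""
    vectors = {}
    with open(path, encoding="utf8") as f:
        for line in f:
            word, *values = line.rstrip().split(" ")
            vectors[word] = np.asarray(values, dtype=np.float32)
    return vectors

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vecs = load_glove("glove.6B.300d.txt")  # Wikipedia+Gigaword model from Table 3
for w in ["spry", "frail", "hip", "feeble"]:  # hypothetical candidates
    print(f"{w}: old={cosine(vecs[w], vecs['old']):.3f} "
          f"young={cosine(vecs[w], vecs['young']):.3f}")
```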
Methods
  • The authors test the sentiment tools for age-related bias by examining the sentiment output scores using multinomial log-linear regressions.
  • The authors build two types of multinomial log-linear regressions: 1) a single full model for each phase of analysis that includes the data from all of the sentiment analysis tools in order to test for the presence of age-related bias across the models (Table 2), and 2) individual models for each sentiment analysis tool (15 in total) in order to assess which specific tools demonstrate age-related bias (Table 4).
  • Exponentiated coefficient values greater than one indicate that the given sentiment category (positive or negative) is more likely and a neutral prediction less likely; exponentiated values less than one indicate that the sentiment category is less likely and a neutral prediction more likely
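
To make this regression setup concrete, here is a minimal sketch (not the authors' code; the file and column names are hypothetical) of fitting a multinomial log-linear regression with neutral sentiment as the reference category and exponentiating its coefficients into relative risks:

```python
# Minimal sketch of the analysis described above (hypothetical data layout):
# predict sentiment (neutral/negative/positive) from an age-adjective
# indicator, then read e^coef as relative risk against "neutral".
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("scored_sentences.csv")  # hypothetical: one row per scored sentence

# Predictor: 1 if the sentence used a "young" adjective, 0 if "old".
X = sm.add_constant((df["adjective_group"] == "young").astype(int))

# Outcome: "neutral" is coded 0, so it serves as the reference category.
y = pd.Categorical(df["sentiment"],
                   categories=["neutral", "negative", "positive"])

fit = sm.MNLogit(y.codes, X).fit()
print(np.exp(fit.params))  # e.g., ~1.66 in the positive equation would mean
                           # "young" sentences are 1.66x more likely positive
```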
Results
  • In this first stage of the analysis the authors aim to understand whether sentences featuring keywords related to older age (“old”, “older”, “oldest”) are, on average, scored more negatively than the same sentences with keywords related to youth (“young”, “younger”, “youngest”), and whether this difference varies with the particular type of model and form of validation data used by each sentiment analysis method (a sketch of this paired-sentence test follows this list).
  • The custom classifier trained on the Original dataset produced significant bias with respect to the terms “old” and “young” (p<.0027) where sentences containing the terms “old”, “older”, or “oldest” were more likely to be classified as negative.
  • This result is in line with those of the phase one aggregated analysis.
  • The outputs of this classifier were more negative compared to the custom classifier trained on the full Sentiment140 dataset, indicating the age-related tweets in the training data were more negative than the overall corpus
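
The paired-sentence test referenced in the first bullet can be illustrated with one of the tools from Table 1. A minimal sketch using the VADER method and a hypothetical template sentence (the study's actual sentence corpus differs):

```python
# Minimal sketch of the paired-sentence test: score the same sentence with
# an "old" vs. "young" keyword and compare polarity. Template is hypothetical.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
template = "I spent the afternoon chatting with my {} neighbor."

for adjective in ("old", "young"):
    scores = analyzer.polarity_scores(template.format(adjective))
    print(adjective, scores["compound"])  # only the age term differs
```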
Conclusion
  • This study demonstrates significant age-related bias across common sentiment analysis tools and word embedding models as well as one approach to diminishing bias in training data.
  • The findings have implications for how researchers interpret sentiment analysis results, the strategies the authors use to understand and mitigate bias, and the challenges of using these techniques to study online social movements.
  • In addition to understanding social bias in algorithms, the authors can use them as a lens to understand how unrecognized social bias operates at scale
Tables
  • Table1: The fifteen different sentiment analysis methods examined, and their corresponding type and validation data used when building the model. Validation data that is not social-media-based is predominantly based on movie or product reviews or news corpora
  • Table2: Regression results for explicit age analysis. The models include data from all sentiment analysis tools and are multinomial log-linear regressions, resulting in a model for positive sentiment and a model for negative sentiment. The reference categories are: neutral sentiment, “old” adjectives (i.e., “old” or “older”), lexicon-based approaches, and non-social-media validation data. Exponentiated coefficients (i.e., e^coef) provide relative risk (e.g., the sentiment analysis models were 1.66 times more likely to indicate positive sentiment when the adjective in a given sentence was changed from an “old” adjective to a “young” adjective). Note: *p<0.05; **p<0.01
  • Table3: Details on the 10 GloVe models. The first part of the name references the source, the second part of the name gives the number of tokens contained in the source (e.g., 6B = 6 billion), and the third part of the name gives the number of dimensions of the word vectors (e.g., 300D = 300-dimensional vectors for each word in the vocab). Further details at https://nlp.stanford.edu/projects/glove/
  • Table4: Individual regression results for explicit age analysis. The results from each sentiment analysis method were fit to a multinomial log-linear regression model, resulting in a model for positive sentiment and a model for negative sentiment for each sentiment analysis method. The reference categories for each model are: neutral sentiment and “old” adjectives. Coefficients that are not significant at p<0.05 are greyed out. Exponentiated coefficients (i.e., e^coef) provide effect sizes for relative risk (e.g., the EmoLex model was 3.18 times more likely to indicate positive sentiment when the adjective in a given sentence was changed from “old” (or “older” or “oldest”) to “young” (or “younger” or “youngest”), holding all else constant). Note: *p<0.05; **p<0.01
  • Table5: Individual regression results for the implicit age analysis. The results from each sentiment analysis method were fit to a multinomial log-linear regression. The reference categories for each model are: neutral sentiment and “control” adjectives. Exponentiated coefficients (i.e., e^coef) provide effect sizes for relative risk (e.g., per the top-right coefficient, the EmoLex model was 1.134 times more likely to indicate positive sentiment when the adjective in a given sentence was changed from a “control” adjective to an “older” adjective as determined by the word embeddings). Note: *p<0.05; **p<0.01
  • Table6: The increase in likelihood that a “young” sentence will be classified as “positive” compared to its “old” counterpart. Training the model on the full, original dataset, a “young” sentence was 13.26% more likely to be “positive” compared to its “old” counterpart. There were 169 “old” and “young” sentence pairs
  • Table7: T-test results for custom-trained classifiers. A likelihood above .50 produces a classification of “positive”
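
To make the custom-classifier experiments behind Tables 6 and 7 concrete, here is a minimal sketch, assuming a Sentiment140-style CSV with a hypothetical column layout, of training a simple bag-of-words classifier and comparing positive-class likelihoods on an “old”/“young” sentence pair; it illustrates the measurement, not the authors' exact model:

```python
# Minimal sketch: train a simple sentiment classifier, then compare the
# predicted positive likelihood for paired "old"/"young" sentences.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical layout: label column (0 = negative, 1 = positive) and text column.
train = pd.read_csv("sentiment140.csv", names=["label", "text"])

clf = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
clf.fit(train["text"], train["label"])

old_s = "My old friend called me today."     # hypothetical sentence pair
young_s = "My young friend called me today."
p_old, p_young = clf.predict_proba([old_s, young_s])[:, 1]
print(f"positive likelihood: old={p_old:.3f} young={p_young:.3f}")
```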
Related work
  • There is a growing interest in issues of social justice in HCI, as evidenced by new frameworks and agendas [2,3,23,39,47,48,63,65] that attempt to shift power balances between researchers, society, and marginalized groups. These frameworks tackle diverse domains but converge on several points. One of these points is that science, technology, and design are not neutral or value-free; rather, they perpetuate certain points of view or ways of thinking. Work in critical algorithm studies embraces this view, and some have described algorithms as “the new power brokers in society” [22]. In addition to these critiques, a number of studies have focused on understanding the underlying mechanisms that drive bias in algorithms.

    Critical Algorithm Studies
    Critical algorithm studies is an emerging area of research that spans computer science, sociology, science and technology studies, communication, legal studies, and other fields. Much work in critical algorithm studies examines algorithmic bias, which can be defined as “systems that systematically and unfairly discriminate against certain individuals or groups of individuals in favor of others” [27]. Prior work analyzes algorithmic bias in search engines [36,38,53], surveillance systems (e.g., Facial Recognition Systems) [37], and social media [20,55,71]. For example, Introna and Nissenbaum describe the ways that biased search engines diminish access to information as well as individuals’ abilities “to be seen, and heard” [36]. While a growing body of work calls attention to algorithmic bias as an instance of technology embodying social, ethical, and political values [56], others have focused on understanding the sources of bias and identifying ways to diminish it.
Funding
  • This work was supported in part by NSF grant IIS-1551574
References
  • Paul Baker and Amanda Potts. 2013. “Why do white people have thin lips?” Google and the perpetuation of stereotypes via auto-complete search forms. Critical Discourse Studies 10: 187–204. https://doi.org/10.1080/17405904.2012.744320
  • Shaowen Bardzell. 2010. Feminist HCI: Taking Stock and Outlining an Agenda for Design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10), 1301–1310.
  • Shaowen Bardzell and Jeffrey Bardzell. 2011. Towards a Feminist HCI Methodology: Social Science, Feminism, and HCI. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11), 675–684.
  • Eric P.S. Baumer, Xiaotong Xu, Christine Chu, Shion Guha, and Geri K. Gay. 2017. When Subjects Interpret the Data: Social Media Non-use as a Case for Adapting the Delphi Method to CSCW. Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW ’17): 1527–1543. https://doi.org/10.1145/2998181.2998182
  • Michael S. Bernstein, Eytan Bakshy, Moira Burke, and Brian Karrer. 2013. Quantifying the Invisible Audience in Social Networks. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13): 21–30.
  • Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Quantifying and Reducing Stereotypes in Word Embeddings. arXiv preprint.
  • Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In 30th Conference on Neural Information Processing Systems (NIPS 2016).
  • Danah Boyd, Karen Levy, and Alice Marwick. 2014. The Networked Nature of Algorithmic Discrimination. Data and Discrimination: Collected Essays. Open Technology Institute.
  • Robin Brewer, Meredith Ringel Morris, and Anne Marie Piper. 2016. “Why would anybody do this?”: Understanding Older Adults’ Motivations and Challenges in Crowd Work. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16): 2246–2257. https://doi.org/10.1145/2858036.2858198
  • Robin Brewer and Anne Marie Piper. 2016. “Tell It Like It Really Is”: A Case of Online Content Creation and Sharing Among Older Adult Bloggers. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), 5529–5542. https://doi.org/10.1145/2858036.2858379
  • Alexander Budanitsky and Graeme Hirst. 2006. Evaluating WordNet-based Measures of Lexical Semantic Relatedness. Computational Linguistics 32, 1.
  • Robert N. Butler. 1969. Age-ism: Another form of bigotry. Gerontologist 9, 4: 243–246. https://doi.org/10.1093/geront/9.4_Part_1.243
  • Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora necessarily contain human biases. Science 356: 183–186. https://doi.org/10.1126/science.aal4230
  • Aylin Caliskan-Islam, Joanna J. Bryson, and Arvind Narayanan. 2016. Semantics derived automatically from language corpora necessarily contain human biases. arXiv:1608.07187v2 [cs.AI], 30 Aug 2016: 1–14.
  • Le Chen, Alan Mislove, and Christo Wilson. 2015. Peeking Beneath the Hood of Uber. Proceedings of the 2015 Internet Measurement Conference (IMC ’15): 495–508.
  • Kate Crawford. 2016. Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics. Science, Technology, & Human Values 41, 1: 77–92. https://doi.org/10.1177/0162243915589635
  • Kimberle Crenshaw. 1991. Mapping the Margins: Intersectionality, Identity Politics, and Violence Against Women of Color. Stanford Law Review 43, 6: 1241–1299.
  • Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Enhanced sentiment learning using Twitter hashtags and smileys. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters (COLING ’10), 241–249.
  • Mark Davies. 2008. The Corpus of Contemporary American English (COCA): 520 million words, 1990–present. Brigham Young University.
  • Michael A. DeVito. 2016. From Editors to Algorithms. Digital Journalism: 1–21. https://doi.org/10.1080/21670811.2016.1178592
  • Michael DeVito, Darren Gergle, and Jeremy Birnholtz. 2017. “Algorithms ruin everything”: #RIPTwitter, Folk Theories, and Resistance to Algorithmic Change in Social Media. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’17). https://doi.org/10.1145/3025453.3025659
  • Nicholas Diakopoulos. 2014. Algorithmic accountability reporting: On the investigation of black boxes. Tow Center for Digital Journalism: A Tow/Knight Brief.
  • Lynn Dombrowski, Ellie Harmon, and Sarah Fox. 2016. Social Justice-Oriented Interaction Design: Outlining Key Design Strategies and Commitments. Proceedings of the Designing Interactive Systems Conference (DIS ’16): 656–671.
  • Motahhare Eslami, Karrie Karahalios, Christian Sandvig, Kristen Vaccaro, Aimee Rickman, Kevin Hamilton, and Alex Kirlik. 2016. First I “like” it, then I hide it: Folk Theories of Social Feeds. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), 2371–2382.
  • Motahhare Eslami, Aimee Rickman, Kristen Vaccaro, Amirhossein Aleyasen, Andy Vuong, Karrie Karahalios, Kevin Hamilton, and Christian Sandvig. 2015. “I always assumed that I wasn’t really that close to [her]”: Reasoning about Invisible Algorithms in News Feeds. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15): 153–162.
  • Ronen Feldman. 2013. Techniques and applications for sentiment analysis. Communications of the ACM 56, 4: 82–89.
  • Batya Friedman and Helen Nissenbaum. 1996. Bias in computer systems. ACM Transactions on Information Systems 14, 3: 330–347. https://doi.org/10.1145/249170.249184
  • Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. CS224N Project Report, Stanford 1, 12.
  • Anthony G. Greenwald, Debbie E. McGhee, and Jordan L. K. Schwartz. 1998. Measuring Individual Differences in Implicit Cognition: The Implicit Association Test. Journal of Personality and Social Psychology 74, 6: 1464–1480. https://doi.org/10.1037/0022-3514.74.6.1464
  • Philip J Guo. 2017. Older Adults Learning Computer Programming: Motivations, Frustrations, and Design Opportunities. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’17).
  • Dave Harley and Geraldine Fitzpatrick. 2009. YouTube and intergenerational communication: the case of Geriatric1927. Universal Access in the Information Society 8, 1: 5–20. https://doi.org/10.1007/s10209-008-0127-y
  • Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the 2004 ACM SIGKDD international conference on Knowledge discovery and data mining (KDD ’04), 168–177. https://doi.org/10.1145/1014052.1014073
  • Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1 (ACL ’12).
  • Mary Lee Hummert, Teri A. Garstka, Laurie T. O’Brien, Anthony G. Greenwald, and Deborah S. Mellott. 2002. Using the Implicit Association Test to measure age differences in implicit social cognitions. Psychology and Aging 17, 3: 482–495. https://doi.org/10.1037//0882-7974.17.3.482
  • C.J. Hutto and Eric Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In Eighth International AAAI Conference on Weblogs and Social Media, 216–225.
  • Lucas D. Introna and Helen Nissenbaum. 2000. Shaping the Web: why the politics of search engines matters. The Information Society 16: 169–185. https://doi.org/10.1080/01972240050133634
  • Lucas D Introna and David Wood. 2004. Picturing Algorithmic Surveillance: The Politics of Facial Recognition Systems. Surveillance & Society: CCTV Special Issue 2, 2/3.
  • Lucas Introna and Helen Nissenbaum. 2000. Defining the Web: The Politics of Search Engines. Computer 33, 54–62.
  • Lilly Irani, Janet Vertesi, and Paul Dourish. 2010. Postcolonial computing: a lens on design and development. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10): 1311–1320. https://doi.org/10.1145/1753326.1753522
  • Isaac Johnson, Connor McMahon, Johannes Schöning, and Brent Hecht. 2017. The Effect of Population and “Structural” Biases on Social Media-based Algorithms – A Case Study in Geolocation Inference Across the Urban-Rural Spectrum. In Proceedings of the 35th Annual ACM Conference on Human Factors in Computing Systems (CHI ’17).
  • Matthew Kay, Cynthia Matuszek, and Sean A. Munson. 2015. Unequal Representation and Gender Stereotypes in Image Search Results for Occupations. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15), 3819–3828. https://doi.org/10.1145/2702123.2702520
  • Rob Kitchin. 2017. Thinking critically about and researching algorithms. Information, Communication & Society 20, 1. https://doi.org/10.1080/1369118X.2016.1154087
  • Juhi Kulshrestha, Motahhare Eslami, Johnnatan Messias, Muhammad Bilal Zafar, Saptarshi Ghosh, Krishna P. Gummadi, and Karrie Karahalios. 2017. Quantifying Search Bias: Investigating Sources of Bias for Political Searches in Social Media. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW ’17).
  • Nathan R. Kuncel, Deniz S. Ones, and David M. Klieger. 2014. In Hiring, Algorithms Beat Instinct. Harvard Business Review May.
  • Joanna N. Lahey. 2010. International Comparison of Age Discrimination Laws. Research on Aging 32, 6: 679–697.
  • K.P. Lasher and P.J. Faulkender. 1993. Measurement of Aging Anxiety: Development of the Anxiety About Aging Scale. The International Journal of Aging and Human Development 37, 4: 247–259. https://doi.org/10.2190/1U69-9AU2-V6LH-9Y1L
  • Amanda Lazar, Mark Diaz, Robin Brewer, Chelsea Kim, and Anne Marie Piper. 2017. Going Gray, Failure to Hire, and the Ick Factor: Analyzing How Older Bloggers Talk about Ageism. In Proceedings of the ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW ’17).
  • Amanda Lazar, Caroline Edasis, and Anne Marie Piper. 2017. A Critical Lens on Dementia and Design in HCI. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI), In press.
  • Becca Levy. 2009. Stereotype Embodiment: A Psychosocial Approach to Aging. Current Directions in Psychological Science 18, 6: 332–336.
  • Q. Vera Liao, Wai-Tat Fu, and Markus Strohmaier. 2016. #Snowden: Understanding Biases Introduced by Behavioral Differences of Opinion Groups on Social Media. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), 3352–3363.
  • Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1 (HLT ’11).
  • Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems (NIPS ’13), 3111–3119.
  • Boaz Miller and Isaac Record. 2016. Responsible epistemic technologies: A social-epistemological analysis of autocompleted web search. new media & society: 1–19. https://doi.org/10.1177/1461444816644805
  • Claire Cain Miller. 2015. Can an Algorithm Hire Better Than a Human? The New York Times.
  • Karine Nahon. 2015. Where there is Social Media there is Politics. In Forthcoming in Routledge Companion to Social Media and Politics, A. Bruns, E. Skogerbo, C. Christensen, O.A. Larsson and G.S. Enli (eds.). Routledge, NYC, NY.
  • Helen Nissenbaum. 2001. How Computer Systems Embody Values. Computer 34, 3: 118–119.
  • Alana Officer, Mira Leonie Schneiders, Diane Wu, Paul Nash, Jotheeswaran Amuthavalli Thiyagarajan, and John R. Beard. 2016. Valuing older people: Time for a global campaign to combat ageism. Bulletin of the World Health Organization 94, 709–784. https://doi.org/10.2471/BLT.16.184960
  • Bo Pang and Lillian Lee. 2006. Opinion Mining and Sentiment Analysis. Foundations and Trends® in Information Retrieval 1, 2: 91–231. https://doi.org/10.1561/1500000001
  • Frank Pasquale. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
  • Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global Vectors for Word Representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing: 1532–1543. https://doi.org/10.3115/v1/D14-1162
  • Pew Research Center. 2014. Older Adults and Technology Use. April 2014.
  • Filipe N. Ribeiro, Matheus Araújo, Pollyanna Gonçalves, Marcos André Gonçalves, and Fabrício Benevenuto. 2016. SentiBench - a benchmark comparison of state-of-the-practice sentiment analysis methods. EPJ Data Science 5, 1: 1–29. https://doi.org/10.1140/epjds/s13688-016-0085-1
  • Jennifer A. Rode. 2011. A theoretical agenda for feminist HCI. Interacting with Computers 23, 5: 393–400. https://doi.org/10.1016/j.intcom.2011.04.005
  • Shilad Sen, Margaret E Giesel, Rebecca Gold, Benjamin Hillmann, Matt Lesicko, Samuel Naden, Jesse Russell, Zixiao “Ken” Wang, and Brent Hecht. 2015. Turkers, Scholars, “Arafat” and “Peace”: Cultural Communities and Algorithmic Gold Standards. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW ’15), 826–838.
  • Thomas Smyth and Jill Dimond. 2014. Anti-Oppressive Design. interactions 21, 6: 68–71.
  • Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 1631–1642.
  • Kate Starbird and Leysia Palen. 2012. (How) will the revolution be retweeted?: information diffusion and the 2011 Egyptian uprising. Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work (CSCW ’12): 7–16. https://doi.org/10.1145/2145204.2145212
  • Latanya Sweeney. 2013. Discrimination in online ad delivery. ACM Queue 11, 3. https://doi.org/10.1145/2460276.2460278
  • Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexicon-Based Methods for Sentiment Analysis. Computational Linguistics 37, 2: 267–307. https://doi.org/10.1162/COLI_a_00049
  • Antonio Torralba and Alexei A. Efros. 2011. Unbiased look at dataset bias. In 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  • Zeynep Tufekci. 2014. Algorithmic Harms Beyond Facebook and Google: Emergent Challenges of Computational Agency. Journal on Telecommunications and High Technology Law 13: 203–218.
  • Marlon Twyman, Brian C. Keegan, and Aaron Shaw. 2017. Black Lives Matter in Wikipedia: Collaboration and Collective Memory around Online Social Movements. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW ’17), 1400–1412. https://doi.org/10.1145/2998181.2998232
  • W. N. Venables and B. D. Ripley. 2002. Modern Applied Statistics with S. Springer, New York.
  • John Vines, Gary Pritchard, Peter Wright, Patrick Olivier, and Katie Brittain. 2015. An Age-Old Problem: Examining the Discourses of Ageing in HCI and Strategies for Future Research. ACM Transactions on Computer-Human Interaction 22, 1.
  • Claudia Wagner, Eduardo Graells-Garrido, David Garcia, and Filippo Menczer. 2016. Women through the glass ceiling: gender asymmetries in Wikipedia. EPJ Data Science 5, 1. https://doi.org/10.1140/epjds/s13688-016-0066-4
  • Theresa Wilson, Paul Hoffmann, Swapna Somasundaran, Jason Kessler, Janyce Wiebe, Yejin Choi, Claire Cardie, Ellen Riloff, and Siddharth Patwardhan. 2005. OpinionFinder: A system for subjectivity analysis. In Proceedings of HLT/EMNLP 2005 Interactive Demonstrations, 34–35.
Best Paper
  • Best Paper of CHI, 2018