References

Agre, Philip E. 1998. “Introduction.” In Technology and Privacy: The New Landscape, edited by Philip E. Agre and Marc Rotenberg, 1–28. Cambridge, Mass: MIT Press.
Agre, Philip E. 1998. “Beyond the Mirror World: Privacy and the Representational Practices of Computing.” In Technology and Privacy: The New Landscape, edited by Philip E. Agre and Marc Rotenberg, 29–61. Cambridge, Mass: MIT Press.
Agre, Philip E. 1997. “Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI.” In Social Science, Technical Systems, and Cooperative Work: Beyond the Great Divide, 131–57. New York, London: Psychology Press.
Agre, Philip E., and Marc Rotenberg, eds. 1998. Technology and Privacy: The New Landscape. Cambridge, Mass: MIT Press.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. Virtual Event, Canada: ACM. https://doi.org/10.1145/3442188.3445922.
Bijker, Wiebe E. 1995. Of Bicycles, Bakelites, and Bulbs: Toward a Theory of Sociotechnical Change. Inside Technology. Cambridge, Mass: MIT Press.
Bijker, Wiebe E., Thomas Parke Hughes, and Trevor J. Pinch, eds. 1987. The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge, MA; London: MIT Press.
Boden, Margaret A., and Ernest A. Edmonds. 2009. “What Is Generative Art?” Digital Creativity 20 (1–2): 21–46. https://doi.org/10.1080/14626260902867915.
Born, Rainer, ed. 2018. Artificial Intelligence: The Case Against. New York: Routledge.
Born, Rainer, ed. 1987. Artificial Intelligence: The Case Against. New York: St. Martin’s Press.
Born, Rainer, and Ilse Born-Lechleitner. 1987. “Introduction.” In Artificial Intelligence: The Case Against, edited by Rainer Born, i–xxxv. New York: St. Martin’s Press.
Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” arXiv:2005.14165 [cs], July. http://arxiv.org/abs/2005.14165.
Bryson, Joanna J. 2010. “Robots Should Be Slaves.” In Natural Language Processing, edited by Yorick Wilks, 8:63–74. Amsterdam: John Benjamins Publishing Company. https://doi.org/10.1075/nlp.8.11bry.
Bryson, Joanna J., and Philip P. Kime. 2011. “Just an Artifact: Why Machines Are Perceived as Moral Agents.” In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence – Volume Two, 1641–46. IJCAI’11. Barcelona, Catalonia, Spain: AAAI Press. https://doi.org/10.5555/2283516.2283669.
Bur, Tatiana. 2016. “Mechanical Miracles: Automata in Ancient Greek Religion.” Master’s Thesis, University of Sydney. https://ses.library.usyd.edu.au/handle/2123/15398.
Cave, Stephen, and Kanta Dihal. 2020. “The Whiteness of AI.” Philosophy & Technology 33 (4): 685–703. https://doi.org/10.1007/s13347-020-00415-6.
Cave, Stephen, Claire Craig, Kanta Sarasvati Dihal, Sarah Dillon, Jessica Montgomery, Beth Singler, and Lindsay Taylor. 2018. Portrayals and Perceptions of AI and Why They Matter. London: The Royal Society. https://royalsociety.org/-/media/policy/projects/ai-narratives/AI-narratives-workshop-findings.pdf.
Cave, Stephen, Kanta Sarasvati Monique Dihal, and Sarah Dillon, eds. 2020. AI Narratives: A History of Imaginative Thinking about Intelligent Machines. Oxford: Oxford University Press.
Chandler, Daniel. 2017. Semiotics: The Basics. Abingdon, Oxon; New York, NY: Routledge.
Chesterman, Simon. 2021. We, the Robots? Regulating Artificial Intelligence and the Limits of the Law. Cambridge, United Kingdom; New York, NY, USA: Cambridge University Press.
Cobley, Paul, ed. 2010. The Routledge Companion to Semiotics. Routledge Companions. London; New York: Routledge.
Dale, Robert. 2021. “GPT-3: What’s It Good for?” Natural Language Engineering 27 (1): 113–18. https://doi.org/10.1017/S1351324920000601.
Davison, Joe. 2018. “No, Machine Learning Is Not Just Glorified Statistics.” Medium (blog). June 27, 2018. https://towardsdatascience.com/no-machine-learning-is-not-just-glorified-statistics-26d3952234e3.
Visser, Ewart J. de, Samuel S. Monfort, Ryan McKendrick, Melissa A. B. Smith, Patrick E. McKnight, Frank Krueger, and Raja Parasuraman. 2016. “Almost Human: Anthropomorphism Increases Trust Resilience in Cognitive Agents.” Journal of Experimental Psychology: Applied 22 (3): 331–49. https://doi.org/10.1037/xap0000092.
Waal, Frans B. M. de. 1999. “Anthropomorphism and Anthropodenial: Consistency in Our Thinking about Humans and Other Animals.” Philosophical Topics 27 (1): 255–80. https://doi.org/10.5840/philtopics199927122.
DoD. 2012. “Autonomy in Weapon Systems. Department of Defense Directive Number 3000.09.” United States Department of Defense. https://www.hsdl.org/?abstract&did=726163.
Elkins, Katherine, and Jon Chun. 2020. “Can GPT-3 Pass a Writer’s Turing Test?” Journal of Cultural Analytics 2371:1–16.
Fragaki, Hélène. 2012. “Automates et statues merveilleuses dans l’Alexandrie antique.” Journal des Savants 1 (1): 29–67. https://doi.org/10.3406/jds.2012.6293.
Gardner, Nikolas. 2021. “Clausewitzian Friction and Autonomous Weapon Systems.” Comparative Strategy 40 (1): 86–98. https://doi.org/10.1080/01495933.2021.1853442.
Gell, Alfred. 1992. “The Technology of Enchantment and the Enchantment of Technology.” In Anthropology, Art, and Aesthetics, edited by Jeremy Coote and Anthony Shelton, 40–63. Oxford: Oxford University Press.
Geraci, Robert M. 2008. “Apocalyptic AI: Religion and the Promise of Artificial Intelligence.” Journal of the American Academy of Religion 76 (1): 138–66. https://doi.org/10.1093/jaarel/lfm101.
Glickman, Moshe, and Tali Sharot. 2022. “Biased AI Systems Produce Biased Humans.” OSF Preprints. https://doi.org/10.31219/osf.io/c4e7r.
Goddard, Kate, Abdul Roudsari, and Jeremy C Wyatt. 2012. “Automation Bias: A Systematic Review of Frequency, Effect Mediators, and Mitigators.” Journal of the American Medical Informatics Association 19 (1): 121–27. https://doi.org/10.1136/amiajnl-2011-000089.
Goodman, C. P. 2003. “The Tacit Dimension.” Polanyiana 2 (1): 133–57.
GPT-3. 2020. “A Robot Wrote This Entire Article. Are You Scared Yet, Human?” The Guardian, September 8, 2020. http://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3.
Gray, Chris Hables. 1997. “Artificial Intelligence at War: An Analysis of the Aegis System in Combat.” In Reinventing Technology, Rediscovering Community: Critical Explorations of Computing as a Social Practice, edited by Philip E. Agre and Douglas Schuler, 127–42. London: Ablex Publishing Corporation.
Greene, Tristan. 2022. “DeepMind’s New Gato AI Makes Me Fear Humans Will Never Achieve AGI.” TNW | Neural. May 13, 2022. https://thenextweb.com/news/deepminds-astounding-new-gato-ai-makes-fear-humans-will-never-achieve-agi.
Harbers, Maaike, Marieke M. M. Peeters, and Mark A. Neerincx. 2017. “Perceived Autonomy of Robots: Effects of Appearance and Context.” In A World with Robots: International Conference on Robot Ethics: ICRE 2015, edited by Maria Isabel Aldinhas Ferreira, Joao Silva Sequeira, Mohammad Osman Tokhi, Endre E. Kadar, and Gurvinder Singh Virk, 19–33. Intelligent Systems, Control and Automation: Science and Engineering. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-46667-5_2.
Horowitz, Michael C., Lauren Kahn, Julia Macdonald, and Jacquelyn Schneider. 2022. “COVID-19 and Public Support for Autonomous Technologies—Did the Pandemic Catalyze a World of Robots?” PLOS ONE 17 (9): e0273941. https://doi.org/10.1371/journal.pone.0273941.
Irvine, Martin. 2022. “Semiotics in Computing and Information Systems.” In Bloomsbury Semiotics. Volume 2: Semiotics in the Natural and Technical Sciences, edited by Jamin Pelkey and Stéphanie Walsh Matthews, 2:203–37. London: Bloomsbury Academic.
Johansen, Sigrid Redse. 2018. “So Man Created Robot in His Own Image: The Anthropomorphism of Autonomous Weapon Systems and the Law of Armed Conflict.” Oslo Law Review 5 (2): 89–102. https://doi.org/10.18261/issn.2387-3299-2018-02-03.
Johnson, James. 2022. “Delegating Strategic Decision-Making to Machines: Dr. Strangelove Redux?” Journal of Strategic Studies 45 (3): 439–77. https://doi.org/10.1080/01402390.2020.1759038.
Johnson, Deborah G., and Mario Verdicchio. 2017. “Reframing AI Discourse.” Minds and Machines 27 (4): 575–90. https://doi.org/10.1007/s11023-017-9417-6.
Julyk, David P. 2008. “‘The Trouble With Machines Is People.’ The Computer as Icon in Post-War America: 1946–1970.” Doctoral thesis, University of Michigan.
Kang, Minsoo. 2011. Sublime Dreams of Living Machines: The Automaton in the European Imagination. Cambridge, Mass: Harvard University Press.
Katz, Yarden. 2020. Artificial Whiteness: Politics and Ideology in Artificial Intelligence. New York: Columbia University Press.
Kennedy, John S. 1992. The New Anthropomorphism. Cambridge [England]; New York: Cambridge University Press.
Kline, Ronald. 2010. “Cybernetics, Automata Studies, and the Dartmouth Conference on Artificial Intelligence.” IEEE Annals of the History of Computing 33 (4): 5–16. https://doi.org/10.1109/MAHC.2010.44.
Korngiebel, Diane M., and Sean D. Mooney. 2021. “Considering the Possibilities and Pitfalls of Generative Pre-Trained Transformer 3 (GPT-3) in Healthcare Delivery.” Npj Digital Medicine 4 (1): 1–3. https://doi.org/10.1038/s41746-021-00464-x.
Krishnan, Armin. 2016. Killer Robots: Legality and Ethicality of Autonomous Weapons. London, England; New York, New York: Routledge.