Concepts glossary

  • autonomy

    Autonomy is an “intrinsically human term”: it is precisely due to the use of this concept that “we attribute [machines] with human-like behaviour that they are not likely to possess in the near future” (Johansen 2018, 95, 90).

  • black box

    “The conceptual metaphor ‘black box’ was originally an engineering term for any component designed to take in certain kinds of inputs (energy, signals, information, etc.) and convert them into specified outputs (e.g., a radio, a voltage transformer, a codec for converting digital into analog audio/video): the details inside the components can just stay ‘hidden’ (‘black-boxed’,…

  • electronic brain metaphor

    “The ‘electronic brain’ has proven to be a product of 20th century mythology. But, myths die hard. The appalling ignorance of computer functions evidenced by the editors of the daily press, combined with the affinity for science fiction headlines, have been chief factors in keeping a confused image of the electronic computer in the public…

  • Frankenstein Complex

    “Frankenstein complex” is Isaac Asimov’s concept for describing “the fear that the general public has towards human-made technologies when they invade the realm commonly considered to be God’s domain” (McCauley 2007, 42).

  • information processing

    “The first methods for representing human activities on computers were derived from the work-rationalization methods that industrial engineers had been developing since the 1910s. Hence the phrase ‘information processing’: the idea was that computers automate a kind of factory work whose raw materials happens to be information rather than anything physical or tangible.” (Philip…

  • Large Technical System

    AI can be conceived of as a Large Technical System (LTS): a system artefact, a system technology, an infrastructural technology (Vannuccini & Prytkova 2020). “The study of AI as LTS uncovers active mechanisms of control and coordination, helps to identify types of actors and their incentives and to detect critical nodes of the system.” (Vannuccini and Prytkova, 2020,…

  • Long Island overpasses

    The story of the Long Island overpasses is one of the most frequently cited examples of the societal impact of technology. As recounted by Langdon Winner in his 1980 article Do artifacts have politics?, New York urban planner Robert Moses planned low overpasses on the parkways to Long Island so that the roads were inaccessible by any…

  • Robots, public perception of

    “[m]ost people’s conception of what a robot is appears to be largely based on the way robots are depicted in fiction”, and “robots in fiction are largely presented as independent, autonomous actors that have a ‘mind of their own’, with a humanoid or anthropomorphic appearance” (Harbers, Peeters, and Neerincx 2017, 20).

  • social loafing

    “Contrary to conventional wisdom, having a human in the loop in decision-making tasks does not appear to alleviate automation bias. Instead, human-machine collaboration in monitoring and sharing responsibility for decision-making can lead to similar psychological effects that occur when humans share responsibilities with other humans, whereby ‘social loafing’ arises – the tendency of humans to…

  • sociotechnical blindness

    “Absence of discussion of the role played by programmers and other human actors in creating AI is another problem in current AI discourse that leads to misunderstanding and fear. What we call sociotechnical blindness, i.e. blindness to all of the human actors involved and all of the decisions necessary to make AI systems, allows AI…