2020
Biopolitics in "Future and the Arts: AI, Robotics, Cities, Life - How Humanity Will Live Tomorrow", Mori Art Museum, Tokyo, 2019-2020. Post-Internet - After the Internet. Michel Foucault's Biopolitics
The meaning of AI has undergone drastic changes during the last 60 years of AI discourse(s). What we talk about when saying "AI" is not what the term meant in 1956, when John McCarthy, Marvin Minsky and their colleagues started using it. Take game design as an example: when the Unreal game engine introduced "AI" in 1999, its developers were mainly talking about pathfinding. For Epic MegaGames, the producers of Unreal, an AI was just a bot or monster whose pathfinding capabilities had been programmed in a few lines of code to escape an enemy. This is not "intelligence" in the Minskyan understanding of the word (and even less what Alan Turing had in mind when he designed the Turing test). There are also attempts to differentiate between AI, classical AI and "Computational Intelligence" (Al-Jobouri 2017). The latter is labelled CI and is used to describe processes such as player affective modelling, co-evolution, automatically generated procedural environments, etc. Artificial intelligence research has commonly been conceptualised as an attempt to reduce the complexity of human thinking (cf. Varela 1988: 359-75). The idea was to map the human brain onto a machine for symbol manipulation: the computer (Minsky 1952; Simon 1996; Hayles 1999). Already in the early days of what we now call "AI research", McCulloch and Pitts commented on human intelligence and proposed in 1943 that the networking of neurons could be used for pattern recognition purposes (McCulloch/Pitts 1943). Trying to implement cerebral processes on digital computers was the method of choice for the pioneers of artificial intelligence research.
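The McCulloch-Pitts proposal mentioned above can be illustrated with a minimal sketch (an illustrative reconstruction, not code from the 1943 paper): a neuron fires when the weighted sum of its binary inputs reaches a threshold, and identical units with different thresholds yield different logic gates, from which pattern-recognising networks can be composed.

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) iff the weighted sum of
    binary inputs reaches the threshold, else stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Composing logic gates from identical units; only the threshold differs:
def and_gate(a, b):
    return mcp_neuron([a, b], [1, 1], threshold=2)  # both inputs must fire

def or_gate(a, b):
    return mcp_neuron([a, b], [1, 1], threshold=1)  # one active input suffices
```

Networks of such units, McCulloch and Pitts argued, can compute any logical function, which is why they could be proposed as a substrate for pattern recognition.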
The "New AI" is no longer concerned with the need to remain compatible with the biological nature of human intelligence: "Old AI crucially depended on the functionalist assumption that intelligent systems, brains or computers, carry out some Turing-equivalent serial symbol processing, and that the symbols processed are a representation of the field of action of that system." (Pickering 1993, 126) The ecological approach of the New AI has its greatest impact by showing how it is possible "to learn to recognize objects and events without having any formal representation of them stored within the system." (ibid., 127) The New Artificial Intelligence movement has abandoned the cognitivist perspective and instead relies on the premise that intelligent behaviour should be analysed using synthetically produced equipment and control architectures (cf. Munakata 2008). Kate Crawford (Microsoft Research) has recently warned against the impact that current AI research might have, in a noteworthy lecture titled "AI and the Rise of Fascism". Crawford analysed the risks and potential of AI research and called for a critical approach with regard to new forms of data-driven governmentality: "Just as we are reaching a crucial inflection point in the deployment of AI into everyday life, we are seeing the rise of white nationalism and right-wing authoritarianism in Europe, the US and beyond. How do we protect our communities – and particularly already vulnerable and marginalized groups – from the potential uses of these systems for surveillance, harassment, detainment or deportation?" (Crawford 2017) Following Crawford's critical assessment, this issue of the Digital Culture & Society journal deals with the impact of AI in knowledge areas such as computational technology, the social sciences, philosophy, game studies and the humanities in general.
Subdisciplines of traditional computer science, in particular Artificial Intelligence, Neuroinformatics, Evolutionary Computation, Robotics and Computer Vision, are once more gaining attention. Biological information processing is firmly embedded in commercial applications such as the intelligent personal assistant Google Assistant, Facebook's facial recognition algorithm DeepFace, Amazon's device Alexa or Apple's software feature Siri (a speech interpretation and recognition interface), to mention just a few. In 2016 Google, Facebook, Amazon, IBM and Microsoft founded what they call a Partnership on AI (Hern 2016). This indicates a move from academic research institutions to company research clusters. In this context, we are interested in receiving contributions on aspects of the history of institutional and private research in AI. We would like to invite articles that observe the history of the notion of "artificial intelligence" and articles that point out how specific academic and commercial fields (e.g. game design, the aviation industry, the transport industry, etc.) interpret and use the notion of AI. Against this background, the special issue Rethinking AI will explore and reflect on the hype around neuroinformatics in AI discourses and the potential and limits of critique in the age of computational intelligence (Johnston 2008; Hayles 2014, 199-210). We invite contributions that deal with the history, theory and aesthetics of contemporary neuroscience and recent trends in artificial intelligence (cf. Halpern 2014, 62ff). Digital societies increasingly depend on smart learning environments that are technologically inscribed. We ask about the role and value of open processes in learning environments, and we welcome contributions that acknowledge the regime of production promoted by recent developments in AI.
We particularly welcome contributions that are historical and comparative, or critically reflective about the biological impact on social processes, individual behaviour and technical infrastructure in a post-digital and post-human environment. What are the social, cultural and ethical issues when artificial neural networks take hold in digital cultures? What is the impact on digital culture and society when multi-agent systems are equipped with a license to act?
Human + Machine (Technology) = Cybernetic organism. Human + Planet's Non-Human = Bio-citizen. Human + Planet's Non-Human + Machine (Technology) = Cybernetic Bio-Citizen. Current capitalist society has a restricted worldview: it is stuck within the bounds of what has already been established. Capitalists exponentially reproduce capital by reinvesting in real estate to keep the cash flow alive, which has produced cities whose designs are commonplace worldwide. This process is driving us, blindfolded, into the Anthropocene, a period characterised by anthropocentric dominance. But the world is not limited to human existence. What is needed today is a thorough reconsideration of our actions and a commitment to inclusive urbanism. This design report deals with creating an inclusive urban model through a revision of the human subject. A speculative 'Post-Natural' urban model of RC-16, Bryo-polis, acts as an alternative, experimental urban model of the existing city of London. It weighs humans and non-humans equally through technological intervention in order to establish a distributed cognition among the participants, thus revising the thought (thinking) and action (being) of humans by blurring the boundaries between artificial intelligence, bio-intelligence and human intelligence. The resultant speciation in humans could open unforeseen vistas for approaching current global issues such as climate change, global warming, species extinction and more.
International Journal of Cultural Policy
AI, a Wicked Problem for Cultural Policy? Pre-empting Controversy and the Crisis of Cultural Participation (2022)
This article explores the practice of pre-empting controversy as an example of the wicked problem of cultural participation in digital media. Drawing on science and technology studies (STS), research into the history of cybernetics and artificial intelligence (AI), and policy studies, it argues that the ongoing digital transformation and the expansion of the algorithmic public sphere do not solve but amplify the problem of cultural participation, challenging the "participatory turn" in cultural policy, defined as cultural policy's reorientation towards encouraging the participation of different stakeholders at different stages of policymaking. This process is analysed through two cases: the postponement of a retrospective exhibition of the painter Philip Guston in the United States and the pre-emptive ban of a public art project centred on a monument to the Soviet Lithuanian writer Petras Cvirka in Lithuania.
Cugurullo, F., Caprotti, F., Cook, M., Karvonen, A., McGuirk, P., & Marvin, S. (Eds.). (2023). Artificial Intelligence and the City: Urbanistic Perspectives on AI. Routledge
Artificial Intelligence and the City: Urbanistic Perspectives on AI
This book explores in theory and practice how artificial intelligence (AI) intersects with and alters the city. Drawing upon a range of urban disciplines and case studies, the chapters reveal the multitude of repercussions that AI is having on urban society, urban infrastructure, urban governance, urban planning and urban sustainability. Contributors also examine how the city, far from being a passive recipient of new technologies, is influencing and reframing AI through subtle processes of co-constitution. The book advances three main contributions and arguments: First, it provides empirical evidence of the emergence of a post-smart trajectory for cities in which new material and decision-making capabilities are being assembled through multiple AIs. Second, it stresses the importance of understanding the mutually constitutive relations between the new experiences enabled by AI technology and the urban context. Third, it engages with the concepts required to clarify the opaque relations that exist between AI and the city, as well as how to make sense of these relations from a theoretical perspective. Artificial Intelligence and the City offers a state-of-the-art analysis and review of AI urbanism, from its roots to its global emergence. It cuts across several disciplines and will be a useful resource for undergraduates and postgraduates in the fields of urban studies, urban planning, geography, architecture, urban design, science and technology studies, sociology and politics.
Springer Series in Design and Innovation
TRACES - In 2030, Artificial Intelligences Will Visit Museums?
Within the SISCODE project, the science and society association TRACES, based in Paris, addresses the issue of making algorithms and artificial intelligence intelligible to their users. The project intends to raise awareness of algorithmic decision-making in citizens' daily lives through co-creation activities involving research, education, civic rights organisations and policymaking. Within general cultural activities, in a provocative art-science approach, the issue has been addressed through an inversion of perspective: analysing people's relationship with AI by considering AIs as the target group of cultural productions.
China Perspectives
Making the Future with the Nonhuman (2023)
This essay examines two interconnected human-made nonhuman entities stemming from Shenzhen, China's first special economic zone, that have become dominant figures in mapping the city's future (and, by extension, China's): the robot and the drone. I bring an interdisciplinary, cultural studies approach to the multiple meaning-making practices that engage with these two objects; both participate in enacting the vision for the Guangdong-Hong Kong-Macao Greater Bay Area as an extension of the success of Shenzhen. These practices simultaneously normalise aspirations for a future fuelled by the power of nonhuman technological agents while offering glimpses into the uneven power relations between different humans that underpin such future-making. At the same time, they also point to the emergent possibilities of meaning-making that conjoin the human and the nonhuman.
Intelligence Everywhere: What artistic explorations can tell us through and about technological development. Presented on Sept 18, 2019 during the Humanities and Public Life Conference at Dawson College, Montréal, Canada. Recent developments in machine learning, and in what John McCarthy named artificial intelligence in 1956, have repeatedly been portrayed in the media as competing with human creativity. Binary narratives that (narcissistically) anthropomorphize technological advancements and present them as either miraculous or antagonistic spread fear and fascination amongst the public. Machines, some threaten, will take your job as an artist, a lawyer, a taxi driver, a doctor, an accountant, and govern us … In this presentation I wish to draw a historical lineage between the ideas that were at the roots of the British branch of cybernetics, comparing and contrasting the worldview that underlay it with the approach taken by the founders of the Artificial Intelligence project in 1956. I wish to establish the link between the cybernetic worldview and the recent developments in machine learning that we commonly refer to as Artificial Intelligence (AI). These powerful discoveries are currently used to generate images, natural language, soundscapes and videos that can be mistaken for having been produced by people. This has pushed some to declare that the machines are themselves creative. I will argue that while these tools do display what N. Katherine Hayles calls non-conscious cognition, a process that is found everywhere in nature, creativity, in the realm of art, is a concept rooted in the self-reflexive sense-making ability of the person orchestrating it, as well as in the social, cultural and political context in which it is being examined.
Presenting creativity from the point of view of the art world, I will argue that the definition of art does not lie solely in the formal aesthetics of the object produced but is a shifting, culturally constructed concept that is by no means negated by machine "imagination" or "creativity". The notion of authorship in relation to automation in the creative process has been explored thoroughly in the realm of art ever since, for example, Marcel Duchamp presented his readymades, Walter Benjamin published his famous text in 1936 and Roland Barthes examined aspects of the topic in 1967. Early cybernetic prototypes that displayed cognitive behaviours, as well as artworks that use automation in their creative process, will be presented, along with a selection of recent art practices that explore and comment on the use of statistical models, or what Hunger calls "enhanced pattern recognition" systems, such as artificial neural networks and adversarial neural networks (Hunger 2017). These artworks often present advanced technical tools as one component of a network (Latour)/agencement (Deleuze) in which humans interact with them in complex and intricate ways. Through the examination of a selection of projects by artists from various backgrounds, including recent work and writings by indigenous artists as well as local and international artists, I wish to point to some of the shortcomings they bring to light, and to how they engage us in some much-needed reflection about the technologies we generate and how they hold the potential to redefine us and the environment.
Our Internet-based digital culture is increasingly being determined by diverse forms of Artificial Intelligence (AI). Above all, the machine learning (ML) methods of so-called Deep Learning (DL) are significantly involved in the current transformation of information technologies. The latest successful implementations of DL have led to noteworthy advances in AI, but DL itself is not new at all. It has been known for decades under the connectionist paradigm of Artificial Neural Networks (ANN). While for decades ANN were considered a dead end in AI research, recent advances in computational capacities, coupled with new implementations, have consistently led to breakthroughs in a number of applications. For example, in 2012 Krizhevsky, Sutskever and Hinton were able to use ANN/DL methods to reliably train computers to semi-autonomously recognize and classify large numbers of images. This breakthrough serves as the bedrock for much work on machine vision and only became possible with cheap, fast and powerful GPUs. These new applications of ANN/DL methods have vast technical, ethical, economic, social and political implications that are increasingly being negotiated in public discourse. They are now deployed for identifying potential terrorists through vast surveillance networks, for producing sentencing guidelines and recidivism risk profiles in criminal justice systems, for demographic and psychographic targeting of bodies for advertising, propaganda or other forms of state intervention, and more generally for automating the processing of natural language, written and spoken, of photographs and images, and of motion pictures. All of these applications have been debated in public discourse, most notably in the recent Congressional hearings in the United States with Facebook founder and CEO Mark Zuckerberg. As the latter example clearly shows, AI technologies are also crucial to understanding the medial and political developments and transformations of the Internet.
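The connectionist principle behind ANN/DL, learning a classification from labelled examples rather than from hand-coded rules, can be sketched in miniature with a single-layer perceptron (a deliberately simplified stand-in for the deep networks discussed above; the function names and toy data are illustrative):

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights from labelled examples: nudge them towards
    every sample the model misclassifies (the perceptron rule)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = y - pred                      # 0 when the prediction is correct
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# The OR function learned from data instead of being programmed explicitly:
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 1]
w, b = train_perceptron(X, y)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

Deep learning stacks many such layers of learned weights; the 2012 breakthrough scaled this principle to millions of parameters trained on GPUs.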
Conversely, these technologies are heavily dependent on Internet platforms, applications and technologies, for instance with regard to access to large, correctly labelled data sets (e.g. program libraries such as TensorFlow or crowd-sourcing platforms such as Amazon Mechanical Turk). The aim of the book is to discuss the diverse political dimensions of Internet and AI technologies. Two closely interrelated perspectives are at the centre: on the one hand, there is the question of how AI approaches, not least with regard to their connections to the Internet, can be characterized as black box...