September 26, 2016


Human Level Artificial Intelligence 2016: Artificial General Intelligence and then some (Part 1)

Credit: CC0 Public Domain

(TechXplore)—At its inception, the field of Artificial Intelligence (AI) sought to create computers with general intelligence analogous to our own. This proved too challenging and elusive, leading AI research to focus more narrowly on intelligent systems capable of performing only problem- and domain-specific tasks – giving rise to narrow, or weak, Artificial Intelligence. That said, interest in creating systems possessing human-like (and potentially greater) general, or strong, Artificial Intelligence has reemerged under the term Artificial General Intelligence (AGI). However, because the term Artificial Intelligence is often used indiscriminately to describe both narrow AI and AGI, confusion among the general population often ensues.

Enter the Artificial General Intelligence Society – a nonprofit organization dedicated to promoting the study and design of AGI systems, as well as to facilitating the publication and dissemination of AGI knowledge through conferences, publications and other venues. In particular, the annual AGI Conference Series on Artificial General Intelligence – now in its ninth year – has been fundamental to the revitalization of AGI through interdisciplinary research and novel approaches to understanding intelligence.

This year's conference, AGI-16 (held July 16-19 at the New School in New York City, with proceedings to be published in Springer's Lecture Notes in AI series and the papers available online), had a new wrinkle: for the first time it was part of the Human-Level Artificial Intelligence 2016 (HLAI-16) event, along with the 2016 Annual International Conference on Biologically Inspired Cognitive Architecture (BICA 2016), the Eleventh International Workshop on Neural-Symbolic Learning and Reasoning (NeSy'16), and the Fourth International Workshop on Artificial Intelligence and Cognition (AIC 2016).

Given the number of organizations present at this year's conference, the volume of papers and breadth of subjects presented is easy to imagine. Accordingly, selected talks and panel discussions will be summarized across a range of research areas: cognitive models, consciousness, emotion, and Virtual Reality in Part 1; and neuromorphic architectures, robotics, and creativity – as well as links to videos of an AGI Tutorial, panel discussions and Prize Awards – in Part 2.

Stephen Grossberg, Wang Professor of Cognitive and Neural Systems and a Professor of Mathematics, Psychology, and Biomedical Engineering at Boston University, gave a keynote lecture titled Towards Solving the Hard Problem of Consciousness: The Varieties of Brain Resonances and the Conscious Experiences that they Support – and a hard problem it is indeed. Consciousness seems to us to be an actual thing, but in neuroscience it is often discussed in terms of qualia – that is, the qualitative properties of our subjective first-person experience of (in its broadest definition) sensory perceptions, somatic sensations, emotions, and cognitive states. His talk focused on evidence that he says supports his Adaptive Resonance Theory (ART) – discussed in his 2013 paper Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world[1] – which he asserts solves the hard problem of consciousness by virtue of being "the most advanced cognitive and neural theory, with the broadest explanatory and predictive range, of how advanced brains autonomously learn to attend, recognize, and predict objects and events in a changing world." One potential issue, however, is his statement that to explain our experience of qualia, "a theory of consciousness needs to link brain to mind by modeling how brain mechanisms give rise to conscious psychological experiences, notably how emergent properties of several brain mechanisms interacting together embody parametric properties of conscious psychological experiences" – because although ART is a well-formed thesis, current hypotheses do not specify how subjective experiences actually arise from the brain, and mind can itself be seen as qualia. Nevertheless, ART has been successfully applied to large-scale engineering and technology applications, and two ART components – Complementary Computing and Laminar Computing – are pertinent to biological intelligence.
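Beyond the theory, ART also denotes a family of concrete, implementable neural network algorithms. As a rough illustration of the resonance-and-vigilance mechanism at ART's computational core – a minimal Fuzzy ART-style sketch, not Grossberg's full theory of consciousness – consider the following Python code; the vigilance (rho) and learning-rate (beta) parameters follow standard ART terminology, while the class itself is ours.

```python
import numpy as np

class FuzzyART:
    """Minimal Fuzzy ART sketch: an input "resonates" with a stored
    category only when their match exceeds a vigilance threshold;
    otherwise a new category is recruited. Illustrative only."""

    def __init__(self, vigilance=0.75, alpha=0.001, beta=1.0):
        self.rho = vigilance    # match threshold for resonance
        self.alpha = alpha      # choice parameter
        self.beta = beta        # learning rate (1.0 = fast learning)
        self.w = []             # category weight vectors

    def train(self, x):
        x = np.asarray(x, dtype=float)
        i = np.concatenate([x, 1.0 - x])   # complement coding
        # Rank categories by the choice function |i ^ w| / (alpha + |w|)
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(i, self.w[j]).sum()
                                     / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:          # resonance: attend and learn
                self.w[j] = (self.beta * np.minimum(i, self.w[j])
                             + (1 - self.beta) * self.w[j])
                return j
        self.w.append(i.copy())            # mismatch reset: new category
        return len(self.w) - 1

net = FuzzyART(vigilance=0.8)
for pattern in ([0.9, 0.1], [0.85, 0.15], [0.1, 0.9]):
    print("pattern", pattern, "-> category", net.train(pattern))
```

Resonance here is what the theory associates with attention and learning: an input either matches an existing category closely enough to be learned into it, or triggers a reset and the recruitment of a new category.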

In What are the Computational Correlates of Consciousness?[2], James Reggia, Garrett Katz and Di-Wei Huang addressed the consciousness problem from a different perspective, one based on cognitive phenomenology, which they define as "the idea that our subjective experiences include deliberative thought processes and high-level cognition." Reasoning that cognitive phenomenology may offer a more robust way to employ computational neuroscience – the multidisciplinary investigation of neural information processing – to identify "computational correlates of consciousness in neurocomputational models of high-level cognitive functions that are associated with subjective mental states," the researchers have compiled a compendium of correlate candidates grounded in biologically inspired cognitive architectures, as a step towards resolving the mind-brain challenge and laying a foundation for a conscious computational device.

Expanding on an earlier work[3], Riku Sekiguchi, Haruki Ebisawa and Junichi Takeno of Meiji University presented a pre-publication paper entitled Study on the Environmental Cognition of a Self-evolving Conscious System, which discusses a simulation investigating a mechanism for developing self-consciousness using neural network-based "consciousness modules," a single module being termed a Module of Nerves for Advanced Dynamics (MoNAD). The researchers reported success in architecting a system that they say is "relevant to the development of self-cognition and self-consciousness." More specifically, they add that the system was designed so that three subsystems (Association, Reason and Emotion/Feelings) interoperate in a manner that simulates an "independent conscience" performing "reasonable" behavior – that is, the system autonomously calculates and minimizes total disadvantage (which the authors interpret as being analogous to "pain"). In so doing, they state that through learning the system "can suppress imitation behavior as a result of the cognition of another." This takes place by responding to a "behavior of avoiding pain after feeling the pain by oneself," which enables the system to "formulate cognition of oneself." The researchers conclude that their results may inform the theoretical understanding of how self-consciousness develops.
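The MoNAD internals themselves are specified in the researchers' papers rather than reproduced here. Purely to illustrate the reported control principle of minimizing total disadvantage ("pain"), the following hypothetical toy loop has an agent learn to prefer the action with the lowest experienced pain; every name and value in it is invented for illustration, and it is not the MoNAD architecture.

```python
# Hypothetical toy illustration of "minimizing total disadvantage":
# the agent predicts the "pain" of each candidate action from memory
# and avoids the worst. NOT the MoNAD architecture, just the principle.

def predicted_pain(action, memory):
    """Average pain previously experienced after this action (prior 0.5)."""
    history = memory.get(action, [])
    return sum(history) / len(history) if history else 0.5

def choose_action(actions, memory):
    return min(actions, key=lambda a: predicted_pain(a, memory))

memory = {}                      # action -> list of experienced pain values
environment = {"touch_flame": 1.0, "step_back": 0.1, "wait": 0.3}

for step in range(5):
    action = choose_action(environment, memory)
    pain = environment[action]   # pain actually felt after acting
    memory.setdefault(action, []).append(pain)
    print(step, action, "pain:", pain)
```

After one painful trial the toy agent stops "imitating" its initial choice and settles on the least painful action, loosely echoing the avoid-pain-after-feeling-it behavior the authors describe.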

In Modeling the Interaction of Emotion and Cognition in Autonomous Agents[4], Luis-Felipe Rodríguez, J. Octavio Gutierrez-Garcia and Félix Ramos addressed the design of cognitive architectures for Autonomous Agents (AAs) – that is, software that independently senses, makes decisions about, and acts upon its environment – that are behaviorally sensitive to emotional signals and are therefore perceived as believable, intelligent, and social. The researchers point out that while one way to realize this goal is to "incorporate processes that imitate those of human cognition and emotions," a suitable architectural context for modeling the interaction of emotion and cognition has yet to be specified. To that end, they propose developing Computational Models of Emotions that model the underlying mechanisms of emotions and incorporate input/output interfaces facilitating affective/cognitive interaction.
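The paper proposes such interfaces at the architectural level rather than as code. As a hypothetical sketch of what an input/output interface between an emotion model and a cognitive component might look like, consider the following, in which all class and method names (EmotionModel, appraise, and so on) are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Appraisal:
    """Output interface of the emotion model: a labeled emotional
    signal plus an intensity the cognitive side can weigh."""
    emotion: str
    intensity: float  # 0.0 .. 1.0

class EmotionModel:
    """Input interface: perceived events go in, appraisals come out.
    The appraisal rule here is a placeholder, not a validated model."""
    def appraise(self, event: dict) -> Appraisal:
        if event.get("threat", 0.0) > 0.5:
            return Appraisal("fear", event["threat"])
        return Appraisal("neutral", 0.0)

class CognitiveComponent:
    """Decision-making modulated by the emotion model's output."""
    def decide(self, options: list[str], appraisal: Appraisal) -> str:
        if appraisal.emotion == "fear" and appraisal.intensity > 0.7:
            return "retreat"           # strong emotion overrides deliberation
        return options[0]              # otherwise, default deliberation

agent_emotion = EmotionModel()
agent_cognition = CognitiveComponent()
event = {"threat": 0.9}
choice = agent_cognition.decide(["explore", "retreat"],
                                agent_emotion.appraise(event))
print(choice)   # -> retreat
```

The design point, in the spirit of the paper, is that the two components touch only through a typed interface (here, Appraisal), so either side can be swapped for a richer model without rewriting the other.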

In his talk Towards a Computational Framework for Function-Driven Concept Invention – based on the paper[5] authored with his colleagues Danny Gomez-Ramirez and Kai-Uwe Kühnberger – Nico Potyka discussed a new implementation applying Conceptual Blending Theory (a proposed explanation of human innovation, published in 1998 by Gilles Fauconnier and Mark Turner[6], that focuses on our ability to combine different and even contradictory concepts) to computational concept invention – a computational method of "blending of two thematically rather different conceptual spaces [that] yields a new conceptual space with emergent structure, selectively combining parts of the given spaces whilst respecting common structural properties."[7] The key difference in the researchers' novel approach is that it is based primarily not on the structural similarity of concept descriptions, but rather on a concept's function.
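To make the blending operation concrete, here is a hypothetical sketch in which two conceptual spaces are represented as attribute dictionaries and combined by a deliberately simple selection rule. The "houseboat" example is a staple of the blending literature, but the merging rule below is ours and far simpler than the function-driven approach the authors propose.

```python
# Hypothetical sketch of blending two conceptual spaces represented as
# attribute dicts. Shared attributes carry over; conflicting ones are
# resolved by selectively projecting from one input space. This toy rule
# stands in for the function-driven selection the authors actually propose.

house = {"medium": "land", "function": "dwelling", "occupant": "resident"}
boat = {"medium": "water", "function": "transport", "occupant": "passenger"}

def blend(space_a, space_b, prefer_from_a=()):
    blended = {}
    for key in space_a.keys() | space_b.keys():
        if key in space_a and key in space_b and space_a[key] != space_b[key]:
            # conflict: selectively project from one input space
            blended[key] = space_a[key] if key in prefer_from_a else space_b[key]
        else:
            blended[key] = space_a.get(key, space_b.get(key))
    return blended

# "houseboat": dwelling function from house, medium from boat
print(blend(house, boat, prefer_from_a=("function", "occupant")))
# -> {'function': 'dwelling', 'medium': 'water', 'occupant': 'resident'}
# (key order may vary)
```

The emergent structure of a real blend – properties neither input space has on its own – is precisely what this toy rule cannot produce, which is why the selection of what to project is the interesting research problem.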

Several talks on Virtual Reality also presented inventive perspectives. One such discussion was John Sowa's The Virtual Reality of the Mind, which positioned Virtual Reality in the context of biological evolution seen within the framework of semiotics – specifically, Peirce's Theory of Signs. Sowa states that "in evolutionary terms, imagery developed hundreds of millions of years before symbolic or language-like systems of cognition." By treating imagery as inclusive of diagrams and written symbols, relating cognitive architectures' symbols to perception and action, and situating Artificial Intelligence within the broader category of cognitive science, Sowa arrives at a theory of Virtual Reality for Cognitive Architectures (VRCA) that applies to a very broad range of species rather than exclusively to humans.

Eugene Borovikov, Ilya Zavorin and Sergey Yershov presented On Virtual Characters that Can See – an imaginative and fascinating take on Virtual Reality in which a Virtual Character (VC) is endowed with visual sensory algorithms that allow it to identify and communicate with real-world intelligent beings. Based on their publication On Vision-Based Human-Centric Virtual Character Design: A Closer Look at the Real World from a Virtual One[8], the proposed VC would also be equipped with a robust cognitive architecture (CA) allowing it to learn from its interactions with real-world beings – perhaps to the point of being able to reason, thereby becoming a virtual being with human-like intelligence.
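The chapter treats such characters at the design level. As a minimal sketch of the kind of vision front-end a VC might use to notice a real-world person, the following uses OpenCV's stock Haar-cascade face detector on a single webcam frame – our choice of detector for illustration, not necessarily the authors'.

```python
import cv2

# Stock frontal-face Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def virtual_character_sees(frame):
    """Return bounding boxes of human faces the VC could react to."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

camera = cv2.VideoCapture(0)          # default webcam
ok, frame = camera.read()
if ok:
    faces = virtual_character_sees(frame)
    if len(faces):
        print(f"VC: I see {len(faces)} person(s); initiating interaction.")
    else:
        print("VC: no one in view.")
camera.release()
```

In the authors' fuller vision, detections like these would feed the VC's cognitive architecture, which is where recognition of a specific person, dialogue, and learning from the interaction would take place.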

More information: Read Part 2: Human Level Artificial Intelligence 2016: Artificial General Intelligence and then some (Part 2): https://techxplore.com/news/2016-09-human-artificial-intelligence_1.html

[1] Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world, Neural Networks, Volume 37, January 2013, pp. 1-47, ISSN 0893-6080, doi:10.1016/j.neunet.2012.09.017

[2] What are the computational correlates of consciousness?, Biologically Inspired Cognitive Architectures, Volume 17, July 2016, pp. 101-113, doi:10.1016/j.bica.2016.07.009

[3] Development of a Self-evolving Conscious System, Procedia Computer Science, Volume 71, 2015, pp. 23-24, doi:10.1016/j.procs.2015.12.182

[4] Modeling the interaction of emotion and cognition in Autonomous Agents, Biologically Inspired Cognitive Architectures, Volume 17, July 2016, pp. 57-70, doi:10.1016/j.bica.2016.07.008

[5] Towards a Computational Framework for Function-Driven Concept Invention, Artificial General Intelligence, Lecture Notes in Computer Science, Volume 9782, 25 June 2016, pp. 212-222, doi:10.1007/978-3-319-41649-6_21; preprint available on ResearchGate, January 2016

[6] Conceptual integration networks, Cognitive Science, Volume 22, Issue 2, 1998, pp. 133-187, doi:10.1207/s15516709cog2202_1

[7] Blending in the Hub: Towards a computational concept invention platform (PDF), Proceedings of the 5th International Conference on Computational Creativity (ICCC 2014), June 10-13, 2014, Ljubljana, Slovenia

[8] On Vision-Based Human-Centric Virtual Character Design: A Closer Look at the Real World from a Virtual One, Chapter 1 in Integrating Cognitive Architectures into Virtual Character Design, June 2016, ISBN 9781522504542, doi:10.4018/978-1-5225-0454-2.ch001

