The development of “systems of knowing and understanding” provides universal languages researchers can employ to tackle the myriad challenges presented in the development of Artificial Intelligences.
In the video posted here, Andrew Ng offers a brief ontological examination of the key issues surrounding Artificial Intelligence in his presentation, “The Future of Robotics and Artificial Intelligence” (Stanford University, STAN 2011).
Gruber (1992) argued that “An ontology is an explicit specification of a conceptualization. The term is borrowed from philosophy, where an Ontology is a systematic account of Existence. For AI systems, what ‘exists’ is that which can be represented. When the knowledge of a domain is represented in a declarative formalism, the set of objects that can be represented is called the universe of discourse” (Link).
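To make Gruber’s definition a little more concrete, here is a minimal sketch in Python of an ontology as a declarative specification: a universe of discourse plus explicit relations over it. The concepts and relations below are illustrative assumptions of mine, not drawn from Gruber’s paper or Ng’s talk.

```python
# A minimal, hypothetical sketch of Gruber's idea: an ontology as an
# explicit, declarative specification of what "exists" in a domain.
# All concept names and relations here are illustrative assumptions.

# The universe of discourse: the set of objects the system can represent.
CONCEPTS = {"Robot", "Sensor", "Camera", "Microphone", "Environment"}

# Declarative axioms over those concepts: (subject, relation, object).
AXIOMS = {
    ("Camera", "is_a", "Sensor"),
    ("Microphone", "is_a", "Sensor"),
    ("Robot", "has_part", "Sensor"),
    ("Robot", "situated_in", "Environment"),
}

def is_a(sub, sup):
    """Check subsumption by following is_a links transitively."""
    if sub == sup:
        return True
    return any(s == sub and is_a(o, sup)
               for (s, r, o) in AXIOMS if r == "is_a")

# Under this specification, a Camera "exists" as a kind of Sensor.
print(is_a("Camera", "Sensor"))  # True
print(is_a("Robot", "Sensor"))   # False
```

The point of the sketch is only that, once the domain is written down declaratively, questions about what exists and how things relate become mechanically answerable.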
In his presentation, Ng suggested that two main issues are paramount in the development of AI: Control and Perception. Control refers to a robot’s ability to navigate and interact in physical spaces. Perception refers to a robot’s ability to “see and understand” the world around it. The exposition of the narrative focused mainly on the dynamic issues related to a robot’s perception of its environment, its ability to contextualize the information it gathers, and its ability to “read and understand” its social environments.
Ng draws from neuroscientific approaches to argue for the development of foundational, simple algorithms that can drive robotic AI abilities to “perceive.” AI neural networks can be crafted, informed by and patterned after the human neurobiological systems that control visual and auditory processing. This draws out an interesting area of inquiry that I will explore in future posts: namely, how does our “human” understanding of AI (our collective cultural technological competencies and cultural technological value structures) inform the way we actually conceptualize and create AI?
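The biologically inspired model Ng alludes to can be illustrated with a single artificial neuron: a weighted sum of inputs passed through a nonlinearity, loosely abstracting how a biological neuron integrates and fires. The sketch below is a generic example of that model, not Ng’s algorithm; the input and weight values are arbitrary.

```python
import math

def sigmoid(z):
    """Squash a real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum plus bias, then a
    sigmoid nonlinearity. A loose abstraction of biological firing."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Example: two "sensory" inputs feeding one neuron.
# The specific numbers are illustrative assumptions.
activation = neuron(inputs=[0.5, 0.9], weights=[0.4, -0.2], bias=0.1)
print(activation)
```

Networks of such units, layered and trained on sensory data, are the substrate Ng suggests could underlie a general “perception” algorithm.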
In his 2006 presentation at the Human-Robot Interaction Seminar (Fachbereich Informatik, Universität Dortmund) titled “Recognition and Simulation of Emotions,” Kleine-Cosack discussed the exigencies surrounding AI perception and emotion recognition. I will examine the issues of emotion perception in AI presented in his paper in future posts. In sum, he argued that “the acceptance of autonomous and especially humanoid robots will be directly dependent on their ability to recognize and simulate emotions” (Link). This provides an especially useful context in which to place Ng’s overview of AI cognitive perception.
AI abilities to place sensory (perceptual) information in social context, and the development of unique AI responses to those sensory stimuli, are a whole other ballgame. Yet the formalization of “systems of knowing” provided here takes a large step toward integrating robots into human experiences.
Philosophical inquiry has demonstrated the traps inherent in developing formalized (ontological) systems and vocabularies, yet I am reminded of a phrase from my favorite science-fiction opus, Dune, when Thufir Hawat (one of the many characters in the Dune series who embodies the dynamic range of transhumanist expression) states, “the first step in avoiding a trap is to know that the trap exists.”