Welcome to the
Laboratory for Computational Neurodynamics and Cognition


About the Lab

Although it is something most of us do every day without effort, memorizing is in fact an incredibly complex task. For instance, the simple act of storing and retrieving a given perceptual pattern is something a computer cannot do with anything approaching human efficiency and robustness.

The CONEC lab aims to better understand how the human cognitive system accomplishes the complex tasks of creating (and enhancing) representations from patterns, as well as recognizing, identifying, categorizing, and classifying them. In particular, research focuses on a nonlinear dynamical systems perspective in which time and change are the key variables.

To understand how the human cognitive system works, we need to develop formal models. The CONEC lab uses recurrent artificial neural networks that are massively parallel and in which information is distributed among the units. The main objective is therefore the development of a general bidirectional associative memory (BAM) that can take into account supervised, unsupervised, and reinforcement learning while being constrained by neuroscience data. Through model development, we hope to gain a better understanding of how the brain works.
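A minimal classical BAM makes the recall idea concrete. The sketch below is a Kosko-style BAM with Hebbian outer-product learning, not the lab's generalized model; the two pattern pairs and layer sizes are arbitrary illustrations.

```python
import numpy as np

# Kosko-style BAM sketch (illustration only, not the lab's generalized model).
# Two bipolar pattern pairs are stored by Hebbian outer-product learning;
# recall iterates between the layers, keeping the old state on zero input.
X = np.array([[1, -1, 1, -1],
              [1, 1, -1, -1]])   # layer-x patterns (bipolar)
Y = np.array([[1, -1],
              [-1, 1]])          # associated layer-y patterns

W = X.T @ Y                      # outer-product (Hebbian) weight matrix

def recall(x0, steps=10):
    """Bidirectional recall: iterate x -> y -> x until the state settles."""
    x = np.asarray(x0, dtype=float)
    y = np.zeros(W.shape[1])
    for _ in range(steps):
        s = x @ W
        y = np.where(s == 0, y, np.sign(s))  # update layer y
        s = W @ y
        x = np.where(s == 0, x, np.sign(s))  # update layer x
    return x, y

# recalling from a stored pattern returns its associate, and flipping one
# bit of the input still retrieves the correct associate (noise tolerance)
```

For instance, `recall(X[0])` settles on the associate `Y[0]`, and a one-bit-corrupted version of `X[0]` still maps to `Y[0]`.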


Members

Director

Chartier, Sylvain


Director of the Laboratory for computational neurodynamics and cognition and Associate Professor, School of Psychology

Ph.D. (2004) – Psychology – Université du Québec à Montréal
B.Sc. (Honours) (1998) – Psychology – Université du Québec à Montréal
B.A. (1996) – Psychology – University of Ottawa

 

Room: VNR 3022

Work E-mail: Sylvain.Chartier@uOttawa.ca

Dr. Sylvain Chartier received the B.A. degree from the University of Ottawa in 1993 and the B.Sc. and Ph.D. degrees from the Université du Québec à Montréal in 1996 and 2004, respectively, all in psychology. His doctoral thesis was on the development of an artificial neural network for autonomous categorization. From 2004 to 2007, Dr. Chartier was a post-doctoral fellow at the Centre de recherche de l’Institut Philippe-Pinel de Montréal, where he conducted research on eye-movement analysis and classification. Since 2007, he has been a professor at the University of Ottawa.

Keywords

  • Recurrent Associative Memories
  • Nonlinear dynamical systems
  • Quantitative methods

Contact

Email: Sylvain.Chartier@uOttawa.ca

Room: VNR 3022

Affiliated Researcher

Cyr, André


Dr André Cyr graduated in medicine at the University of Montreal and completed a Ph.D. in informatics and cognition at the University of Quebec in Montreal. His primary field of interest is the general understanding of the phenomenon of intelligence, with possible links to the artificial intelligence domain. Learning and memory are the main topics of his research, studied from a bio-inspired computational perspective. His experimental methodology consists of developing artificial spiking neural networks (SNNs) that act as brain controllers for complete cognitive virtual and physical robotic agents. The subjects of investigation vary, but they commonly share the simulation of characteristics or behaviours of natural low-level intelligence, as in invertebrates. Research hypotheses are first explored in virtual scenarios; the SNNs are then embodied in physical robots for testing under real-world constraints. The resulting data are studied from the level of the synapse up to the behaviour of the cognitive agent. Two examples of current projects are the simulation of the visual attention phenomenon and learning the abstract concept of sameness/difference.

Keywords

  • General artificial intelligence
  • Cognition
  • Adaptive behavior
  • Learning and memory
  • Bio-inspired robotics
  • Computational neuroscience

Graduate Students

Berberian, Nareg


B.Sc. Psychology (2015)

Keywords

  • Analysis of electrophysiological data (Multi-electrode Utah Array; Single cell recordings; Calcium imaging; EEG)
  • Plasticity in networks of spiking neurons
  • Bio-inspired vision and learning in robotics
  • Associative memory
  • Decision-making

Email

nberb062@uOttawa.ca


Church, Kinsey

B.Sc. Psychology (2019)

Research Interests: Artificial neural networks, cognition, learning, behaviour, and artificial intelligence. My current project focuses on the exploration-exploitation trade-off in cognition.

Email: kchur026@uOttawa.ca


Rolon-Mérette, Damiem


Degrees: Honours B.Sc. in Psychology and Honours B.Sc. with a Major in Biochemistry and a Major in Psychology (Year)

Research interests: I am currently focusing on the mechanisms behind learning and memory in the human brain, more specifically associative learning. To do so, artificial neural networks are used to model the phenomenon in order to draw parallel conclusions about the human brain. Key concepts I am currently interested in include one-to-many associations, the role of context in associative learning, and how this can lead to general-purpose artificial intelligence.

Email: drolo083@uOttawa.ca


Rolon-Mérette, Thaddé


Since 2017, Ph.D. student in experimental psychology at the University of Ottawa.

Received a B.Sc. in biomedical sciences at the University of Ottawa in 2015.

Received a B.Sc. in psychology at the University of Ottawa in 2016.

His research interests span the areas of cognition and artificial neural networks, with an emphasis on:

  • Associative memories
  • Learning
  • Contexts
  • Growing architecture
  • Feature extraction
  • Deep learning

Ross, Matthew

Matt completed his Honours B.Sc. with specialization in psychology at the University of Ottawa and is currently doing his Ph.D. in experimental psychology under the supervision of Dr. Chartier. His research interests lie in the development and testing of artificial neural networks (ANNs) for modeling emergent properties of cortical circuitry, with a focus on learning. Recently, Matt has been focusing on biologically inspired models of vision and on real-world testing using robotic agents.

Honours Students


Alumni

Publications

Publications are organized by category:

  • Applications
  • Artificial neural networks
  • Feature extraction
  • Memory
  • Robotics
  • Spiking neural networks
  • Tutorials
  • Vision


Projects

Architecture Development and Nonlinearly Separable Tasks

Usually, bidirectional associative memories can only discriminate between two classes if a straight line can separate the data on a plane. Such supervised learning is encountered in many real-life situations. For example, associating a name with a phone number is a classic linear association; in logic, it is analogous to the OR gate. This type of classification is robust to noise and can generalize to new data. However, there are many real-life examples where linear separation does not hold. A well-known case is the fulcrum problem, in which humans integrate information across two dimensions (in this case, weight and distance); in logic, this is the XOR gate. Many multi-layer networks can accomplish nonlinearly separable tasks, but most of them lack biological plausibility. Therefore, if we want to increase the BAM model’s explanatory power, it should also handle nonlinearly separable tasks (1). This way, both unsupervised and complex supervised learning could be incorporated within the same model. One solution is to consider an iterative architecture development scheme (e.g., cascade correlation; (2)). It has been shown that such a technique enables networks to accomplish the task (3). Hence, another hypothesis is that a “growing” network could tackle this class of problems. To test this hypothesis, we will need to add a free parameter that decides when to “expand” the model (e.g., when a new unit needs to be recruited). In addition, we will need to study how each unit is connected and trained with regard to the initial BAM.

  1. Chartier, S., Leth-Steensen, C. & Hébert, M.-F. (2012). Performing Complex Associations Using a Generalized Bidirectional Associative Memory. Journal of Experimental & Theoretical Artificial Intelligence, vol. 24, no. 1, pp. 23-42.
  2. Fahlman, S. E., & Lebiere, C. (1990). The cascade-correlation learning architecture. Advances in Neural Information Processing Systems II, 524-532.
  3. Tremblay, C., Myers-Stewart, K., Morissette, L., & Chartier, S. (2013, July). Bidirectional Associative Memory and Learning of Nonlinearly Separable Tasks. In R. West & T. Stewart (Eds.), Proceedings of the 12th International Conference on Cognitive Modeling, Ottawa, Canada, pp. 420-425.
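The linear-separability limit can be illustrated with a toy example. The sketch below (an illustration of the general point, not the BAM from (1)) trains a single linear threshold unit with the perceptron rule: it masters the OR gate, but it can never get all four XOR cases right, because no single straight line separates them.

```python
import numpy as np

# A single linear threshold unit trained with the perceptron rule.
# It can learn OR (linearly separable) but not XOR (nonlinearly separable).
def train_perceptron(X, y, epochs=100, lr=0.1):
    w = np.zeros(X.shape[1] + 1)          # bias + one weight per input
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if w[0] + xi @ w[1:] > 0 else 0
            w[0] += lr * (target - pred)  # perceptron learning rule
            w[1:] += lr * (target - pred) * xi
    return [1 if w[0] + xi @ w[1:] > 0 else 0 for xi in X]

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
or_preds = train_perceptron(X, np.array([0, 1, 1, 1]))   # converges to OR
xor_preds = train_perceptron(X, np.array([0, 1, 1, 0]))  # always misclassifies at least one case
```

Solving XOR requires extra structure, which is exactly what an architecture-development scheme such as cascade correlation supplies by recruiting new units.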
Reinforcement Learning for Associative Memory

Both unsupervised and supervised learning are passive. In contrast, reinforcement learning implies that the network must be active: it must generate a possible action (output). The environment provides only the success or failure of a given solution. Therefore, in the case of failure, the network must try a new potential solution based on its encoded knowledge (exploitation). In some situations, the network may have exhausted all potential solutions and must therefore generate a novel one (exploration) (1). Because of the passive aspect of BAMs, few attempts have been made to implement reinforcement learning. First, a simple reinforcement learning technique was used (2). Then a better implementation, using Q-learning, was applied in a recurrent associative memory (3). In both cases, results showed the possibility of implementing reinforcement learning.

  1. Sutton, R. S. & Barto, A. G. (1998). Reinforcement Learning: An Introduction. Cambridge: MIT Press.
  2. Chartier, S., Boukadoum, M. & Amiri, M. (2009). BAM learning of nonlinearly separable tasks by using an asymmetrical output function and reinforcement learning. IEEE Transactions on Neural Networks, vol. 20, pp. 1281-1292.
  3. Salmon, R., Sadeghian, A. & Chartier, S. (2010). Reinforcement learning using associative memory networks, Proceedings of the International Joint Conference on Neural Networks, Barcelona, Spain, 7 pages.
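The flavour of the Q-learning rule used in (3) can be sketched in tabular form. The corridor task, parameter values, and epsilon-greedy policy below are generic placeholders in the style of Sutton & Barto (1), not the associative-memory implementation from the cited papers.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def q_learning(n_states=5, episodes=1000, alpha=0.5, gamma=0.9, eps=0.3):
    """Tabular Q-learning on a 1-D corridor: start in the middle;
    both ends are terminal, but only the right end gives reward 1."""
    Q = [[0.0, 0.0] for _ in range(n_states)]   # Q[state][action]; 0=left, 1=right
    for _ in range(episodes):
        s = n_states // 2
        while 0 < s < n_states - 1:
            # epsilon-greedy: mostly exploit encoded knowledge, sometimes explore
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 1 if Q[s][1] > Q[s][0] else 0
            s2 = s + 1 if a == 1 else s - 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            done = s2 in (0, n_states - 1)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])   # Q-learning update
            s = s2
    return Q

Q = q_learning()
# after training, "right" has the higher value in every interior state
```

The update uses only the scalar reward signal, which is the active, trial-and-error regime that distinguishes reinforcement learning from passive associative learning.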
Plasticity in Developing Neural Circuits

Activity-dependent changes in synaptic transmission arise from a large number of mechanisms known collectively as synaptic plasticity. Synaptic plasticity can be divided into three broad categories: (1) long-term plasticity, where changes persist for hours or longer and support learning and memory; (2) homeostatic plasticity, where synapses and neurons maintain excitability and connectivity despite abrupt changes resulting from experience-dependent plasticity; and (3) short-term plasticity, where changes in synaptic strength occur over milliseconds to minutes. In neural circuits, neurons coding for loaded items exhibit patterns of activity that are clearly distinguishable from those of neurons that stay at a baseline level of activity. Within this scheme, it remains unknown how the collective contribution of short-term, long-term, and homeostatic plasticity results in the gain, maintenance, or loss of information in neural circuits. To tackle this problem, it becomes important to examine the synergistic interaction between these distinct, yet ubiquitous, mechanisms of plasticity in neural circuits.

  1. Berberian N., Ross M., Chartier S., Thivierge J.P. (2017). Synergy Between Short-Term and Long-Term Plasticity Explains Direction Selectivity in Visual Cortex. IEEE Symposium Series on Computational Intelligence (IEEE SSCI), 1-8.
  2. Costa R P., Froemke R. C., Sjöström P. J., & van Rossum M. C. W. (2015). Unified pre- and postsynaptic long-term plasticity enables reliable and flexible learning, Elife, vol. 4, pp. 1–16.
  3. Mongillo G., Barak O., & Tsodyks M. (2008). Synaptic Theory of Working Memory. Science, vol. 319, no. 5869, pp. 1543–1546.
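The short-term component can be made concrete with the facilitation/depression dynamics used in the synaptic theory of working memory (3). The sketch below is a Tsodyks-Markram-style model; the parameter values are placeholders for illustration, not constants fitted to data.

```python
import math

# Tsodyks-Markram-style short-term plasticity sketch (placeholder parameters).
# u: release probability (facilitation), decays back to U with time constant tau_f.
# x: available resources (depression), recovers to 1 with time constant tau_d.
def tm_synapse(spike_times, U=0.2, tau_f=1.5, tau_d=0.2):
    """Return the relative synaptic efficacy u*x at each presynaptic spike."""
    u, x, last = U, 1.0, None
    efficacies = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            u = U + (u - U) * math.exp(-dt / tau_f)      # facilitation decays
            x = 1.0 + (x - 1.0) * math.exp(-dt / tau_d)  # resources recover
        u = u + U * (1.0 - u)        # spike: increase in release probability
        efficacies.append(u * x)     # efficacy at this spike
        x = x * (1.0 - u)            # spike: vesicle depletion
        last = t
    return efficacies

# for a 20 Hz burst with these placeholder parameters, depression outweighs
# facilitation, so efficacy declines across the burst
eff = tm_synapse([0.0, 0.05, 0.10, 0.15])
```

Because the two variables interact on different timescales, the same equations can produce facilitation or depression depending on the parameters and spike rate, which is the kind of interaction studied in the project above.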
Learning Relational Concepts

Relational concept learning is the process of abstracting rules between stimuli without any precise reference to their physical features. Several animal species show this capacity (1). For example, Above/Below is one of the spatial relational concepts that natural agents can learn, even agents as small as invertebrates (2). Moreover, more than one relation can be learned at the same time (3). Even though empirical data are available from small neural organisms, a precise circuit implementing relational concept learning remains to be found. This cognitive phenomenon can also be investigated with computational tools, such as artificial spiking neurons controlling a robot (4). One objective of our lab is to challenge our models by incorporating relevant facts about this cognitive process from an artificial intelligence perspective.

  1. Zentall, T. R., Wasserman, E. A., & Urcuioli, P. J. (2014). Associative concept learning in animals. Journal of the experimental analysis of behavior, 101(1), 130-151.
  2. Avarguès-Weber, A., Dyer, A. G., & Giurfa, M. (2010). Conceptualization of above and below relationships by an insect. Proceedings of the Royal Society of London B: Biological Sciences, rspb20101891.
  3. Avarguès-Weber, A., Dyer, A. G., Combe, M., & Giurfa, M. (2012). Simultaneous mastering of two abstract concepts by the miniature brain of bees. Proceedings of the National Academy of Sciences, 201202576.
  4. Cyr, A., Avarguès-Weber, A., & Theriault, F. (2017). Sameness/difference spiking neural circuit as a relational concept precursor model: A bio-inspired robotic implementation. Biologically Inspired Cognitive Architectures, 21, 59-66.
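A deliberately simple toy sketch (not the cited spiking circuit) shows why a relational code generalizes: if the detector operates on the element-wise difference between two stimuli rather than on the stimuli themselves, the same/different rule transfers to stimuli it has never encountered. The pair generator and threshold below are arbitrary illustrations.

```python
import numpy as np

# Toy "sameness" detector: it sees only the element-wise difference of the
# two stimuli, so its decision is independent of their physical features.
rng = np.random.default_rng(0)

def make_pairs(n, dim=8):
    """Random binary stimulus pairs, labelled 1 (same) or 0 (different)."""
    pairs, labels = [], []
    for _ in range(n):
        a = rng.integers(0, 2, dim).astype(float)
        if rng.random() < 0.5:
            pairs.append((a, a.copy())); labels.append(1)
        else:
            b = rng.integers(0, 2, dim).astype(float)
            while np.array_equal(a, b):                 # ensure truly different
                b = rng.integers(0, 2, dim).astype(float)
            pairs.append((a, b)); labels.append(0)
    return pairs, labels

def sameness(a, b, threshold=0.5):
    # relational code: any feature mismatch drives the difference signal up
    return int(np.abs(a - b).sum() < threshold)

pairs, labels = make_pairs(200)
acc = float(np.mean([sameness(a, b) == y for (a, b), y in zip(pairs, labels)]))
# the detector is correct on every pair, including stimuli it was never tuned to
```

The interesting modeling question, which the spiking-network work above addresses, is how such a difference signal can emerge from learning in a biologically plausible circuit rather than being hand-wired as it is here.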

Contact Us

Laboratory for Computational Neurodynamics and Cognition

School of Psychology
Faculty of Social Sciences
University of Ottawa
136 Jean-Jacques Lussier
Vanier Hall, Room 3022
Ottawa, Ontario, Canada K1N 6N5
Map