Invited Speakers

The confirmed invited speakers for the Cognitive Network Science satellite workshop at NetSci17 are:



Joseph Austerweil

Austerweil Lab, Department of Psychology, University of Wisconsin, US


Analyzing semantic memory retrieval using network theory

An outstanding problem in using network-based analyses in cognitive psychology is how to determine the networks to use for a domain. Most researchers construct semantic networks from word association norms that aggregate the responses of thousands of participants. This is expensive and precludes analyses at the individual level. We discuss a novel technique for efficiently estimating semantic networks from fluency data. Given a set of fluency lists from a group of individuals, it uses hierarchical Bayesian inference to estimate each individual’s semantic network. It assumes human memory retrieval produces a fluency list by performing a random walk on an unobservable network, where we only observe the first visit to a node. We validate the technique through simulations and behavioral experiments and demonstrate that as few as three lists per participant are needed to obtain an accurate estimate of each individual’s semantic network. The semantic networks estimated by our technique explain human similarity data better than other state-of-the-art techniques. We will conclude by presenting an easy-to-use web interface hosted on the Austerweil lab website to analyze any fluency data and estimate individual networks.
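The retrieval assumption above — a random walk where only first visits are observed — can be sketched in a few lines. The network and parameters here are toy illustrations, not the authors' actual data or implementation:

```python
import random

# Illustrative semantic network as an adjacency list (toy data).
network = {
    "dog": ["cat", "wolf", "bone"],
    "cat": ["dog", "mouse"],
    "wolf": ["dog", "moose"],
    "bone": ["dog"],
    "mouse": ["cat"],
    "moose": ["wolf"],
}

def censored_random_walk(network, start, n_steps, seed=0):
    """Walk the network for n_steps; record each node only on its first visit.

    Repeated visits are censored, mimicking the assumption that a fluency
    list shows only the first retrieval of each word.
    """
    rng = random.Random(seed)
    current = start
    fluency_list = [current]
    seen = {current}
    for _ in range(n_steps):
        current = rng.choice(network[current])
        if current not in seen:
            seen.add(current)
            fluency_list.append(current)
    return fluency_list

print(censored_random_walk(network, "dog", 20))
```

Inverting this generative process — inferring the hidden network from several such censored lists — is what the hierarchical Bayesian machinery in the talk addresses.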


Gareth J. Baxter

Complex Systems and Random Networks Group, Department of Physics and I3N, University of Aveiro, Portugal


Using a mathematical model to study language change

Language change is the process by which a new “way of saying the same thing” replaces an old one in a speech community. These changes may be at the level of sounds, words, vocabulary, grammar or any other element of language, and a change may take decades or centuries. Together with my collaborators, I have developed an agent-based mathematical model based on Croft’s usage-based, evolutionary account of language change. A key feature of the model is the central role played by the symmetric social contact and asymmetric social influence networks. By carefully considering the form of these networks in different situations, and including essential elements in the model, we have been able to examine several phenomena observed in real language change. I will review some key results, and use them to illustrate the benefits (and limitations) of such stochastic models of complex systems.
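The interplay of a symmetric contact network and asymmetric influence weights can be sketched roughly as follows. This is a minimal, heavily simplified caricature of an utterance-selection-style dynamic — the update rule, networks, and parameters are illustrative assumptions, not the published model:

```python
import random

# Each agent holds a probability x of using the new variant. When two agents
# meet on the (symmetric) contact network, the speaker-listener pair exchanges
# tokens, and the listener nudges its value toward what it heard, scaled by an
# (asymmetric) influence weight. All values below are toy choices.
contact = {0: [1, 2], 1: [0, 2], 2: [0, 1]}      # who talks to whom (symmetric)
influence = {(0, 1): 1.0, (1, 0): 0.2,            # how much j's speech sways i
             (0, 2): 1.0, (2, 0): 0.2,            # (asymmetric weights)
             (1, 2): 0.5, (2, 1): 0.5}
x = {0: 0.9, 1: 0.1, 2: 0.5}                      # current variant usage
LAMBDA = 0.05                                     # update (learning) rate
TOKENS = 10                                       # utterances per meeting

rng = random.Random(42)
for _ in range(2000):
    i = rng.choice(list(contact))
    j = rng.choice(contact[i])
    heard = sum(rng.random() < x[j] for _ in range(TOKENS)) / TOKENS
    spoke = sum(rng.random() < x[i] for _ in range(TOKENS)) / TOKENS
    # Agent i updates toward a mix of its own and j's observed usage.
    target = (spoke + influence[(i, j)] * heard) / (1 + influence[(i, j)])
    x[i] += LAMBDA * (target - x[i])

print({k: round(v, 2) for k, v in x.items()})
```

Even this caricature shows the key lever: changing the influence weights, with the contact network held fixed, changes which variant the community drifts toward.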

Ramon Ferrer i Cancho

Departament de Ciencies de la Computacio, Universitat Politecnica de Catalunya, Spain


A breakpoint in the decay of the distribution of syntactic dependency lengths

R. Ferrer-i-Cancho and Carlos Gomez-Rodriguez

The syntactic structure of a sentence can be modelled as a tree where vertices are words and edges indicate syntactic dependencies. The length of a dependency is defined as the distance between the words involved in the linear order of the sentence. It is well-known that the sum of dependency lengths of a sentence is smaller than expected by chance. Furthermore, the probability that a dependency has a certain length decreases quickly as length increases. Here we provide massive evidence of a breakpoint in the decay of that probability with the help of dependency treebanks from different languages and two annotation styles. The breakpoint is typically located at length two and separates an initial regime of fast decay from a regime of a slower decay. This can be paradoxical if one expects that pressure for dependency length minimization due to interference and decay of activation does not reduce at longer distances. We suggest that the paradox can be solved in light of the now-or-never bottleneck put forward by Christiansen & Chater. We also suggest a relationship between the breakpoint and the span of short-term memory.
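The two quantities involved — dependency length as linear distance, and the chance baseline it is compared against — can be made concrete on a toy sentence. The annotation below is an illustrative example, not taken from the treebanks analysed in the talk:

```python
import random
from collections import Counter

# Toy sentence: 0=The 1=dog 2=chased 3=the 4=cat (illustrative annotation).
# Each edge links a head to a dependent, both indexed by sentence position.
edges = [(1, 0), (2, 1), (2, 4), (4, 3)]

def dependency_lengths(edges):
    """Length of a dependency = distance between head and dependent
    in the linear order of the sentence."""
    return [abs(h - d) for h, d in edges]

lengths = dependency_lengths(edges)
print(sorted(Counter(lengths).items()))  # → [(1, 3), (2, 1)]

def random_baseline(edges, n_words, trials=1000, seed=0):
    """Mean total dependency length when the word order is randomly
    permuted while the tree is kept fixed: the 'expected by chance' value."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        perm = list(range(n_words))
        rng.shuffle(perm)
        total += sum(abs(perm[h] - perm[d]) for h, d in edges)
    return total / trials

# Attested orders tend to have a smaller total length than the baseline.
print(sum(lengths), random_baseline(edges, 5))
```

The distribution printed first (counts of each length) is the object whose decay, aggregated over whole treebanks, shows the breakpoint discussed in the talk.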



Elisabeth Karuza

Thompson-Schill Lab, Center for Cognitive Neuroscience, University of Pennsylvania, US

Community structure based on shared visual features guides acquisition of object categories

Elisabeth A. Karuza, Sharon L. Thompson-Schill, Mariya Bershad, and Danielle S. Bassett

Evidence suggests that human learners exploit the topological properties underlying sequentially experienced events as they develop representations of coherent units in their environment. For example, learners are remarkably sensitive to events that densely co-occur with each other in time (i.e., temporal community structure). Here, we extend this work beyond the temporal domain, asking whether learners show similar sensitivity when topological properties are instead defined by visual similarity (i.e., feature-based community structure). To test this possibility, we constructed a set of novel objects whose visual characteristics conformed to a graph with robust community structure: each node represented an object and each edge represented a pair of objects sharing precisely one visual feature. Thus, while holistically unique, objects within a community tended to share features with one another. Objects in different communities showed little feature overlap. During initial training, participants were exposed to a randomized stream of the novel objects and instructed to detect via button press the occasional presence of an “oddball” component. After the exposure phase, we probed the extent to which participants successfully acquired community-based knowledge. In a series of experiments, we first provide evidence that learners could indeed distinguish between communities in the input available to them. Most compellingly, we then show that they were able to generalize this knowledge to classify previously unseen objects. In sum, we propose that the computation of graph-based regularities constitutes one powerful mechanism through which the naïve learner performs the essential task of constructing object categories.
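The stimulus construction — nodes as objects, edges between pairs sharing exactly one visual feature — can be illustrated in miniature. The feature assignments below are toy values chosen to produce two communities, not the actual stimuli:

```python
from itertools import combinations

# Each object is a set of visual features (toy values). Two objects are
# linked iff they share precisely one feature, as in the stimulus design.
objects = {
    "A1": {"red", "round", "striped"},
    "A2": {"red", "square", "dotted"},
    "A3": {"round", "square", "plain"},
    "B1": {"blue", "tall", "glossy"},
    "B2": {"blue", "short", "matte"},
    "B3": {"tall", "short", "fuzzy"},
}

edges = [(a, b) for a, b in combinations(objects, 2)
         if len(objects[a] & objects[b]) == 1]
print(edges)
```

In this miniature, the A objects pairwise share one feature each (and likewise the B objects), while A–B pairs share none, so the graph splits into two fully connected communities with no cross-community edges — a small-scale analogue of the robust community structure in the experiment.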

Yoed N. Kenett

Thompson-Schill Lab, Center for Cognitive Neuroscience, University of Pennsylvania, US

The complex role of modularity in semantic memory structure

A key characteristic of many networks is their modularity – the extent to which they can break apart into smaller modules, or communities. A module in a network is a cluster of nodes that are more densely linked to other nodes within the same cluster than to nodes outside of their cluster. At the brain level, a modular organization of neural structural and functional networks is considered a fundamental principle. In fact, breakdown in brain network modularity has been related to neuropathology. What is the role of modularity in semantic memory structure? Based on a series of studies, I will discuss the significance of modularity in semantic memory structure in relation to language development, creativity, and atypical populations. Furthermore, I will present an analytical examination of the implications of a network being too modular. Our work illustrates a complex role of modularity in semantic memory: on the one hand advantageous for language development and retrieval from semantic categories (“modules”) and on the other hand inhibiting flexible thought by limiting spread of information through the semantic network.



Massimo Stella

Institute for Complex Systems Simulation, University of Southampton, UK


Modelling the Mental Lexicon via Percolation, Markov Chains and Multiplex Networks

Massimo Stella and Markus Brede

Empirical research has shown that word similarities have an impact on learning, storing and retrieving words from the mind, hence the importance of a network representation of this mental lexicon of word relationships. In our work we proposed a series of quantitative null models for phonological networks, where nodes represent words and links represent phonological similarities (i.e. two phonetic transcriptions having edit distance one). Our null models, based on percolation and Markov processes, suggest the presence of constraints in the assembly of real words, such as (i) the avoidance of large degrees and (ii) the avoidance of triadic closure, consistent with previous empirical findings. We also extended previous analyses of the mental lexicon by adopting a multiplex network framework, including (i) phonological similarities, (ii) synonyms and (iii) free associations. This multi-layered structure was used for investigating how phonological and semantic relationships influence word acquisition: when mainly similar sounding words are learned, the lexicon grows according to the local structure of the whole multiplex.
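The link criterion used here — two transcriptions at edit distance one — is straightforward to compute. A minimal sketch on toy transcriptions (illustrative strings, not a real phonological corpus):

```python
from itertools import combinations

def levenshtein(a, b):
    """Edit (Levenshtein) distance between strings a and b,
    computed with the standard one-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Toy phonetic transcriptions; link words at edit distance exactly one.
words = ["kat", "bat", "hat", "kart", "dog", "dig"]
edges = [(a, b) for a, b in combinations(words, 2)
         if levenshtein(a, b) == 1]
print(edges)
```

Applied to a full set of phonetic transcriptions, this construction yields the phonological layer of the multiplex; the synonym and free-association layers are built from their own relation lists over the same node set.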