
Graphs, Brains, and Gremlin

July 14, 2011

SMI32-stained pyramidal neurons in the cerebral cortex. Provided by UC Regents Davis under Creative Commons Attribution 2.5 License

What do graphs and brains have in common? First, they both share a relatively similar structure: Vertices/neurons are connected to each other by edges/axons. Second, they both share a similar process: traversers/action potentials propagate to effect some computation that is a function of the topology of the structure. If there exists a mapping between two domains, then it is possible to apply the processes of one domain (the brain) to the structure of the other (the graph). The purpose of this post is to explore the application of neural algorithms to graph systems.

The Rosetta Stone of Graphs

Before diving specifically into graphs and brains, it's important to see that at a certain level of generality, many real-world structures are graphical in form. The Santa Fe Institute article “A Rosetta Stone for Connectionism” (full download) presents mappings between a generic graph and various dynamical systems from the vantage point of connectionism (i.e. artificial neural networks).

“The term connectionism is usually applied to neural networks. There are, however, many other models that are mathematically similar, including classifier systems, immune networks, autocatalytic chemical reaction networks, and others. In view of this similarity, it is appropriate to broaden the term connectionism. I define a connectionist model as a dynamical system with two properties: (1) The interactions between the variables at any given time are explicitly constrained to a finite list of connections. (2) The connections are fluid, in that their strength and/or pattern of connectivity can change with time.”

In the article, Dr. J. Doyne Farmer goes on to provide the “Rosetta Stone” for connectionism. He draws parallels between abstract graphical structures (generic) and real-world systems that respect the generic graphical form (e.g. neural networks). The thesis table from the article is reproduced below with some slight modifications (italicized) to identify the terminology of the currently popularized property graph data model which is discussed in the second half of this post. Of particular interest to this post are the first two columns (generic/graph and neural network/brain).

| Generic | Neural Network | Classifier System | Immune Network | Autocatalytic Network |
|---|---|---|---|---|
| node, vertex | neuron | message | antibody type | polymer species |
| state, vertex properties | activation level | intensity | free antibody | polymer concentration |
| connection, edge | axon/synapse | classifier | chemical reaction | catalyzed chemical reaction |
| parameters, edge properties | connection weight | strength and specificity | reaction affinity | catalytic velocity |
| interaction rule, traversal | sum/sigmoid | linear threshold | bell-shaped | mass action |
| learning algorithm | Hebbian, back propagation | bucket brigade | clonal selection | approach to attractor |
| graph dynamics | synaptic plasticity | genetic algorithms | genetic algorithms | artificial chemistry rules |

Why are brains interesting for those doing software engineering, data management, and information retrieval? Simply put, the brain provides a unique information processing/management style that is very different from the style of today’s popular relational and document databases. For the brain/graph, the world is a fuzzy realm of similarities and associations amongst highly interconnected data. For the relational/document database, the world is discrete and exact with data being atomic and reductionistic. Today’s graph databases provide a data organization model that mimics many of the structural qualities of the brain and as such, can efficiently evaluate neurally-inspired algorithms.

Spreading Activation and the Brain

Spreading activation is an algorithm that simulates the real-world process of a cascade of action potentials (neuron depolarizations). When a neuron reaches a certain level of depolarization, it releases neurotransmitters which, when in sufficient quantity and of the excitatory type, can trigger the depolarization of any number of adjacent neurons. This wave of action potentials across a neural network is known as a spreading activation.
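Before turning to Gremlin, the mechanics of spreading activation can be sketched in a few lines of plain Python. The graph, node names, edge weights, and threshold below are invented purely for illustration:

```python
# A toy spreading activation: nodes accumulate weighted input from firing
# neighbors and "fire" once their accumulated activation exceeds a threshold.
def spread(graph, active, threshold, max_steps=10):
    activation = {}
    for _ in range(max_steps):
        fired = set()
        for source in active:
            for target, weight in graph.get(source, []):
                activation[target] = activation.get(target, 0) + weight
                if activation[target] > threshold:
                    fired.add(target)
        if not fired:
            break  # the cascade has died out
        active = fired
    return activation

# A hypothetical three-neuron pathway.
graph = {'retina': [('thalamus', 60)],
         'thalamus': [('visual cortex', 60)],
         'visual cortex': []}
print(spread(graph, {'retina'}, 50))
```

Activation that fails to exceed the threshold simply dies out, which is the filtering behavior the Gremlin fragments in the second half of this post implement.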

Sensations to Concepts

This “spreading of activation” throughout the brain yields human cognition. Here is a particular example based on vision: light enters the eye, hits the retina, and triggers the depolarization of certain ganglion cells in the retina, which propagate a spreading activation through to the thalamus and then off to the visual cortex in the back of the brain. It is in the visual cortex and up through the rest of the cerebral cortex that the spreading activation is “interpreted” (i.e. propagated) via a hierarchy of neurons that denote ever more invariant structures. Invariant can be understood as another word for concept. For instance, irrespective of seeing the front of a lion, the side of a lion, or even a drawing or sound of a lion, the conception of a lion is realized. Irrespective of the constant barrage of seemingly random signals entering through the human sensory system, a stable, consistent world of objects is ultimately the conscious experience of the human. For the human, the world is not a 2D space of rapidly changing bits of color. Instead, it’s a 3D space of stable objects and their physical and conceptual relationships to one another. Mental concepts suppress/compress/generalize sensory novelty.

Where do concepts begin? Do they begin in the upper levels of the cortical hierarchy? The answer is “no.” Concepts start immediately. Every step of a spreading activation triggers more complex concepts—i.e. higher-level invariants. The further the propagation, the more invariant (abstract/general/encompassing) the concept. A sensation activates neurons which denote simple concepts such as lines at different angles. These action potentials then trigger more complex concepts such as a cylinder, which merge with color concepts and fur concepts to yield a lion’s tail and, ultimately, the complex concept of a lion. A “lion” is the excitation of various pathways that uniquely converge to elicit the conception/qualia of a lion (see “the binding problem“). Spreading activation can be expressed as the process of the branching and merging of a sensory signal through various layers of neural tissue, where each layer contains neurons which denote more abstract invariants.

Learning and Behavior

Concepts are triggered from the realization of other concepts. Humans learn the associations between concepts through experience. Quite simply, at the neural level, in Hebbian-based learning, “neurons that fire together, wire together.” The co-occurrence of action potentials in a cluster of neurons leads them to “bundle,” such that in the future, if one neuron fires, there is a high probability that the others will fire as well. Given this rule, humans associate lions with danger and danger with running. The associations between concepts may elicit behavior (“that’s a lion, run!”) by directing the propagation of the spreading activation to the motor cortex. The motor cortex translates conceptual actions into the manipulation of muscles that ultimately yields the human’s behavior in the world. On and on, in a constant interactive process (“the lion is gaining, run faster!”), we are tossed about reality. We spend so much of our conscious lives embalmed in a web of high-level abstractions that we never realize the insanity of the relentless novelty right in front of our eyes. For we “are” the cortex.
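The Hebbian rule itself is a one-line update. As a rough Python sketch (the edge list and concept names are invented for illustration), every edge whose two endpoints fired in the same step is strengthened:

```python
# Hebbian learning on an explicit edge list: "neurons that fire together,
# wire together." Edges whose endpoints are both in the fired set are
# strengthened; all other weights are left untouched.
def hebbian_update(weights, fired):
    for (a, b), w in weights.items():
        if a in fired and b in fired:
            weights[(a, b)] = w + 1
    return weights

# Invented associations between concepts.
weights = {('lion', 'danger'): 3, ('danger', 'run'): 5, ('lion', 'water'): 1}
hebbian_update(weights, {'lion', 'danger', 'run'})
print(weights[('lion', 'danger')])  # → 4
```

Repeated co-activation of lion and danger keeps strengthening that edge, so over time the danger concept becomes ever more likely to fire whenever the lion concept does.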

Gremlins in the Brain

When data is in a graph database and optimized for graph traversals, it's possible to take advantage of the algorithms distilled from cognitive neuroscience research. In fact, with graph traversal languages like Gremlin, neurally-inspired data management and computing is only a few keystrokes away. This section draws parallels between the neural algorithms discussed previously and those aspects of Gremlin that make such algorithms feasible to implement.

NOTE: All the examples presented are as of Gremlin 1.2-SNAPSHOT+. These code fragments rely on a few tweaks made since Gremlin 1.1.

Action Potentials

A traversal description will propagate traversers (i.e. gremlins) over vertices and edges in the graph. These traversers can update the state of a vertex (or edge) by simply manipulating its properties (refer to the first column of the table above). For example, given a set of root/start vertices, the following code will increment the activation/depolarization of each vertex adjacent to the root vertices.

root.out.sideEffect{it.activation++}

As a vertex becomes more activated, it may “fire” if a certain threshold of activation is reached. When the vertex fires, it provides activation to its adjacent vertices. Building on the previous code fragment, spreading activation continues if and only if the activation has reached a particular threshold. The following pattern can be used to simulate this behavior, where aggregate will ensure a breadth-first evaluation.

root.out.sideEffect{it.activation++}.aggregate.filter{it.activation > threshold}

In a weighted graph, not all edges are created equal. In such situations, the firing of vertex A may have less of an effect on vertex B than vertex C has on B. This is the case when the strength of a tie is a function of the edge weight. To ensure that the activation level of a vertex respects the edge weight, the following code fragment can be used. The code simply saves the edge weight to x and then applies it to the vertex’s activation.

root.outE.sideEffect{x=it.weight}.inV.sideEffect{it.activation+=x}.aggregate.filter{it.activation > threshold}

To proceduralize this computation, represent it as a single step called spread which takes a threshold parameter.

Gremlin.defineStep('spread', [Vertex,Pipe], { def threshold ->
  _().outE.sideEffect{x=it.weight}.inV.sideEffect{it.activation+=x}.aggregate.filter{it.activation > threshold}
})

In the newly abbreviated form, the following code iterates a spreading activation for some indeterminate number of steps (until no more vertices are activated) with the parameterized threshold arbitrarily set to 50.

root.spread(50).loop(1){true}

In summary, given a set of root vertices, a spreading activation is propagated through the graph, where only those pathways that receive enough stimulation/depolarization are followed.

NOTE: The activations at each step are not reset back to 0. As such, activation builds up over the course of the propagation. A simple modification to the filter step rectifies this situation. However, for the sake of simplicity/readability, it was left out.

filter{go=(it.activation > threshold); it.activation=0; go}

Hebbian Learning

When a cluster of neurons synchronously fires, they are reacting to the same stimuli and thus can be seen as different interpretations of the same phenomenon. In a similar fashion, concepts are associated with each other when their co-realization occurs (e.g. water is wet).

root.spread(50).gather{ def vertices ->
  vertices.each{ a ->
    vertices.each{ b ->
      if(a != b) {
        a.outE.filter{it.inV.next() == b}.each{ it.weight = it.weight + 1 }
      }
    }
  }
  vertices
}.scatter
In the above code, at each step of the propagation, “vertices that fire together, wire together.” For those vertices that already share an edge, the strength/weight of their edge is incremented. One of the fascinating aspects of the Hebbian algorithm is that as the graph is being processed, its structure is changing to reflect in situ learning. In this way, query and manipulation are coupled.

Finally, it is possible to make the Hebbian code fragment into a single step (called, say, hebb) as was previously done for spreading activation. Together, spreading activation with Hebbian-based learning can be expressed in Gremlin as follows:

root.spread(50).hebb.loop(2){true}

A graph is a generic structure. The traversal is a generic process. Traversals over graphs have many similarities to spreading action potentials over brains. This post presented the mapping between graphs and brains and provided example Gremlin code snippets that mimic standard neural algorithms. There are many variations on the themes presented that can be explored and tailored to the developer’s particular problem requirements. Moreover, given that most graph databases make use of the property graph data model, hybrid algorithms that mix semantics with spreading activation can be explored to various fruitful ends. To conclude, a collection of interesting extensions is itemized.

  1. Inhibitory traversals: there are excitatory and inhibitory neurotransmitters. The function of inhibition is to reduce noise in a signal, accentuate conceptual boundaries, and ultimately ensure that a spreading activation only goes down particular pathways.
  2. Forgetting: over time, low-weighted edges can be pruned or specific regions of the graph can be “lesioned” to promote new growth in response to future stimuli.
  3. Hierarchical architectures: invariants can be realized by making use of structural hierarchies within the graph. This supports classification as well as concept learning and association.
  4. Neural growth: new vertices can be introduced into the graph in order to support the generation of yet more concepts and mappings to stimuli.
  5. Processing regions: it is possible to compartmentalize particular areas of the graph and wire them together as a way of organizing the structure of the graph into functional subgraphs.
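As a hint of how the first extension might look, inhibition can be modeled simply as negative edge weights: an inhibitory edge subtracts from its target's net input, so a node only fires when excitation outweighs inhibition. A rough Python sketch (graph, names, and weights invented):

```python
# One step of spreading activation with inhibitory (negative) edge weights:
# a node fires only if its net input, excitation minus inhibition, exceeds
# the threshold.
def step(graph, fired, threshold):
    net = {}
    for source in fired:
        for target, weight in graph.get(source, []):
            net[target] = net.get(target, 0) + weight
    return {node for node, total in net.items() if total > threshold}

# 'b' inhibits 'c' with a negative weight.
graph = {'a': [('c', 60)], 'b': [('c', -30)]}
print(step(graph, {'a'}, 50))       # excitation alone fires 'c'
print(step(graph, {'a', 'b'}, 50))  # with inhibition, 'c' stays below threshold
```

In Gremlin terms, this would correspond to edges whose weight property reduces, rather than increments, a vertex's activation, pruning noisy pathways from the propagation.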


Hawkins, J., Blakeslee, S., “On Intelligence,” Times Books, September 2004.

Rodriguez, M.A., Ham, M.I., Gintautas, V., Kunsberg, B.S., “A Prospectus on the Obstacles Inhibiting the Implementation of Advanced Artificial Neural Systems – Part 1,” Decade of Mind IV Conference, Albuquerque, New Mexico, January 2009.

Riesenhuber, M., “The HMAX Model,” The Laboratory for Computational Cognitive Neuroscience at Georgetown University.

Rodriguez, M.A., “From the Signal to the Symbol: Structure and Process in Artificial Intelligence,” Center for Nonlinear Studies Post Doctorate Seminar, Los Alamos National Laboratory, Los Alamos, New Mexico, November 2008.

Langer, S.K., “Philosophy in a New Key: A Study in the Symbolism of Reason, Rite, and Art,” Harvard University Press, January 1957.

Healy, M.J., Caudell, T.P., “Ontologies and Worlds in Category Theory: Implications for Neural Systems,” Axiomathes, 16(1-2), pp. 165-214, Springer, 2006.

Rodriguez, M.A., “Grammar-Based Random Walkers in Semantic Networks,” Knowledge-Based Systems, 21(7), pp. 727-739, Elsevier, 2008.

Note on images: All non-captioned images from Wikipedia are public domain.
