You can build networks in three ways: interactively, with network definition files, or by importing networks from other simulators. Annie will support your workflow, however you like to work. If you like working bottom-up, you can build entire networks interactively in the dashboard, even one neuron and one synapse at a time. When you're ready, push the "Build Network" button and Annie will automatically generate your network for you. Or, if you prefer working top-down, you can specify your entire network ahead of time at any desired level of resolution.
In what follows we'll use capital letters to denote Annie's data OBJECTS, to distinguish them from the colloquial usage of similar words. For example, a NEURON is one of Annie's data structures, whereas a neuron is just a neuron.
A NETWORK is built exactly the way you think it is. There are nuclei, and each nucleus contains some cell types, arranged within the nucleus. Each cell type has a characteristic behavior, defined by a set of neuron parameters like tau, size, compartmentation, and so on - and each neuron has connectivity, in the form of synapses (or synapse-like structures). Each synapse and each neuron can interact with the extracellular space in a variety of ways, and this is an area where Annie excels: for instance, she's explicitly aware of glia (like astrocytes and oligodendrocytes).
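The hierarchy just described (NETWORK contains nuclei, nuclei contain cell types, cells carry neuron parameters and synapses) can be pictured as nested containers. Here is a minimal Python sketch of that containment; the class and field names are hypothetical illustrations of the structure, not Annie's actual data objects:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the containment hierarchy described in the text.
# These names mirror Annie's OBJECT vocabulary but are NOT her API.

@dataclass
class Synapse:
    source: str                 # presynaptic cell group
    target: str                 # postsynaptic cell group
    kind: str = "EXCITATORY"
    weight: float = 0.1

@dataclass
class Cell:
    name: str
    tau: float = 0.1            # membrane time constant
    synapses: list = field(default_factory=list)

@dataclass
class Nucleus:
    name: str
    cells: list = field(default_factory=list)

@dataclass
class Network:
    name: str
    nuclei: list = field(default_factory=list)

# A NETWORK owns nuclei; a nucleus owns cell groups; connectivity hangs off the cells.
retina = Nucleus("RETINA", cells=[Cell("RODS")])
eye = Network("EYE", nuclei=[retina])
print(eye.nuclei[0].cells[0].name)   # RODS
```

The point is just the nesting: everything else in a network definition file attaches parameters to one of these levels.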
A network can be as simple or as complex as you like. This is a valid network definition file:
NEURON NAME ABC TYPE IAF TAU 0.1
SYNAPSE FROM ABC TO ABC TYPE INHIBITORY WEIGHT 0.1
You can do the same thing interactively with a few mouse clicks. Annie will show you complicated-looking forms that you can fill in at an excruciating level of detail, or you can just push the "OK" button and accept the defaults. The defaults will always work, and Annie will let you know if there are any problems or conflicts. If you're building small networks you'll probably want to do it interactively - you can do that faster than you can type the words. Annie starts adding value in large networks with geometry. For instance, here is a slightly more detailed network definition file:
NETWORK EYE CENTER (0, 0, 0) EXTENT (12000,12000,12000)
NUCLEUS RETINA CENTER (0,0,6000) EXTENT (10000,10000,2000) ORIENTATION (0,0,1.5)
CELL RODS CENTER (0,0,6300) EXTENT (10000,10000,5) NEURONS 400 ARRANGEMENT GRID_2
In this example, we've defined a network coordinate space of (-12000,-12000,-12000) to (12000,12000,12000). Annie will check that everything we subsequently define lives within this space (the primary reason for this is to account for, and control, edge effects). Within the network called EYE, we've defined one nucleus called RETINA, and within that there is one cell type called RODS. (We don't have any synapses yet, because we haven't defined any connections.) You don't have to actually type these files; Annie will type them for you. You can build things interactively, and whenever you're ready you can save your network in an easily readable and editable form.
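The containment check described here is simple interval arithmetic on CENTER/EXTENT boxes, where an extent is a half-width around the center. A sketch of the idea (the function name is invented for illustration):

```python
# Sketch of the containment check described above: every object defined
# inside the NETWORK must lie within the network's coordinate space.
# EXTENT is a half-width, so EXTENT (12000,12000,12000) around (0,0,0)
# spans -12000..12000 on each axis.

def within(center, extent, outer_center, outer_extent):
    """True if the box (center, extent) fits inside the outer box on every axis."""
    return all(
        oc - oe <= c - e and c + e <= oc + oe
        for c, e, oc, oe in zip(center, extent, outer_center, outer_extent)
    )

network = ((0, 0, 0), (12000, 12000, 12000))
retina  = ((0, 0, 6000), (10000, 10000, 2000))

print(within(retina[0], retina[1], *network))   # True: RETINA fits inside EYE
```

The RETINA above spans Z 4000..8000 and X/Y -10000..10000, all inside the network's -12000..12000 cube, so the check passes.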
We need to be specific about the coordinate axes. Coordinates are defined in terms of a "cat's-eye view": the negative X axis is to the left, and the positive X axis is to the right. Similarly, the negative Y axis is down, and the positive Y axis is up. The positive Z axis points away from the cat - that is, "farther away" from the retina - and the negative Z axis points behind the cat. In this network, the center has been defined as (0,0,0), which is where the cat is - or more precisely, the center of the network, which is the center of the cat's brain. The retina has been defined at Z location 6000, slightly in front of the center of the cat's brain, and it has a Z extent of 2000, so everything inside our retina (all of its layers) should live within Z coordinates 4000 to 8000.
Thus, we position our rods at Z location 6300, and note that these neurons have an extent of 5 along the Z axis. We are defining them to be 3-dimensional so that they will render in tools like Maya and Blender. Everything in Annie is 3-dimensional, just like it is in the real world. The 1- and 2-dimensional "sheets" of neurons are just convenient conceptual abstractions, and they help us put geometry together, but in real life everything is 3-dimensional. To emphasize this distinction, we've declared 400 neurons in a GRID_2 arrangement. Without any further directives about layout, Annie will figure out how to populate 20x20 neurons into a two-dimensional grid, using the 10000x10000 extents given. Extents are specified in terms of "distance away from center", so an extent of 10000 actually means a coordinate range of -10000 to 10000, which in turn defines a total coordinate extent of 20001 points. Thus we get 20 neurons along each axis of 20001 grid points, which means the neurons will be spaced 1000 units apart.
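That grid arithmetic can be checked directly. Here's a sketch of the layout described above; placing each neuron at the center of its grid cell is an assumption, chosen because it reproduces the 1000-unit spacing the text derives:

```python
# Sketch of the GRID_2 layout arithmetic: 400 neurons as a 20x20 grid
# over an extent of 10000 (i.e. coordinates -10000..10000 per axis).
# Cell-centered placement is an assumption for illustration.

def grid_positions(n_per_axis, extent):
    span = 2 * extent                 # total coordinate range: 20000
    spacing = span / n_per_axis       # 20000 / 20 = 1000 units
    start = -extent + spacing / 2     # center each neuron in its grid cell
    return [start + i * spacing for i in range(n_per_axis)]

xs = grid_positions(20, 10000)
print(len(xs), xs[1] - xs[0])   # 20 positions, 1000.0 units apart
```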
The units of distance are assumed to be microns in this case, but they could be anything. With the given coordinate space we get about 20 mm of retina, which is "almost" a whole retina but not quite - we get a "large patch of retina". Currently the tick times in Annie's simulator are set to tenths of milliseconds, which means time constants above 50 microseconds (or so) can be used. This covers most real-world biochemical activity. In the above example our nucleus (retina) is given a slight downward tilt, by specifying an orientation in terms of yaw, roll, and pitch. We could similarly specify the CELL group this way, and we can also override the default grid construction using a LAYOUT directive, so if we have 400 neurons and we want a 40x10 layout instead of a 20x20 layout, that's a quick and easy way to accomplish it.
Annie is immense and intense with geometry. She'll support you at any level you like. If you just want connection maps, you can generate those in seconds, or pull them from a library of existing transforms. Annie also understands 20 different mesh formats, including everything handled by meshio and a whole lot more. When specifying a CELL, there's a rich variety of geometric options, ranging from the exquisitely simple (location, size, shape, layout) to the fantastically complex (branching fractal trees of axons and dendrites, and synaptic insertion into neuropils, glomeruli, glia, and a host of other biologically realistic geometries). Annie understands layers, modules, capsules, and all manner of topology (like "nearness" in the mappings between curved manifolds). Much of the time, though, you can get away with saying DIVERGENCE 5, and you're done - Annie will connect each neuron to its 5 nearest neighbors in the target layer. If you prefer, you can define an axon geometry at the level of the CELL, and Annie will apply it to the neurons with the variance you specify. For example, in some brain areas we find neurons with directional axons that travel straight into neuropils, where they branch profusely. These geometries can be fully specified in text files, but it's a lot easier to export them to Blender, manipulate them, and import them back. Annie is not a mesh tool, although she provides some notable visualizations. Rather, she is the central exchange for importing and exporting your geometry to all the needed toolsets in all the required formats.
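The DIVERGENCE 5 rule amounts to a nearest-neighbor search. Here's a sketch, assuming plain Euclidean distance between source and target coordinates; the coordinates and function name below are invented for illustration:

```python
import math

# Sketch of "connect each neuron to its 5 nearest neighbors in the
# target layer". Euclidean distance is an assumption; the target grid
# below is a made-up 5x5 sheet with 1000-unit spacing.

def divergence(source_xy, targets, n=5):
    """Return indices of the n targets nearest to the source neuron."""
    ranked = sorted(range(len(targets)),
                    key=lambda i: math.dist(source_xy, targets[i]))
    return ranked[:n]

targets = [(x * 1000.0, y * 1000.0) for x in range(5) for y in range(5)]
print(divergence((2000.0, 2000.0), targets))  # the cell at (2000,2000) plus its 4 neighbors
```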
To take the network definition file to the next level, we can specify the details of each neuron type, for example:
CELL RODS CENTER (0,0,6300) EXTENT (10000,10000,5)
* NEURON_SIZE (2,2,5) VARIANCE (0.1, 0.1, 0.5)
N_NEURONS 400 ARRANGEMENT GRID_2
* length and diameter are given in microns
NEURON_SHAPE CYLINDER LENGTH 5 DIAMETER 2
NEURON_TYPE LINEAR RESTING_POTENTIAL -50 VARIANCE 3 TAU 0.1
Here we've given our photoreceptors a cylindrical shape, and we've defined them to be linear with a resting potential of -50 +/- 3 mV. They're also fast: they have a 100-microsecond membrane time constant. In any of Annie's files, anything preceded with an * is a comment. If we hadn't defined a NEURON_SHAPE, we could have uncommented the NEURON_SIZE directive to provide a shape in a different way. You don't have to define a shape, but it's helpful for rendering, because otherwise the rendering engine will just draw a sphere by default. You'll note that the Z-axis extent defined for the RODS matches the length of the photoreceptors as defined in the NEURON_SHAPE directive. (In the alternative NEURON_SIZE directive, the extent of the Z axis is also 5.) And note also that we've given our rods a diameter of 2 microns, which means that inside our coordinate extent of 20000 we can pack 10,000 rods along each axis, so this definition can actually give us a million rods or more if we want them. We can actually do this on a PC: each neuron takes up around 1000 bytes of memory, and 1 GB is well within the memory space of most modern PCs. However, 1 million neurons will not render into a 512x512 image, so many of Annie's visualizations will not work with networks this big. In practice a 100x100 grid can be easily visualized on a computer screen; anything with a higher resolution requires more sophisticated rendering tools (which may be included in the next version of Annie, depending on user demand).
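The memory claim is easy to sanity-check. The ~1000 bytes per neuron is the figure quoted in the text (an estimate, not a measurement):

```python
# Back-of-the-envelope check of the memory estimate quoted above:
# 1 million neurons at roughly 1000 bytes each.
bytes_per_neuron = 1000        # the text's estimated per-neuron footprint
n_neurons = 1_000_000
total_gb = bytes_per_neuron * n_neurons / 1e9
print(total_gb)                # 1.0 GB - within the memory of a modern PC
```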
We can expand our retina in a different way, by providing more neurons. Here for example is a basic definition of a small patch of retina. It's not a full definition of a real retina, but it's good enough to see the behavior of the inhibitory surrounds in the M-type ganglion cells. Perusing this lengthy example will illustrate many of the relevant concepts involved in network construction. For example, we've added an LGN to our network, and we've defined two external nuclei, one called EXTERNAL_LIGHT and the other called EXTERNAL_OPTIC_RADIATION. The concept of "external" nuclei and cell groups in Annie is much like the concept of "nodes" in other simulators (like Nengo), but also includes the Brian2 and NEST concept of "generators". A node is anything that serves as either a source or a sink of network information, so for example an external stimulus applied to a neuron would typically be applied through a node. In this network we have a node called LIGHT; it's defined as NEURON_TYPE EXTERNAL. And we have simulation definitions that say APPLY TEMPLATE TO LIGHT. What we really want is for light to impinge on our photoreceptors, but we know in advance that the photoreceptors are on the inside of the retina, and light has to go all the way through the vitreous and aqueous humors and the lens, and all the layers of retinal neurons, before it can impact a photoreceptor. Thus having a "node" gives us a convenient way of applying any filtering in the light pathway, if we need such a thing. But it also gives us a clean and trackable way of applying our stimulus in an isolated manner. If we use Annie's visualization tools to look at the activity in the LIGHT node, we'll see the combined effect of all the stimulus templates we're applying at any given point in time. This helps us debug our stimuli when we have complicated experimental scenarios. The goal is for Annie to be your experimental subject. She replaces the cat.
You can run the same kinds of experiments on Annie that you'd run on the cat. You can look at neurons and synapses, and even external electric and magnetic activity in the form of "fields".
You can see the full file here.
If we wanted to, we could have elaborated the LGN to include the interneurons, but we're just doing this for purposes of illustration right now. Hopefully you will notice the simple elegance of this approach: it thinks like a neuroscientist, and things are organized logically and geometrically. Every one of these directives has many more options; you can specify neurons and synapses down to the level of individual channel time constants and receptor subunit kinetics. But this example illustrates the basic approach to geometry. Annie uses primitives, like building blocks, to construct more elaborate structures. There are other ways of defining a retina: Annie can build a retina by replicating a "connection module", and a cerebral cortex can be built the same way. In the next release of Annie, when the interactive user interface is fully working, you'll be able to draw a module on the screen and have Annie generate a network for you.
In the example, we've laid out an overall geometry that puts light at Z coordinate 1000 and the optic radiation at Z coordinate 11,000. The inner plexiform layer is at z=4700 and the outer plexiform layer is at z=5800. The ganglion cells are at z=4000, and bipolar cells are at z=5000. Our photoreceptors are in the z=6000 neighborhood, and our LGN is around z=9000. So we have established a basically linear pathway, from the light at 1000 to the optic radiation at 11,000. But, to be biologically consistent, we've placed the photoreceptors deeper than the ganglion cells, so light has to get all the way through the retina before it can impact the photoreceptors. And of course we've established a basic set of connections consistent with known retinal anatomy. This is a working network; we can actually run it, and when we do we can see the effects of the stimuli directly on the computer screen. We can ask Annie to build this network for us, and instead of simulating it we can request that its structure be exported in a dozen different formats. One of the useful things we can do with this is bring the network into Maya or Blender, convert all of the geometric structures to meshes, and then re-import them back into Annie. Annie will then dutifully interpret all of the geometry in mesh form; for example, she will calculate unions and intersections between meshes and so on. This would be a workflow, for example, for an anatomist interested in creating a 3-D image of a brain structure. You can create a network inside Annie, export one of the cell types into Blender, change its geometry, and import it back into Annie. Which means you can trace the shape of your nucleus in the microscope, make a mesh out of it, and apply the mesh to Annie's cell group geometry. This is an enormously useful workflow with layered brain structures like the LGN and the superior colliculus.
You can start by defining your LGN as a cube with EXTENT (10000,10000,10000), import it into Blender where it will show up as a bunch of modifiable vertices and faces, bring your microscope image into Blender and align it with your network object, and simply move the vertices until they match the microscope image. Then re-import back into Annie, and you have a correctly shaped and scaled brain structure. This is also a great way to properly define the outlines of layers and so on.
Anyway, we need to stay with the basics for a minute before getting scientific. The first part of the example defines the cells, and the second defines the connections. You'll notice we've made light inhibitory on the photoreceptors, and everything in the feed-forward pathway has a divergence of 1, whereas everything in the lateral pathways has higher divergences. Also, the H1 horizontal cells and the amacrine cells have axons, whereas the rods and bipolar cells don't. A divergence of 5 means "connect to the 5 nearest neighbors" in the target cell group. Nearest neighbors are determined by registering the target coordinates with the source coordinates. Divergences can also be specified in terms of coordinate extents, as with the amacrine cells in the above example. A divergence of (2500,2500,5) means connect to anything within a radius of 2500 in the x-y plane, and we give the Z coordinate a little slop just in case we've declared the neuron locations with variance. One can also define a connection in terms of a MAP (as in the case of the on and off ganglion cells above). A map is like a function, or what other simulators sometimes call a "mask". A map is typically a function that defines how the connection should be made geometrically; for example, a very common map is Gaussian connectivity, where the density of synaptic connections is higher in the middle of the target area and falls off towards the periphery. A map is given by a .MAP file, and can be created and edited with Annie's map tool. One can also specify mapping functions directly using a FUNCTION directive.
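The two connection rules described here - an extent-style divergence like (2500,2500,5), and a Gaussian map - can be sketched as follows. These are illustrative stand-ins for the ideas, not Annie's actual implementations, and the sigma value is invented:

```python
import math

# Sketch 1: a divergence of (2500, 2500, 5) - connect to anything within
# an x-y radius of 2500, with a little slop of 5 along Z.
def within_divergence(src, tgt, radius_xy, z_slop):
    dx, dy, dz = (t - s for s, t in zip(src, tgt))
    return math.hypot(dx, dy) <= radius_xy and abs(dz) <= z_slop

# Sketch 2: a Gaussian "map" - connection strength is highest at the
# center of the target area and falls off towards the periphery.
def gaussian_weight(src_xy, tgt_xy, sigma=1000.0):
    d2 = (src_xy[0] - tgt_xy[0]) ** 2 + (src_xy[1] - tgt_xy[1]) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

print(within_divergence((0, 0, 5000), (2000, 1000, 5003), 2500, 5))  # True
print(gaussian_weight((0, 0), (1000, 0)))                            # weight one sigma away
```

In practice a map would be sampled to decide how many synapses to place at each target location; the weight function above is just the density profile.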
Annie can also generate connections dynamically. This isn't shown in the first example because we don't want to get too complicated too quickly, but you can tell Annie to create constrained fractal branching trees for you, that kind of thing. One of the most powerful uses of this feature is in developmental models, where axons are sprouting and synapses are being pruned. Annie is completely aware of moving geometries; there is a large library of behaviors related to ephrins and the like.
In the above example we have consistently specified TOPOGRAPHY POINT for the connections; this means they are point-to-point (topographic). This illustrates Annie's ability to line up geometries and perform calculations between them. There are many other ways to organize a network. You can have 6500 motor neurons in a clump of cells like the abducens nucleus, or you can have a very sophisticated modular architecture like the cerebral cortex. Annie handles it all. You can get very specific with the geometry: you can define the shapes of cortical sulci and gyri if you wish, using splines or NURBS or meshes or any other method the visualization tools support. In the unlikely event that you can define them analytically by providing equations, Annie will handle that too.
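Point-to-point (topographic) registration between cell groups of different sizes can be sketched by matching normalized positions: each source neuron connects to the target neuron at the corresponding fraction of the way across the target group. This is an illustrative guess at the idea, not Annie's algorithm:

```python
# Sketch of point-to-point (topographic) registration: source index i
# maps to the target index at the same normalized position, even when
# the two groups differ in size. Grid sizes below are made up.

def topographic_index(i, n_source, n_target):
    """Map source index i onto the matching target index."""
    return round(i * (n_target - 1) / (n_source - 1))

# 20 source neurons registered onto a 10-neuron target row:
pairs = [(i, topographic_index(i, 20, 10)) for i in range(20)]
print(pairs[0], pairs[-1])   # endpoints map to endpoints
```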
Now that you know how to get stimuli into the network, let's find out how to get information out. Annie has some built-in ways of visualizing network activity (for example, Annie's user interface employs Python's "panel" package, from the good folks at HoloViz, to enable a large list of spectacular interactive displays). More fundamentally, though, the biggest problem with simulations is "too much data"! The moment you get into the simulation business, you'll find yourself overwhelmed with data. A 1-second network simulation using IAF (simple) neurons can generate over 1 GB of output when all the neurons and synapses are being read at each tick. So instead of reading "everything", we only read the things we're interested in; that way the web sockets don't get overwhelmed. To tell Annie you're interested in something, you attach a "probe" to it. You can attach a probe to just about anything. If you want a time series of the membrane potential of neuron 1641, you can attach a probe to that particular variable, and its value at each tick will be sent to wherever the probe says it should be written. It could be a file, it could be a web socket, it could even be a custom device in the operating system. If you'd like a time series of every neuron in a cell group, just attach a probe to the entire group, and you'll get a CSV file with columns that say "Tick" and "Neuron ID", which you can read directly into Python as a Pandas dataframe and visualize immediately using any of the popular plotting packages (Matplotlib, hvPlot, VTK, Vega, etc.).
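Reading a probe's output back into Python is straightforward. The CSV layout below is a guess based on the "Tick" and "Neuron ID" columns mentioned above, with a hypothetical value column ("Vm") added; the standard library is used instead of Pandas so the sketch stays self-contained:

```python
import csv
import io

# Hypothetical probe output for neuron 1641. The "Vm" column name and
# the values are invented for illustration; only "Tick" and "Neuron ID"
# come from the text.
probe_csv = """Tick,Neuron ID,Vm
1,1641,-50.2
2,1641,-49.8
3,1641,-49.1
"""

rows = list(csv.DictReader(io.StringIO(probe_csv)))
trace = [(int(r["Tick"]), float(r["Vm"])) for r in rows]
print(trace[0])   # (1, -50.2)
```

With Pandas, the same file would be one `pd.read_csv` call, after which any of the plotting packages mentioned above can consume the dataframe directly.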
Finally, a word about interactivity. The whole point of Annie is to provide a useful level of interactivity for the neuroscience community. No one wants to type differential equations onto a computer screen, and no one wants to re-invent the wheel using TensorFlow. Annie can do the things that TensorFlow and Brian2 can't. I built Annie because I needed a transverse Hopfield network, and no other simulator could do that for me, not even the big powerful ones like TensorFlow. Such a simple requirement - and such an enormous gap in the research space! Development on Annie is ongoing. This is just the first version; consider it version 1.0. The next version will have a full-blown interactive user interface, so you'll be able to create definition files by drawing on the screen. The idea is: you spend 5 minutes with Annie to create your network, then go have a cup of coffee while Annie runs the simulation for you and saves all the requested data in an immediately useful form - so you actually look forward to getting back to work, because everything's ready for you, instead of facing hours of number crunching and reformatting just to get a visualization.
Right now, you can use Annie either interactively in a limited way, or via Python as either an application or a library. Annie has some primitive visualizations, and they're pretty cumbersome to use right now; this is on the hot list for enhancements, and ease of use will be dramatically improved in the next release. Meanwhile, the primary way of visualizing network activity is via probes and CSV files. The Python API is documented on a separate page. A typical workflow is to create the desired .TEM, .SIM, and .IN files, then edit the user.py file to redefine the SIMULATION_NAME and CELL_TO_RENDER, then run the simulation by saying "python annie.py". Annie will use a constellation of files named after your simulation name, with various extensions. Your SIMULATION_NAME should be the same as your .IN file; for example, if you tell Annie your SIMULATION_NAME is "EYE", Annie will look for an EYE.IN file.