Abstract
Since the beginning of the year, the European Union and United States have separately announced major initiatives in brain science. The latter is called the Brain Activity Mapping (BAM) Project2 and the size of the effort and its implications for science and medicine have been compared to the Human Genome Project. A key part of the effort involves developing new scientific instruments capable of observing the activity of large ensembles of neurons in awake behaving humans, with the goal of understanding the neural basis for cognition and diagnosing a wide range of brain disorders from Parkinson’s to Alzheimer’s.
The problem these instruments are intended to solve can be divided conceptually into two parts: recording and reporting. Recording involves sensing neural activity (membrane potentials, protein expression levels, calcium concentrations and their correlates) and encoding it for transmission. Reporting involves conveying the coded information from the locus of the recording — typically deep within the neural tissue of an awake subject — to some external computing or storage device.
The technical challenge involved in building these instruments is considerable, perhaps on a par with constructing the Large Hadron Collider (LHC), but while the LHC accelerator ring is 27 kilometers in circumference, the components comprising BAM instruments may include billions of nanoscale parts and be contained entirely within a human skull. This lecture explores several of the key technologies being considered to address the reporting problem, including nanoscale communication networks, micron-diameter fiber-optic cables, light and ultrasound microscopy, recombinant DNA and synthetic biology.
The European Commission will award a total of two billion euros to two projects, one of which focuses on basic research on the brain and brain diseases such as depression, Parkinson’s disease and Alzheimer’s, and may lead to new treatments. (Source: Reuters, Brussels, Monday, January 28, 2013)
The Obama administration is planning a decade-long scientific effort to examine the workings of the human brain and build a comprehensive map of its activity, seeking to do for the brain what the Human Genome Project did for genetics. (Source: New York Times, New York, February 17, 2013)
Without an appropriate instrument, it may be difficult if not impossible to infer the hidden causes that give rise to a given observable phenomenon. This is certainly true when it comes to understanding the brain. We are astute observers of human behavior, but it has been extraordinarily difficult to trace behavior back to the neural circuits and biological mechanisms that caused it. We can see in some detail the activity of a few neurons, or view as though through a cloudy glass the aggregate behavior of large collections of neurons. However, to make the next steps forward, we need to track millions if not billions of neurons, simultaneously recording activity at many levels of detail. To do so we will need to automate much of what was previously done by skilled scientists and exploit the exponentially increasing returns provided by modern computational technologies to make sense of the resulting deluge of data.
Once the membrane potential reaches the threshold for spike initiation, a spike is likely to soon follow and propagate along the axon to the synapses. If we could locate a nanoscale sensor in the axon hillock to record the local membrane potential, that would be quite useful. In fact, it would probably be almost as useful to simply record when the potential exceeds the threshold.
We have the technology for recording membrane potentials and calcium concentrations, but reading off these signals at scale is potentially problematic. The sensors are essentially biomolecules that fluoresce when certain local states obtain. The patterns of fluorescence are read off by a very sensitive optical imaging device called a two-photon excitation microscope.
However, a two-photon microscope relies on light and, while it is able to penetrate deeper than conventional light microscopes, the depth of penetration for state-of-the-art technology is limited to at most a few millimeters. BAM researchers would like to read off such information for millions of neurons simultaneously at millisecond resolution in an awake behaving human subject.
The challenge of scalable neuroscience is to build instruments that enable us to record the behavior of ensembles of billions of neurons at millisecond temporal resolution, where each neuron is a machine of incredible complexity, and then to infer from this virtual deluge of data — “tsunami” is perhaps the more apt metaphor — the function of individual neurons and predict the collective behavior of an entire brain in both its normal and pathological operating regimes.
For as long as we have had microscopes powerful enough to resolve individual neurons, scientists have been refining methods for imaging neural tissue, using specialized preparations that make neuron cell bodies stand out and employing ever more powerful devices, with scanning electron microscopes now common in academic labs. Once the tissue is prepared and an image taken, it is generally the task of a trained neurophysiologist to interpret the image and determine where one cell leaves off and another one begins. Having skilled humans in the loop, whether working with the tissue samples or interpreting images, doesn’t scale, and so research labs led by Winfried Denk at Max Planck and Sebastian Seung at MIT are developing robotic devices for handling the tissue and interpreting the results of imaging [5, 29].
It is possible to use solvents to break the bonds linking the target molecule and its complement antibody and wash away the dyes following one step of imaging in order to apply a different stain — that is to say, a stain involving new dyes with antibodies that target different molecules, proteins in the case of Smith’s work. However, you can only apply this “rinse and repeat” cycle so many times without significantly degrading the tissue sample. Currently Smith can image up to about 64 proteins before the tissue is compromised. How might you improve on this number? One possibility is to replace the fluorophore with a more complex molecule capable of encoding more bits. With seven fluorophores you can only encode the equivalent of about three bits, but if you had a molecule of DNA with seven base pairs and could associate each target molecule/antibody with a unique sequence of seven nucleotides, then you could distinguish between 4⁷ = 16,384 different molecules. Of course, you couldn’t simply read off the DNA sequence with EM, but you could slice each section into tiny cubes, sequence the DNA in each cube, assign each cube its coordinates in the tissue sample, and build a 3-D map annotated with the labels of the target molecules inferred from sequencing [3, 34, 39].
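A quick back-of-the-envelope comparison of the two coding schemes, in the Emacs Lisp idiom used in the footnotes (the function is mine, just for illustration):

```
;; Number of distinct labels expressible with an alphabet of
;; ALPHABET-SIZE distinguishable symbols and codes of length LENGTH.
(defun distinct-codes (alphabet-size length)
  (expt (float alphabet-size) (float length)))

(distinct-codes 4 7)     ;; => 16384.0 distinct seven-nucleotide barcodes
(/ (log 7.0) (log 2.0))  ;; => ~2.81 bits from seven distinguishable fluorophores
```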
Pretty amazing technology! Of course, there’s no chance of recording any dynamics, but still it could be a real game changer for the field of connectomics. Recent work [2] from Janelia Farm, also published in April, on imaging transparent zebrafish does offer opportunities for observing the dynamics of neural activity. In this case, it is only the skin that is transparent, and then only during the larval stage, but there are now also transgenic lines that retain their transparency as adults. Given the fish’s small size — 3 mm long, 2 mm thick, and 2.5 mm wide — once the skin is made transparent, two-photon microscopy can be used to image the entire brain of a living zebrafish.9
(a) The simplest recording device used by electrophysiologists is a metal needle insulated everywhere but at its tip, which is inserted into living tissue so as to rest in close proximity to a neuron, where it is used to measure extracellular local field potentials and thereby predict action potentials. The insulated portion of the needle is called the shank and the tip is called the electrode. There are many variations on this basic theme, often involving multiple electrodes spaced along the shank that are used to record from several sites simultaneously. Current state-of-the-art technology supports on the order of 1000 electrodes, arranged linearly along a single shank [20], in a planar configuration usually referred to as a multi-electrode array (MEA) [35], and, more recently, in devices consisting of many multi-electrode shanks arranged in several rows like the bristles of a brush with the base of each shank anchored to a rectangular baseplate, thereby allowing for the 3-D placement of thousands of electrodes [54]. The biggest disadvantage of this technology is that it is invasive and used in human subjects only when necessary, as in the case of guiding a surgeon in removing a brain tumor. Resolution: 10 micron resolution in z, 100 micron resolution in x and y, and millisecond temporal resolution. Improvements in x and y resolution are difficult due to tissue displacement and cellular damage resulting from device insertion.
(b) The second option is to use light of an appropriate frequency to illuminate the target tissue, penetrating to such depth as the signal allows, and then analyze the reflected signal to infer properties of the cells and their associated molecules and behavior. The method of one- and two-photon microscopy using near-infrared laser light is one of the more powerful technologies associated with this approach. In this method, molecules of interest are tagged with fluorescent dyes and then illuminated by laser light at a frequency such that one or two photons, depending on which variant of the technology is being employed, are sufficient to excite an electron in the dye molecule; when the electron relaxes from its excited energy state back to its ground state a few nanoseconds later, the molecule emits a photon. In the two-photon case, the emitted photon has a shorter wavelength than either of the excitation photons (see the sketch following this list). In the case of two-photon microscopy, the laser beam is focused on a small spot and scanned line-by-line over the sample tissue [26]. As a result, only dye molecules within the focal spot are excited. While a very useful technology, even with a near-infrared (NIR) laser operating between 800 nm and 2500 nm, penetration is limited to a few centimeters at best, and thus the technology has limited potential for recording from large-brained, awake, behaving animals. However, there is great promise for studies involving mice [24]. Resolution: ~0.5 mm² field of view, and ~2.5 micron lateral (x and y) resolution.
(c) Nuclear magnetic resonance imaging allows us to observe and manipulate the magnetic properties of certain atomic nuclei. An MRI machine uses a powerful magnet to induce a magnetic field that serves to align the spins of magnetic nuclei. A resonant radio-frequency (RF) pulse is used to produce a second magnetic field perpendicular to the first, which perturbs the alignment, causing the nuclei to precess much like a top spinning in a gravitational field. The frequency of precession depends on the properties of the nuclei and the strength of the magnetic field. We measure the RF signals emitted by the precessing nuclei using an RF receiver and a special coil, called a probe, that serves as an RF antenna inside the MRI machine; from the measured precession frequencies we can infer properties of molecules in a tissue sample. Not all atomic nuclei have the right magnetic properties, thus limiting what sort of molecules one can detect — hydrogen does, allowing us to study the movement of water molecules. In addition, the signals we are trying to measure are very weak, and hence the challenge is to build a device with a high-enough signal-to-noise ratio in a given localized region of the brain, and with a high-enough spatial resolution to measure the properties of molecules at the cellular level. State-of-the-art spatial resolution using experimental coils — not yet available in production machines — is on the order of 100 × 100 × 500 μm³ [27]. Temporal resolution depends on what you’re trying to measure and how much you have to sample from a region within a given recording volume or voxel to make measurements of the required accuracy [28, 6]. Resolution: MRI — 1 s temporal, 5 mm spatial; MEG — 1 ms temporal, 1 cm spatial; EEG — 1 ms temporal, 5 cm spatial.
(d) Biological organisms routinely generate, encode, transcode, transmit and decode electrical and chemical signals. The machinery for doing so is now being adapted to serve diverse technological purposes, from sequencing DNA, as in the case of third-generation sequencing technologies using nanopores [47], to optogenetics, in which naturally occurring light-gated ion channels originally used to excite or inhibit neural activity [8, 51] are now being developed as voltage sensors capable of resolving action potentials [7]. Methods for encoding digital information in the form of sequences of nucleotides in strands of DNA have been around for some time [16], and now we are seeing new ideas about using molecular machinery to measure proxies for electrical activity and store the resulting information on DNA [32, 53]. We’ll explore these and other related applications in more depth later in the talk.
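As a quick check on the physics in item (b): photon energy is E = hc/λ, so two 800 nm photons absorbed together deliver the energy of a single 400 nm photon, which is why the emitted fluorescence can be shorter in wavelength than either excitation photon. A minimal sketch in the Emacs Lisp style used in the footnotes:

```
(defconst planck-h 6.62607015e-34 "Planck's constant (J s).")
(defconst light-speed 299792458.0 "Speed of light (m/s).")

;; Energy in joules of a photon with WAVELENGTH in meters: E = hc/lambda.
(defun photon-energy (wavelength)
  (/ (* planck-h light-speed) wavelength))

;; Two 800 nm photons carry exactly the energy of one 400 nm photon:
(/ (* 2.0 (photon-energy 800e-9)) (photon-energy 400e-9)) ;; => 1.0
```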
High-intensity focused ultrasound developed for medical applications employs phased-array ultrasonics, in which an array of piezoelectric transducers is used to produce multiple pressure waves whose phase is adjusted by introducing delays in the electrical pulses that generate the pressure waves. By coordinating these delays, the focal point — point of highest pressure and thus highest temperature in the tissue — can be precisely controlled. The result is an instrument that can destroy a brain or breast tumor without cutting into the surrounding tissue. In the case of the brain, the cranium poses a challenge due to its variable thickness, but this can be overcome either by performing a craniotomy or by using a CT scan to construct a 3-D model of the skull and then generating a protocol based on this model that adjusts the delays to correct for aberrations in signal propagation due to the changes in thickness of this particular skull.
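The delay bookkeeping at the heart of phased-array focusing is simple to sketch. Here is a minimal illustration in Emacs Lisp; the linear array geometry, element spacing and the ~1540 m/s soft-tissue sound speed are my illustrative assumptions, not parameters of any particular instrument:

```
(defconst tissue-sound-speed 1540.0 "Approximate speed of sound in soft tissue (m/s).")

;; Firing delays (seconds) so that waves from elements at ELEMENT-XS (meters,
;; along a line) arrive simultaneously at the point (TARGET-X, TARGET-Z).
;; The farthest element fires first (zero delay); nearer elements wait.
(defun focus-delays (element-xs target-x target-z)
  (let* ((times (mapcar (lambda (x)
                          (/ (sqrt (+ (expt (- target-x x) 2.0)
                                      (expt target-z 2.0)))
                             tissue-sound-speed))
                        element-xs))
         (latest (apply #'max times)))
    (mapcar (lambda (tt) (- latest tt)) times)))

;; Five elements spaced 1 mm apart, focusing 40 mm deep below the center:
(focus-delays '(-0.002 -0.001 0.0 0.001 0.002) 0.0 0.04)
;; => roughly (32 8 0 8 32) nanoseconds
```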
First, the FUS method of induced BBB opening is applied — the rat undergoes a craniotomy to aid in beam focusing and power modulation, and then, prior to sonication, an ultrasound contrast agent (SonoVue [18], developed by Bracco Diagnostics) is intravenously injected to facilitate acoustic cavitation. The details regarding power and pulse length provide interesting insight into how to control for adverse effects in the test animals. Next, AuNRs10 (gold nanorods) are injected into the jugular vein and accumulate at the BBB-opening focus following sonication. Finally, the area to be monitored is illuminated by a tunable laser11 and then scanned by an ultrasonic transducer using a set of piezoelectric drive motors with a step size of 120 μm in each of two directions. The authors claim that the “experimental results show that AuNR contrast-enhanced photoacoustic microscopy successfully reveals the spatial distribution and temporal responses of BBB disruption area in the rat brains.”
The local reporters would forward information to a second type of reporting device more sparsely distributed and responsible for relaying the information received from the local reporters to external receivers not limited by size or power requirements. These relay devices could number in the thousands instead of millions, be somewhat larger than the more numerous and densely distributed reporters, and be located inside of the dura but on the surface of the brain, within fissures or anchored on the membranes lining the capillaries supplying blood to the brain and the ventricles containing cerebrospinal fluid.
We anticipate and analyze one such scheme in [17]. In line with our discussion of the acoustic spectrum, the above figure illustrates a second approach in which the sub-dural reporters communicate with and supply power to another class of (implanted) reporters using ultrasonic energy. In the paper describing this work [44], the implanted reporters are referred to as neural dust and we’ll refer to an individual implanted reporter as a dust mote. Dust motes are also responsible for recording measurements of the local field potential (LFP) that the dust motes then transmit to the sub-dural reporters. The dust motes are approximately cube shaped, measuring 50 μm on a side, and consist of a piezoelectric device and some electronics for powering the mote and transmitting LFP measurements.
A 100 μm mote designed to operate at a frequency of 10 MHz (λ = 150 μm) has ~1 dB attenuation at a 2 mm depth — the attenuation of ultrasound in neural tissue is ~0.5 dB/(cm MHz). A comparable electromagnetic solution12 operating at 10 GHz (λ = 5 mm) has ~20 dB attenuation at 2 mm. There are other complicating factors concerning the size of the inductors required to power EM devices and efficient antennae for signal transmission — displacement currents in tissue and scattering losses can increase attenuation to 40 dB in practical devices.
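These attenuation figures follow from the quoted ~0.5 dB/(cm MHz) rule of thumb; a quick sanity check in elisp (the linear model is a simplification):

```
;; One-way ultrasound attenuation in neural tissue, using the
;; ~0.5 dB/(cm MHz) figure quoted above.
(defun ultrasound-attenuation-db (freq-mhz depth-cm)
  (* 0.5 freq-mhz depth-cm))

(ultrasound-attenuation-db 10.0 0.2) ;; => 1.0 dB at 10 MHz, 2 mm deep
(ultrasound-attenuation-db 10.0 1.0) ;; => 5.0 dB at 10 MHz, 1 cm deep
```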
The small size of the implanted devices also complicates directly sensing the local field potential. The electrodes employed by electrophysiologists measure local field potentials at one or more locations with respect to a second, common electrode which acts as a ground and is located at some distance from the probe. In the case of LFP measurements made by dust motes, the distance between the two electrodes is constrained by the size of the device such that, the smaller this distance, the lower the signal-to-noise ratio — see Gold et al [25] for an analysis of extra- and intra-cellular LFP recording strategies for constraining compartmental models. The S/N problem can be ameliorated somewhat by adding a “tail” to each dust mote thereby locating the second electrode at some distance from the first one situated on the main body.
These lists of quantitative facts just scratch the surface of what you’ll need for any particular analysis. The Harvard BioNumbers site provides an extensive searchable database of useful numbers. For instance, I recently wanted to know about nucleotide misincorporation rates and this query provided me with the information I was looking for.
Certainly we are not going to position a probe at every location within a few microns of a synapse in the human brain. We may, however, be able to develop nanoscale machines and distribute them throughout the brain so that every neuron can comfortably accommodate a few thousand of these machines positioned strategically in its active synapses. These machines would be designed to record the passage of signaling molecules called neurotransmitters, which are the primary currency for exchanging information between neurons. Fortunately, given the state of the art in molecular biology, we don’t have to generate these machines de novo; rather, it seems plausible that we will be able to adapt existing cellular machinery from a variety of organisms to perform the basic sensing and communicating tasks required.
It is said that if you can imagine an operation that might be performed in a cell involving some manipulation of proteins or amino acids, then there is almost certainly some organism whose cells routinely perform that operation. Natural selection has had billions of years in which to explore the possibilities and seldom misses a trick. Molecular biologists are amassing a great catalog of such machines, many of which can be adapted to perform functions in cells other than those in which they are found naturally. This means that often as not if we need a molecular machine to perform a particular function we can order one from this catalog and adapt it to suit our purposes.
A good example of such adaptation comes from the latest generation of devices for sequencing DNA. The Human Genome Project more or less completed the sequencing of the human genome in 2004 after more than a decade of work and a cost of nearly 3 billion dollars. But this was just a single instance of the genome — actually it was a patchwork of pieces of DNA from several individuals, and, while we share a good deal of our individual genetic code, it is the differences between individuals that are likely to provide the clues in finding the causes and cures for many diseases. The race was on to drive down the cost and reduce the time required to days if not minutes.
Oxford Nanopore has developed a technology that inserts a protein channel or pore in a silicon substrate. Single-strand DNA is threaded through this pore, and a sensor reads out the characteristic disruptions in the ionic current as each nucleotide passes through the pore.
Researchers in the field of synthetic biology are now able to build circuits of biomolecules that perform computations, such as logical operations, that are not readily accessible in nature. DNA polymerase is the workhorse for building such circuits. This enzyme, which is found in all cellular organisms as well as in many viruses, synthesizes DNA molecules from their nucleotide building blocks. Circuits using such naturally occurring enzymes are remarkably energy-efficient, but they are quite slow. One of our collaborators has designed a three-bit demodulator circuit that requires over 300 polymerase reactions. Silicon logic is fast, but it is also incredibly energy-inefficient by comparison.
In the case of the nanopore gene sequencing technology, a sample is extracted from the target organism and the sequencing is performed externally. In the case of tracing the connectome using a rabies virus, the experiments are performed in vitro using cells grown in a culture. How might we operate directly on cells in vivo — that is to say, in live animals — without requiring surgery, inserting probes or sacrificing the animal to extract information for analysis?
The blood-brain barrier protects the brain from toxins and pathogens that might disrupt the neural machinery controlling vital processes throughout the body. Unfortunately for those affected by virus-borne brain diseases, nature has figured out how to bypass the barrier; the HIV and rabies viruses mentioned earlier are examples of such pathogens. The silver lining is that we are figuring out how to use these same viral vectors to repair cell damage and deliver drug payloads selectively to targets throughout the body, and the brain in particular. We’re also figuring out ways to foil natural viruses so they can’t cross the blood-brain barrier.
If we can use the arteries and capillaries to distribute and deliver molecular machines, then we might also use the lymph system and the vessels that return blood to the lungs and heart and waste products to the kidneys as a means of conveying information to locations external to the central nervous system where it might be more easily processed, say, using an artificial filtering process akin to dialysis17. This would provide an expedient for reading out neural-state information in lieu of more complicated nanotechnology solutions employing tiny radio transmitters that have been suggested in the literature.
Now we have a means of providing input and extracting output from the brain19. Granted, the input modality we’ve been exploring requires that we infect each cell with a virus and modify its DNA, so it would likely not serve as the input side of a real-time computer interface. On the output side, however, we might have a shot at being able to observe a behaving brain at an unprecedented scale and level of detail. Assuming that our virally-delivered molecular machines don’t interfere with the normal operation of the cells, we could in principle develop technology for reading off states of the brain that would not harm the host and could operate indefinitely20. What sort of information might we want to collect and how would we go about doing so?
Traditionally the focus has been on recording spike trains in the form of changes in the membrane potential of individual neurons. However, the signaling pathways in the brain are subtle and manifold; they include electrical pathways22 in the form of action potentials and voltage-gated ion channels, genetic pathways in the form of DNA transcribed into RNA and proteins expressed and transported within the cell, and chemical pathways in the form of neurotransmitters, which are emitted into the synaptic cleft separating an axon and a dendrite and serve to open ligand-gated ion channels on the dendrite.
Must we record the state of all these components in order to obtain a complete picture? Perhaps, but it may be that the proteomic history — the record of specific proteins expressed and transported across synapses — is sufficient to infer most of what is going on informationally and computationally within the brain. In any case, we are going to assume so for the remainder of this discussion, and make the additional simplifying assumption that the production and transfer of neurotransmitters provide enough information.
Next we need to associate neurotransmitters with their identifying barcodes and convey these barcodes along with the neurotransmitters, making sure that they find their way into the receiving neuron where they can be assembled into packets that describe each event as a triple of barcodes encoding the transmitting neuron, the receiving neuron and the class of neurotransmitter conveyed. Once assembled, these packets would be flushed into the cerebrospinal fluid to be subsequently eliminated from the brain via the lymph and blood circulation system.
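To make the bookkeeping concrete, here is a hypothetical sketch in the same Emacs Lisp idiom as the footnote calculations; the barcode length and helper names are my inventions for illustration, not part of the published proposal:

```
;; A synaptic event as a triple of nucleotide barcodes:
;; (pre-synaptic neuron, post-synaptic neuron, neurotransmitter class).
(defun make-event-packet (pre post nt)
  (list pre post nt))

(make-event-packet "ACGTACGTACGTACGTACGT" "TTGACCGGTTAACCGGTTAA" "GATTACA")

;; Twenty-nucleotide barcodes already distinguish ~10^12 neurons,
;; comfortably more than the ~10^11 in a human brain:
(expt 4.0 20.0) ;; => ~1.1e12
```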
It would seem the authors have in mind sacrificing the animal in the final step, but it may be possible to pass source-sink pairs through the cell membrane and into the lymph-blood system for external harvest as proposed earlier. They suggest that propagation might be accomplished using a trans-synaptic virus such as rabies [41]. Additional techniques would be required to map barcodes to brain areas. The authors claim to be developing an approach based on PhiC31 integrase for joining barcodes and PRV amplicons [42] for trans-synaptic barcode propagation. While quite ambitious with many complicated steps yet to be filled in, a number of neuroscientists, myself included, believe that some variant of this idea could be accomplished in the relatively near-term future, and there are plans afoot [4] to take on even more ambitious goals24.
I’ll leave you with this scene from The President’s Analyst in which James Coburn plays a psychiatrist confronted by the president of the phone company asking the Coburn character to use his influence with the president to help pass legislation requiring a new nanoscale phone technology to be implanted in the brain of every newborn. (You can check out the clip in this YouTube video.)
P.S. I could literally go on for hours, and have in some more intimate classroom situations. I’ve created this annotated transcription of the talk with lots of footnotes providing additional detail and relevant papers for your further reading. I believe wanting to tell everyone about what you’ve discovered is a wonderful child-like characteristic that is also incredibly valuable to both the individual and society. It is a form of public thinking that is under-appreciated and often discouraged in precocious children. I encourage it in everyone I meet and cherish it in my students and colleagues. I thank you for your patience in indulging my habit.
1 This material was presented in a lecture delivered at the University of California Berkeley’s Redwood Center for Computational Neuroscience on July 26, 2013 as part of the Berkeley Summer Course in Mining and Modeling Neuroscience Data. This annotated “transcript” is an extended and updated version of an earlier presentation given at the Helen Wills Neuroscience Institute on April 19, 2013.
2 With the announcement of $100M for the first year of funding, BAM was re-christened BRAIN, for “Brain Research through Advancing Innovative Neurotechnologies”.
3 Deep Thought is a computer that was created by the pan-dimensional, hyper-intelligent species of beings (whose three dimensional protrusions into our universe are ordinary white mice) to come up with the answer to “The Ultimate Question of Life, the Universe, and Everything”.
4 Arthur Dent, having escaped the destruction of earth, which was part of an enormous computational matrix specially designed to answer the ultimate question, is believed to have some portion of this computational matrix in his brain, and so in the second book of the “Hitchhiker” series, entitled The Restaurant at the End of the Universe, he attempts to discover “The Ultimate Question” by extracting it from his brainwave patterns.
5 It helps to get some appreciation of scale by comparing the sizes of objects that we can experience. For example, the height of Mount Everest is 29,029 feet (8,848 meters). We can put that in human perspective by a simple change in the units that we use for our measurements. Mount Everest is about 5,000 times as high as a six foot person — (/ 29029.0 6.0) = 4838.17. The same method of contrast works for smaller objects at the nanoscale. The cell body or soma of a neuron can vary between 4 and 100 microns. A six foot person (1.8288 meters) is roughly 200,000 times the size of a 10-micron neuron cell body — (/ (* 1.8288 (expt 10.0 6.0)) 10.0) = 182880.0.
In dealing with irregularly shaped objects, we use idealizations such as assuming that the sun and earth are spherical. The radius of the earth is around 6,371 kilometers — (defconst earth-radius 6371.0). The radius of the sun is around 696,000 kilometers — (defconst sun-radius 696000.0). The radius of the sun is around 100 times the earth’s radius — (/ 696000.0 6371.0) = 109.25. The volume of the sun is around 1,000,000 times the earth’s volume. If you’re curious about the odd parenthetical expressions, I’m writing these notes using the Emacs editor and using its scripting language — a dialect of Lisp — to perform my simple back-of-the-envelope calculations:
```
(defconst float-pi 3.141592653589793 "The value of Pi.")

(defun volume-of-sphere (radius)
  (* (/ 4.0 3.0) float-pi (expt radius 3.0)))

(defun ratio-of-volumes (radius-1 radius-2)
  (/ (volume-of-sphere radius-1) (volume-of-sphere radius-2)))

(defconst sun-radius 696000.0 "Radius of the sun in kilometers.")
(defconst earth-radius 6371.0 "Radius of the earth in kilometers.")

(ratio-of-volumes sun-radius earth-radius) ;; => 1,303,781.78

(defconst solar-system-radius 5913520000.0 "Radius of the solar system in kilometers.")

(ratio-of-volumes solar-system-radius sun-radius) ;; => 613,352,996,129.95
```
Measuring the size of the solar system requires that we introduce additional assumptions regarding the shape of objects whose boundaries are inconstant. The radius of the solar system is taken to be the average distance between the Sun and Pluto, 5,913,520,000 kilometers — (defconst solar-system-radius 5913520000.0). The radius of the solar system is around 10,000 times the sun’s radius — (/ 5913520000.0 696000.0) = 8496.44. The radius of the solar system is around 1,000,000 times the earth’s radius — (* 8496.44 109.25) = 928236.07.
Comparing the relative sizes of objects at scales below the nanoscale, e.g., electrons and protons, presents new challenges. A proton is not an elementary particle and hence it possesses a physical size, although its spatial envelope varies since the surface of a proton is somewhat fuzzy, being defined by the influence of forces that don’t come to an abrupt end. The proton is about 1.6-1.7 femtometers in diameter. Note that one femtometer is 1.0 × 10⁻¹⁵ meters or 0.001 picometers, and one nanometer is 1,000 picometers or 1,000,000 femtometers.
An electron is an elementary particle and hence it is described in quantum mechanical terms as a wavefunction, which in principle covers all space. There is a measure called the classical electron radius, also known as the Lorentz radius which is roughly the size the electron would need to have for its mass to be completely due to its electrostatic potential energy — not taking quantum mechanics into account.
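Translating that definition into numbers, in the same Emacs Lisp style as the other calculations in these notes (the constants are standard physical values):

```
(defconst elem-charge 1.602176634e-19 "Elementary charge (C).")
(defconst epsilon-0 8.8541878128e-12 "Vacuum permittivity (F/m).")
(defconst electron-mass 9.1093837015e-31 "Electron mass (kg).")
(defconst light-speed 299792458.0 "Speed of light (m/s).")

;; Classical electron radius: r_e = e^2 / (4 pi epsilon_0 m_e c^2).
(/ (* elem-charge elem-charge)
   (* 4.0 float-pi epsilon-0 electron-mass light-speed light-speed))
;; => ~2.82e-15 m, i.e. a few femtometers, the same scale as the proton
```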
If we consider the size of atoms, our measurements are a little easier to describe but still require we deal with some degree of ambiguity at the quantum level. The bond length is the average distance between the nuclei of two bonded atoms, e.g., the bond length between a carbon and a hydrogen atom is around 100 picometers or 0.1 nanometers. The atomic radius is the mean distance from the nucleus to the boundary of the surrounding cloud of electrons, e.g., atomic radius of hydrogen is 25 picometers and that for carbon is 70 picometers.
6 Nature is clever at exploiting different scales in which different physical principles dominate, offering advantages in size, speed or efficiency. Complex, highly evolved biological computing systems like the brain tend to be composed of multiple physically co-mingled, informationally coupled and highly interactive subsystems. The fact that these subsystems rely on different physical principles generally results in models based on different conceptual and mathematical abstractions. Any given observable behavior is likely to involve several of these subsystems and hence there is a need to explain how the relevant subsystems cooperate to produce the behavior of interest — a difficult task if the models associated with these subsystems do not share the same conceptual / mathematical space. The task of bridging between subsystems operating at different scales is made the more difficult given that the data themselves are often incommensurate — for example, biochemistry at the synaptic level and electrical fields at the neural level. Terry Sejnowski has made similar points in his lectures, illustrating the differences in the scales occupied by different neural subsystems.
7 When Feynman discussed assembling nanoscale machines, he would often speak of first building a set of 1/4 scale tools, using them to build 1/16 scale tools, and so on, ultimately constructing millions of entire nanoscale factories. This leads some to think that nanoscale assembly will look like macroscale assembly — tiny machine tools made out of rigid parts, constructing nanoscale products out of materials that behave like the materials we encounter in everyday life. If we were to proceed with this intuition, we would very likely end up being disappointed. Nanoscale fabrication and assembly present new engineering challenges precisely because different physical laws dominate at different scales, but it also offers powerful new opportunities for combinatorial scaling.
Objects at the nanoscale, organic molecules in the case of biological systems, tend to be “flexible”, “sticky” and perpetually “agitated.” “Flexibility” refers to the fact that proteins and other large molecules that comprise biological systems generally have multiple shapes or “conformations”. Even once proteins are folded into a particular conformation, the geometric arrangement of their constituent atoms changes in accord with the attractive or repulsive forces acting between parts of the protein, e.g., Van der Waals force, and interactions with other molecules in their vicinity, e.g., due to the forces involved in making and breaking covalent bonds.
“Stickiness” refers to the fact that these molecules routinely exchange electrons allowing new molecules to be formed from existing molecules by way of chemical reactions catalyzed by enzymes. These molecules have locations — the “sticky” sites — corresponding to molecular bonds where electrons can shift their affinity to create new bonds with other nearby molecules — and hence “stick” together. Finally, “agitated” refers to the fact that the molecules are constantly in motion due to changes in conformation, interaction with other macromolecules, and being struck by smaller fast moving atoms and molecules. The attendant forces cause individual particles to undergo a random walk, with the behavior of the ensemble as a whole referred to as Brownian motion.
In nanoscale engineering, these properties of nanoscale objects can be channeled to create products by self-assembly. The study of soap films provides a relatively simple introduction to the natural processes involved in self-assembly, and there are a number of popular books in the library that detail these same processes at work in biological systems [19, 30]. Physicists like to joke that you don’t study quantum mechanics to understand it — since that is clearly impossible — but only to apply it; the implication, of course, is that a book on quantum theory that doesn’t include a lot of worked-out examples and derivations is of little value. Quantum mechanics is definitely a prerequisite for many nanoscale engineering applications, but it is also necessary to acquire intuitions that enable us to imagine how molecules interact both in pairs and in larger ensembles at these unfamiliar scales. Fortunately, biology provides us with a diverse collection of molecular machines we can study to develop those intuitions.
8 A new tissue preparation technique out of Karl Deisseroth’s lab renders an entire mouse brain essentially transparent [14]. Moreover, the process “preserves the biochemistry of the brain so well that researchers can test it over and over again with chemicals that highlight specific structures within a brain and provide clues to its past activity.” One potential disadvantage of the technique is that it washes out the lipids. The technique makes use of a hydrogel which “forms a kind of mesh that permeates the brain and connects to most of the molecules, but not to the lipids, which include fats and some other substances. The brain is then put in a soapy solution and an electric current is applied, which drives the solution through the brain, washing out the lipids.”
I had assumed that washing out the lipids would make it difficult if not impossible to resolve cell boundaries but the authors report that using mouse brains “we show intact-tissue imaging of long-range projections, local circuit wiring, cellular relationships, subcellular structures, protein complexes, nucleic acids and neurotransmitters.” Moreover their preparation also “enables intact-tissue in situ hybridization, immunohistochemistry with multiple rounds of staining and de-staining in non-sectioned tissue, and antibody labelling throughout the intact adult mouse brain.”
9 With some help from David Cox, I learned a little more about zebrafish, their characteristics pertinent to optical transparency and our ability to image their internal structure. The quick summary is that it is only the skin that is transparent in larval zebrafish; subdermal structures with chromophores produce significant light scattering and still limit effective penetration depth. However, by avoiding scattering in the normally pigmented superficial layers, and owing to the small size of the larval organism, the developing brain is well within the feasible recording depth for 2-photon imaging — which is a useful technique, in part, because it is reasonably tolerant of scattering of collected photons. In general, blood, melanin in the skin, fat and water — in the case of the wavelengths (800 nm to 2500 nm) used in near-infrared spectroscopy — are the tissue components most responsible for absorption.
Within the cell, nuclei and mitochondria are the most significant light scatterers (source). David pointed out that in the case of CLARITY [14], clarified brains also lack blood, which “the animal would be probably unhappy without, even if the neuronal lipids could be made transparent (transparent occelated fish blood notwithstanding).”
In mouse and human systems, the in vivo spatial resolution of the adult animal is limited due to the normal opacification of skin and subdermal structures. The characteristic adult pigmentation pattern of the zebrafish consists of three distinct classes of pigment cells arranged in stripes: black melanophores, reflective iridophores, and yellow xanthophores. Some mutant strains exhibit a complete lack of one or more of these types of pigmentation (source). White et al [50] developed a transgenic strain of zebrafish that largely eliminates all three in adult fish, but does nothing to reduce absorption in subdermal structures.
10 By tuning the average dimensions of the AuNRs to 40 nm by 10 nm, their absorption peak was shifted to 800-nm wavelength. In addition, polyethylene glycol (PEG) was coated on the surface of the AuNRs to increase their biocompatibility and their stealth with respect to the immune system, and consequently their circulation time in the blood stream.
11 A tunable laser system provided laser pulses with 10-Hz pulse-repetition frequency (PRF), 6.5-ns pulse width, and 800-nm wavelength. In addition to avoiding strong interference from blood, 800 nm is an isosbestic point in the absorption spectrum of hemoglobin, so we can ignore the effects of blood oxygenation on photoacoustic-microscopy measurements. The laser light was aligned to be confocal with a 25-MHz focused ultrasonic transducer (-6 dB fractional bandwidth: 55%, focal length: 13 mm, v324, Olympus) at 3 mm under the surface of the rat brains.
12 Note that the speed of sound — approximately 340 m/s in air and 1500 m/s in water — is considerably slower than that of an electromagnetic signal — exactly 299,792,458 m/s in a vacuum and close enough to that for our purposes in most other media. The relatively slow acoustic velocity of ultrasound results in a substantially reduced wavelength when compared to an electromagnetic signal at the same frequency. Compare, for example, a 10 MHz, λ = 150 μm ultrasound signal in water with a 10 MHz, λ = 30 m EM signal. An EM signal of this wavelength would be useless for neural imaging; the tissue would be essentially transparent to the signal, and so penetration depth would be practically unlimited, but there would hardly be any reflected signal, and the size of an antenna necessary to receive such a signal would be prohibitively large — on the order of half the wavelength for an efficient antenna. A comparable EM solution for neural dust [44] would therefore be closer to the 10 GHz frequency provided in the text.
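The wavelengths quoted here all follow from λ = v/f. A quick check in elisp; the relative permittivity used to recover the 5 mm figure at 10 GHz is my assumption about brain tissue, not a number taken from the papers:

```
;; Wavelength in meters for a wave with SPEED (m/s) and frequency FREQ (Hz).
(defun wavelength (speed freq)
  (/ speed freq))

(wavelength 1500.0 10e6)      ;; => 1.5e-4 m = 150 um: ultrasound at 10 MHz in water
(wavelength 299792458.0 10e6) ;; => ~30 m: a 10 MHz EM signal in vacuum

;; At 10 GHz in tissue, assuming a relative permittivity of ~36,
;; the EM wavelength shrinks to c / (f * sqrt(36)) = ~5 mm:
(wavelength (/ 299792458.0 (sqrt 36.0)) 10e9) ;; => ~5.0e-3 m
```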
13 Myelin is a dielectric (electrically insulating) material that forms a layer, the myelin sheath, usually around only the axon of a neuron. It is essential for the proper functioning of the nervous system. It is an outgrowth of a type of glial cell (source)
14 Nucleotides are “biological molecules that form the building blocks of nucleic acids (DNA and RNA) and serve to carry packets of energy within the cell (ATP).” This useful graphic compactly illustrates the structural characteristics of the family of nucleotides, highlighting the phosphate groups that assist in providing the energy required for reactions catalyzed by enzymes.
15 Here’s an excerpt from Oxford Nanopore’s promotional material describing their basic technology:
A nanopore is, essentially, a nano-scale hole. This hole may be:

- Biological: formed by a pore-forming protein in a membrane such as a lipid bilayer
- Solid-state: formed in synthetic materials such as silicon nitride or graphene, or
- Hybrid: formed by a pore-forming protein set in a synthetic material.

Richard A. L. Jones, the author of Soft Machines: Nanotechnology and Life [30], has some interesting observations concerning the gene-sequencing technology being developed by Oxford Nanopore in this article.
This diagram shows a protein nanopore set in an electrically resistant membrane bilayer. An ionic current is passed through the nanopore by setting a voltage across this membrane.
If an analyte passes through the pore or near its aperture, this event creates a characteristic disruption in current. By measuring that current, it is possible to identify the molecule in question. For example, this system can be used to distinguish between the four standard DNA bases G, A, T and C, and also modified bases. It can be used to identify target proteins, small molecules, or to gain rich molecular information, for example to distinguish the enantiomers of ibuprofen or molecular binding dynamics.
16 Such circuits can be used for a variety of purposes including biology-based computing. There is another, direct application of DNA to computing which was developed by Len Adleman and first applied to solving instances of the NP-complete Hamiltonian Path Problem: given a directed graph G, determine if there is a path through G that visits every vertex in G exactly once. Adleman’s original work [1] on so-called DNA computing developed into a new field which has come to be called biocomputing.
In Genesis Machines: The New Science of Biocomputing, Martyn Amos provides an interesting description of Adleman’s work and, in particular, his application of PCR. Adleman’s DNA-based algorithm works by harnessing the massive parallelism of DNA chemistry to generate an enormous number of candidate paths encoded as DNA sequences, amplifying them by PCR, and then searching through these sequences in parallel to determine if there exists a sequence coding for a Hamiltonian path. The Amos account includes a short biography of the eccentric Kary Mullis, who is credited with inventing PCR and received a Nobel prize in chemistry for his accomplishments.
17 An even simpler expedient might involve creating information capsules that would directly marshal the normal filtration capabilities of the kidneys18 to flush the data-laden cargo into the urinary tract where it would be easier to process. In the case of lab animals like mice, a catheter could be used to collect the urine, or, simpler yet, one could use a nonabsorbent bedding material in the animal’s cage with a removable screened collection tray.

18 Here is an excerpt from Ventola [48] discussing how nanoparticles — denoted “NP” in the following — are removed from circulation by the immune system; the abbreviation “RES” denotes the reticuloendothelial system, which is the part of the immune system consisting of phagocytes located in reticular connective tissue and is referred to as the mononuclear phagocyte system in modern medical texts:

NPs are generally cleared from circulation by immune system proteins called opsonins, which activate the immune complement system and mark the NPs for destruction by macrophages and other phagocytes. Neutral NPs are opsonized to a lesser extent than charged particles, and hydrophobic particles are cleared from circulation faster than hydrophilic particles. NPs can therefore be designed to be neutral or conjugated with hydrophilic polymers (such as PEG) to prolong circulation time. The bioavailability of liposomal NPs can also be increased by functionalizing them with a PEG coating in order to avoid uptake by the RES. Liposomes functionalized in this way are called “stealth liposomes.”

NPs are often covered with a PEG coating as a general means of preventing opsonization, reducing RES uptake, enhancing biocompatibility, and/or increasing circulation time. SPIO NPs can also be made water-soluble if they are coated with a hydrophilic polymer (such as PEG or dextran), or they can be made amphophilic or hydrophobic if they are coated with aliphatic surfactants or liposomes to produce magnetoliposomes. Lipid coatings can also improve the biocompatibility of other particles.

Relevant to the elimination of NPs by the kidneys, this paper by Choi et al [13] claims to have “precisely defined the requirements for renal filtration and urinary excretion of inorganic, metal-containing nanoparticles”, and, while somewhat narrowly focused, it provides some useful general information regarding renal filtration.
19 If I had to bet, I’d put my money on quantum mechanics playing a role in the development of practical methods for reading off neural states at scale. In the near-term, we may be able to utilize existing cellular transport machinery to extract neural state, but such a primarily-biological approach won’t offer high-enough temporal or spatial resolution for sophisticated brain-computer interfaces. Quantum mechanical principles such as quantum tunneling are critical in the design of semiconductors, including transistors consisting of single atoms, and technologies based on quantum dots offer efficient approaches for encoding and transporting information locally and are likely to figure in the development of nanoscale communications networks [10].
My high school physics class didn’t cover any quantum theory but I picked up a little in the electrical engineering courses I took in college. (I was a math major and so I didn’t take the full EE curriculum which I now regret.) If you weren’t exposed to quantum theory in high school or college, but know basic classical electromagnetic theory, you might want to at least learn a few quantum mechanical principles so you’ll have some clue when they come up in relation to technologies for neural interfaces.
I suggest trying to get an AP Physics Exam B level of understanding that covers Max Planck’s analysis of black-body radiation, Albert Einstein’s interpretation of the photoelectric effect, and Werner Heisenberg’s uncertainty principle, along with Hermann Weyl’s more formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp, namely σxσp ≥ h/4π where h is Planck’s constant, with equality holding in the special case of Gaussian distributions.
I admit that trying to understand quantum theory is difficult and good intuitions are hard to come by — you might have heard that Einstein, who along with Planck, Heisenberg, and Niels Bohr helped to develop quantum mechanics, was not comfortable with the theory. I’ve had some success suggesting that students look at Michael Fayer’s Absolutely Small: How Quantum Theory Explains our Everyday World [22] for an account that is not only accessible but also reasonably detailed, or his textbook [21] for a more rigorous quantitative treatment. I always feel a little more comfortable with equations when I can translate them into code and perform my own synthetic experiments by playing with the constants. Here’s a Matlab implementation of Planck’s equation for calculating the electromagnetic radiation emitted by a black body in thermal equilibrium at a definite temperature.
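In the same spirit, and keeping to the Emacs Lisp used elsewhere in these notes rather than Matlab, here is a minimal rendering of Planck’s law, B(λ, T) = (2hc²/λ⁵) / (e^(hc/λkT) − 1):

```
(defconst planck-h 6.62607015e-34 "Planck's constant (J s).")
(defconst boltzmann-k 1.380649e-23 "Boltzmann's constant (J/K).")
(defconst light-speed 299792458.0 "Speed of light (m/s).")

;; Black-body spectral radiance (W sr^-1 m^-3) at WAVELENGTH (m)
;; and TEMPERATURE (K).
(defun planck-spectral-radiance (wavelength temperature)
  (/ (* 2.0 planck-h light-speed light-speed)
     (* (expt wavelength 5.0)
        (- (exp (/ (* planck-h light-speed)
                   (* wavelength boltzmann-k temperature)))
           1.0))))

;; Sanity check: the sun (~5800 K) radiates strongly near 500 nm,
;; in the middle of the visible band.
(planck-spectral-radiance 500e-9 5800.0) ;; => ~2.7e13
```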
21 High temperatures damage cells by destroying organic molecules such as proteins, carbohydrates, lipids and nucleic acids:
Alteration of Cell Walls and Membranes:
Cell Walls: Prokaryotic bacteria, eukaryotic plants and some fungi have cells that are surrounded by a protective cell wall, composed mainly of structural carbohydrates, which helps maintain cell shape. Heat can disrupt the bonds within cell walls, making them weak and structurally unsound.
Cell Membranes: Phospholipids are the main component of cell membranes and, in eukaryotic cells, phospholipids create an entire transport system for moving materials into, out of and around within the cell. Phospholipids become more fluid when heat is applied, disrupting the integrity of cellular membranes.
Viral Membranes: Some viruses, those that are considered to be enveloped, are surrounded by phospholipids that they steal from the cells that they parasitize. Enveloped viruses can be rendered harmless when their viral envelope is destroyed, because the virus no longer has the recognition sites necessary to identify and attach to host cells.
Damage to Proteins and Nucleic Acids:
Cellular Proteins: These large three-dimensional molecules are composed of amino acids linked together by peptide bonds. Heat denatures — changes the shape of — proteins, and the 3-D structure of a protein is essential to its function. If a protein’s shape is irreversibly changed, the protein is no longer functional; denaturation by heat is typically irreversible.
Nucleic Acids: Composed of linked nucleotides, nucleic acids such as DNA and RNA contain the code for building protein molecules. Like proteins, nucleic acids are very heat sensitive. High temperatures can result in fatal mutations to DNA or can halt the process of protein synthesis by damaging RNA.
20 There are technical books on in vivo nanoscale communication networks [10] including discussions of the suitability of various network topologies and variant signal-transmission technologies from magnetic resonant coupling [12] to exploiting existing cellular transport and signaling pathways [33, 11]. For example, here’s a nanoscale radio receiver made from a single carbon nanotube, and a discussion of resonant inductive coupling as an efficient method for communicating over short distances.
Whether you encode information in cellular waste products or transmit the information over a local nanoscale communication network, thermodynamics dictates that you will expend energy. You either have to provide this energy locally, making less available to the cell, or you have to transport energy into the cells from an external source. If the signal or power transmission involves electromagnetic radiation, then you have to be careful to avoid damaging the organic components, and you will have to dissipate any waste heat21 since inevitably the operations will not be perfectly efficient.
Here’s a back-of-the-envelope calculation that you might be able to carry out if you know something about cellular and wireless technologies. Suppose you want to place a nanoscale transmitter — ultimately the gamers would like to transmit and receive but scientists are currently most interested in recording what’s going on — either inside or within a few nanometers of nearly every neuron in the primate cortex, that’s about 10 billion give or take an order of magnitude. Each transmitter would have to transmit something on the order of 40K bits per second to capture the information encoded in an action potential. A complete action-potential cycle takes around 4 milliseconds consisting of about 2 milliseconds for polarization and depolarization of the axon cell membrane followed by a refractory period of about 2 milliseconds during which the neuron is unable to fire.
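Carrying the arithmetic through in elisp (the 10-bit, 4 kHz sampling breakdown is my assumption about where the 40K bits/s figure might come from, not a number from the talk):

```
;; One plausible route to ~40K bits/s per neuron: sample the membrane
;; potential at 4 kHz (quarter-millisecond resolution over the ~4 ms
;; spike cycle) with 10-bit samples.
(* 4000 10) ;; => 40000 bits/s

;; Aggregate rate for ~10 billion such transmitters:
(* 1e10 4e4) ;; => 4e14 bits/s, i.e. ~400 terabits per second
```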
Existing biological systems use highly energy-efficient processes. Excerpting from a recent survey article by Mark Leeson [33]: “Recent measurements of reaction energies give values of ~10⁻¹⁹ J for a few hundred molecules for communication. In comparison, for CMOS, the switching energy is related to the capacitance and the square of the supply voltage. Employing 0.18 μm CMOS at 1.2 V, with an oxide thickness of 2 nanometers, the switching energy is ~10⁻¹⁵ J. It therefore seems likely that molecular communication mechanisms will be able to undertake computation functions that dissipate less power than current electrical components. [...] However, information propagation speeds in molecular communication are only in the hundreds of bits per second range because of the diffusion mechanism and the energy limits imposed by the device size.” Leeson goes on to analyze a simple biological coding and transmission scheme based on diffusion. This is only a start and one of the key questions remaining unanswered is the following: “Is it possible to sustain an appropriately high rate of data transmission without ‘cooking’ the brain or otherwise interfering with its normal function?”.
22 In addition to chemical synapses of the sort alluded to in the presentation, there are also electrical synapses. An electrical synapse “is a mechanical and electrically conductive link between two abutting neurons that is formed at a narrow gap between the pre- and postsynaptic neurons known as a gap junction.”
23 The presence of calcium can be used as a marker for neural activity. The method of calcium imaging is used to measure the concentrations of calcium within cells using fluorescent molecules that respond to the binding of Ca2+ ions by changing their fluorescence properties. This technique can partially reconstruct the firing patterns of relatively large populations — thousands — of neurons, but has poor temporal resolution and requires bathing the cells in light that can cause photodamage.
24 As for the even more ambitious goal of reading off the proteomic history of a behaving brain, here are some additional questions and speculations that highlight the challenges:
How do you design a retrovirus delivery vector that only targets neurons, doesn’t replicate, achieves high coverage of the target population, and can be manufactured economically in sufficient quantity?
Could you achieve this so that the encapsulated RNA instructions are identical except for a unique signature used to tag the host neuron for collecting connection attributes?
If not, could you induce the host to generate such a signature exactly once upon first being infected, and provide some guarantee that, with high probability, the self-manufactured signature is unique within a given population of neurons, all of which use the same method for generating their signatures?
How might we introduce the vector into the blood supply so that it avoids rejection by the immune response, circumvents the blood-brain barrier, reliably makes its way to the host, and quietly self-destructs if the host is already infected?
Are there existing options for retroviruses that are effectively benign allowing the host to perform normally after the initial infection and ward off subsequent infections to avoid altering the signature?
How can we control the rate at which the new molecular machinery propagates information packets across synapses? Or perhaps we can tag the neurotransmitters so that they convey the signature of the transmitting neuron?
What are some candidate cellular machines that could be adapted to sense neural states and how might they be modified to carry out sensing operations without interfering with their normal function?
Could you alter a ribosome so that, as a side effect of translation, it also produces a marker for the particular protein — perhaps it could do this some fraction of the time, such that the marked proteins are proportional to total production?
If not, could a completed protein be tagged — perhaps in an epigenetic fashion via methylation, or might it be better to tag and package the mRNA after it has performed its purpose assuming that the process leaves the single-stranded mRNA intact?
Once the data has been recorded and packaged to include the necessary provenance to enable reconstruction of the neural-state information, and is either floating free within the cell membrane or secured to some organelle as a staging area for subsequent post processing, how would you perform any additional protective cloaking and transfer the packaged information outside the cell body and subsequently into the blood-lymph system?